Poll - Is endless September a threat to LW and what should be done?
post by Epiphany · 2012-12-08T23:42:14.949Z · LW · GW · Legacy · 262 comments
After reading my "LessWrong could grow a lot" thread, various people raised concerns that growth might ruin the culture. There has been some discussion about whether endless September, a phenomenon that kills online discussion groups, is a significant threat to LessWrong and what can be done about it. I care enough about this that I volunteered to code a solution myself, for free, if needed. Luke invited debate on the subject (the debate is here) and will be sent the results of this poll and asked to make a decision. He suggested in an email that I wait a little while before posting my poll (meta threads are apparently annoying to some, so we let people cool off). Here it is, preceded by a Cliff's Notes summary of the concerns.
Why this is worth your consideration:
- Yvain and I checked the IQ figures in the survey against other data this time, and the good news is that it's now more believable that the average LessWronger is gifted. The bad news is that LessWrong's average IQ has decreased on each survey. It can be argued that it isn't decreasing by much, or that we don't have enough data, but if the data is good, LessWrong's average has lost 52% of its giftedness since March of 2009.
- Eliezer documented the arrival of poseurs (people who superficially copy cultural behaviors and are reported to overrun subcultures), whom he termed "Undiscriminating Skeptics".
- Efforts to grow LessWrong could trigger an overwhelming deluge of newbies.
- LessWrong registrations have been increasing fast and it's possible that growth could outstrip acculturation capacity. (Chart here)
- The Singularity Summit appears to cause a deluge of new users that may have a similar effect to the September deluges of college freshmen that endless September is named after. (This chart shows a spike correlated with the 2011 summit, when 921 users joined in that month - roughly equal to the total number of active users LW tends to have in a month, going by the surveys or Vladimir's wget.)
- A Slashdot effect could result in a tsunami of new users if a publication with lots of readers like the Wall Street Journal (they used LessWrong data in this article) decides to write an article on LessWrong.
- The sequences contain a lot of the culture and are long, meaning that "TLDR" may make LessWrong vulnerable to cultural disintegration. (New users may not know how detailed LW culture is or that the sequences contain so much of it. I didn't.)
- Eliezer said in August that the site was "seriously going to hell" due to trolls.
- A lot of people raised concerns.
Two Theories on How Online Cultures Die:
Overwhelming user influx.
There are too many new users to be acculturated by older members, so they form their own, larger new culture and dominate the group.
Trending toward the mean.
A group forms because people who are very different want a place to be different together. The group attracts more people who are closer to the mainstream than people who are equally different, because there are more mainstream people than different people. The larger group attracts people who are even less different in the original group's way, for similar reasons. The original group is slowly overwhelmed by people who will never understand it, because they are too different from the original members.
Poll Link:
Request for Feedback:
In addition to constructive criticism, I'd also like the following:
- Your observations of a decline or increase in quality, culture or enjoyment at LessWrong, if any.
- Ideas to protect the culture.
- Ideas for tracking cultural erosion.
- Ways to test the ideas to protect the culture.
262 comments
Comments sorted by top scores.
comment by Alicorn · 2012-12-09T00:20:22.002Z · LW(p) · GW(p)
So far, I've been more annoyed on LessWrong by people reacting to fear of "cultural erosion" than by any extant symptoms of same.
Replies from: Vaniver, Viliam_Bur, pleeppleep, Kindly
↑ comment by Vaniver · 2012-12-09T21:03:54.440Z · LW(p) · GW(p)
The fear is that this is due to a selection effect. Of the people I know through LW, a disappointing number have stopped reading the site. One of my hobbies, for over a decade now, has been posting on forums, and so the only way I'd stop reading / posting on LW is if I find a forum more relevant to my interests. (For the curious, I've moved from 3rd edition D&D to xkcd to here over that timeframe, and only post in xkcd's MLP and gaming threads these days.) For many of the former LWers I know, forum-posting isn't one of their hobbies, and they came here for the excellent content, primarily by EY. Now that there aren't blog posts that they want to read frequently enough, they don't come, and I'm not sure that any of them even know that EY has started posting a new sequence.
I think that this fear is mostly misplaced, because the people in that class generally aren't the people posting the good content, and I think any attempt to improve LW should be along the lines of "more visible good content" and not "less bad content," but it's important for evaporative cooling reasons to periodically assess the state of content on LW.
Replies from: David_Gerard, Epiphany, Viliam_Bur
↑ comment by David_Gerard · 2012-12-10T00:30:30.767Z · LW(p) · GW(p)
Not only do communities have a life cycle; so does people's membership in them. People give all sorts of reasons for leaving a community (e.g. boredom, other interests, deciding the community is full of assholes, an incident they write over one megabyte of text complaining about), but the length of participation is typically 12 to 18 months regardless. Anything over that and you're a previous generation.
So I wouldn't be disappointed unless they stopped before 12-18 months.
↑ comment by Epiphany · 2012-12-09T23:14:32.878Z · LW(p) · GW(p)
I wonder if it would make a big difference to add email notifications... perhaps the type where you only receive the notification when something over X number of karma is posted?
That would keep users from forgetting about the site entirely. And draw more attention and karma (aka positive reinforcement) to those who post quality things.
Hmm that would also keep older users logging in, which would help combat both trending toward the mean and new users outstripping acculturation capacity.
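Very roughly, here is a minimal sketch of the threshold-filtered digest I'm imagining (the Post fields, the build_digest helper, and the example data are hypothetical illustrations, not anything from LW's actual codebase):

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    title: str
    url: str
    karma: int

def posts_above_threshold(posts: List[Post], threshold: int) -> List[Post]:
    # Keep only the posts whose karma meets the user's chosen threshold.
    return [p for p in posts if p.karma >= threshold]

def build_digest(email: str, posts: List[Post], threshold: int) -> str:
    picks = posts_above_threshold(posts, threshold)
    if not picks:
        return ""  # nothing above threshold, so don't nag users who have drifted away
    lines = [f"{p.title} ({p.karma} karma): {p.url}" for p in picks]
    return f"To: {email}\nSubject: LW posts above {threshold} karma\n\n" + "\n".join(lines)

print(build_digest("user@example.com",
                   [Post("A good post", "http://lesswrong.com/lw/xyz", 42),
                    Post("A meh post", "http://lesswrong.com/lw/abc", 3)],
                   threshold=20))

The actual delivery mechanism (a scheduled job, the mail server, per-user thresholds stored in preferences) would be a separate question; this just shows how little logic the filtering itself needs.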
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2012-12-10T14:00:50.783Z · LW(p) · GW(p)
I think that would bring back only the most marginally interested users, and would be likely to annoy a good many people who'd drifted away.
Notification of posts with karma above a chosen threshold might be better.
For that matter, a customizable LW on which you could choose to see only posts with karma above a threshold might be good. It would be even better if posts could also be selected/deselected by subject, but that sounds like a hard problem.
↑ comment by Viliam_Bur · 2012-12-09T22:35:39.023Z · LW(p) · GW(p)
any attempt to improve LW should be along the lines of "more visible good content" and not "less bad content,"
Why not both?
Speaking for myself, a lot of bad content would make me less likely to post good content. My instincts tell me -- if other people don't bother here with quality, why should I?
Replies from: Vaniver
↑ comment by Vaniver · 2012-12-09T23:28:15.908Z · LW(p) · GW(p)
Why not both?
I separate those because I think the second is a distraction. It seems to me that the primary, and perhaps only, benefit from reducing bad content is increasing the visibility of good content.
Speaking for myself, a lot of bad content would make me less likely to post good content. My instincts tell me -- if other people don't bother here with quality, why should I?
It still seems like there are incentives - better posts will yield more karma - and I suspect it matters who the other people who don't bother are. Right now, we have spammers (particularly on the wiki) who don't bother at all with being helpful. Does that make you more likely to post commercial links on the wiki? If everyone you thought was more insightful than you stopped bothering to write posts and comments, then it seems likely that you would wonder what the benefit of putting more effort into LW was. More high-quality posts seem useful as an aspirational incentive.
↑ comment by Viliam_Bur · 2012-12-09T22:40:54.743Z · LW(p) · GW(p)
I am annoyed by both. Not enough to consider leaving this weedy garden yet.
↑ comment by pleeppleep · 2012-12-09T14:28:09.597Z · LW(p) · GW(p)
The possibility still warrants consideration, even if it isn't actively harming the site.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2012-12-09T19:36:20.913Z · LW(p) · GW(p)
I think the idea is that considering the possibility is actively harming the site.
Which would be .. problematic.
Replies from: pleeppleep
↑ comment by pleeppleep · 2012-12-09T20:59:31.602Z · LW(p) · GW(p)
If we've gotten to the point where we refuse to think because we're afraid of where it'll lead, then this place really is dead.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-12-10T13:57:29.423Z · LW(p) · GW(p)
That granted, there are conversations it's not worth having.
Replies from: pleeppleep
↑ comment by pleeppleep · 2012-12-10T15:50:08.890Z · LW(p) · GW(p)
probably, but this isn't one of them.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-12-10T17:06:11.958Z · LW(p) · GW(p)
I suspect you are correct, and if it doesn't go very very badly indeed, I am very confident of it.
comment by printing-spoon · 2012-12-09T00:14:05.845Z · LW(p) · GW(p)
I think this site is dying because there's nothing interesting to talk about anymore. Discussion is filled with META, MEETUP, SEQ RERUN, links to boring barely-relevant articles, and idea threads where the highest comment has more votes than the thread itself (i.e. a crappy idea). Main is not much better. Go to archive.org and compare (date chosen randomly, aside from being a while ago). I don't think eternal september is the whole explanation here -- you only need 1 good user to write a good article.
Replies from: Viliam_Bur, palladias, NancyLebovitz, Epiphany
↑ comment by Viliam_Bur · 2012-12-09T22:23:28.924Z · LW(p) · GW(p)
Discussion is filled with META, MEETUP, SEQ RERUN, links to boring barely-relevant articles
The website structure needs to be changed. "Main" and "Discussion" simply do not reflect the LW content today.
We should have a separate "Forum" (or some other name) category for all the non-article discussion threads like Open Thread, Media Thread, Group Rationality Thread, and stuff like this.
Then, the "Discussion" should be renamed to "Articles" (and possibly "Main" to "Main Articles") to make it obvious what belongs there.
Everything else should be downvoted, optionally with a comment: "This belongs in the Open Thread". (And if the author says they didn't know that the Open Thread exists, there is something seriously wrong... about the structure of the website.)
I feel like I have written this in LW discussions at least a dozen times...
there's nothing interesting to talk about anymore.
I think there are interesting things here. They are just drowned in too many less interesting things.
Let's look at the numbers: 6 articles so far on Dec 9th; 6 articles on Dec 8th; 4 articles on Dec 7th; 11 articles on Dec 6th; 8 articles on Dec 5th; and some of the articles from Dec 4th -- less than one week ago -- already don't fit on the first "Discussion" page. (The exact numbers may differ depending on your Preferences settings.) The page is scrolling insanely fast. If I stopped reading LW for one week, I would have a hard time catching up with all the new stuff; I would probably just skip some of it. It's not good that we have so much stuff but such low average quality.
We don't downvote enough. Let me explain -- if someone makes a post that is not very good, but is not completely stupid or trolling either, it will almost certainly gain more upvotes than downvotes, because it feels wrong to punish someone only for being uninteresting. But in terms of rewarding/punishing behavior, we probably should punish them. If we try to be too friendly, the site will become boring, precisely because most social talk is not about giving new information.
Perhaps it would help to use some timeless deciding. When you read an article, ask yourself: "If there were 10 new articles like this here next week, would that make LW better or worse?" If the answer is worse, downvote it. Because although the same author will not write 10 more articles like this during the next week, other authors will.
TL;DR -- I think the Eternal September is most visible on the article level, because it is not obvious what kind of content belongs here. "Discussion" is horribly misleading -- we don't want discussion-level articles. That's what the comments and Open Threads are for.
↑ comment by palladias · 2012-12-09T01:59:41.660Z · LW(p) · GW(p)
One issue with the LW/CFAR approach is that the focus is on getting better/more efficient at pursuing your goals, but not on deciding whether you're applying your newfound superpowers to the right goals. (There's a bit of this with efficient altruism, but those giving opportunities are more about moving people up Maslow's hierarchy of needs, not about figuring out what to want when you're not at subsistence level.)
Luke's recent post suggests that almost no one here has the prereqs to tackle metaphysics or normative ethics, but that has always seemed like the obvious next topic for rationality-minded people. I was glad when Luke was writing his Desirism sequence back at CSA, but it never got to the point where I had a decent enough model of what normative claims desirism made to be able to evaluate it.
Basically, I think these topics would let us set our sights a little higher than "Help me optimize my computer use" but I think one major hurdle is that it's hard to tackle these topics in individual posts, and people may feel intimidated about starting sequences.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-12-10T03:35:25.220Z · LW(p) · GW(p)
The problem is that there is an unfortunate tendency here, going all the way up to EY, to dismiss philosophy and metaphysics.
↑ comment by NancyLebovitz · 2012-12-10T13:54:23.980Z · LW(p) · GW(p)
idea threads where the highest comment has more votes than the thread itself (i.e. a crappy idea)
It depends -- if the higher-voted comments are expanding on the original post, then I'd say the post was successful because it evoked good-quality thought, assuming that the voters have good judgement. If the higher-voted comments are refuting the original post, then it was probably a bad post.
↑ comment by Epiphany · 2012-12-09T00:39:42.363Z · LW(p) · GW(p)
Do you have a theory as to why there aren't enough good users, or why they are not writing good articles?
Replies from: John_Maxwell_IV, None, printing-spoon
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-09T07:18:56.576Z · LW(p) · GW(p)
One possibility is that the kind of content printing-spoon likes is easy to get wrong, and therefore easy to get voted down for, and therefore the system is set up with the wrong incentives (for the kind of content printing-spoon likes). I'd guess that for most users, the possibility of getting voted down is much more salient than the possibility of getting voted up. Getting voted down represents a form of semi-public humiliation (it's not like reddit, where if you post something lame it gets downvoted and consequently becomes obscure).
The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go.
See this thread for more: http://lesswrong.com/lw/5pf/what_were_losing/
Overall, I suspect that LW could stand to rely less on downvoting in general as a means of influencing user behavior. It seems like meta threads of this type often go something like "there's content X I hate, content Y I hate, and practically no content at all, really!" Well if you want more content, don't disparage the people writing content! It may make sense to moderate voting behavior based on how much new stuff is being posted--if hardly any new stuff is being posted, be more willing to upvote. If there's lots of stuff competing for attention, vote down lamer stuff so the good stuff gets the recognition it deserves.
I think we could stand to see high-karma LWers who rarely post in Main/Discussion post there more. Maybe make it impossible for anyone with over X karma to get voted below 0 in a Main or Discussion post. Or make a new subforum where high-karma users can post free of moderation. (I'll admit, the oligarchical aspect of this appeals to me.)
Also, maybe be realistic about the fact that most people are not going to be willing to go to lukeprog/gwern lengths to dig up papers related to their posts, and figure out the best way to live with that.
Replies from: Nominull
↑ comment by printing-spoon · 2012-12-09T04:53:49.993Z · LW(p) · GW(p)
I'm not sure... I think the topics I find most interesting are simply used up (except for a few open questions on TDT or whatever). There's also the recent focus on applied rationality / advice / CFAR stuff... this is a subject which seems to invite high numbers of low-quality posts. In particular, posts containing advice are generally stuffed with obvious generalizations and lack arguments or evidence beyond a simple anecdote.
Also, maybe the regular presence of EY's sequences provided a standard for quality and topic that ensured other people's posts were decent (I don't think many people read seq reruns, especially not old users who are more likely to have good ideas).
comment by metatroll · 2012-12-09T01:40:06.725Z · LW(p) · GW(p)
The Popular Struggle Committee for Salvation of Less Wrong calls for the immediate implementation of the following measures:
1) Suspension of HPMOR posting until the site has been purged. All new users who join during the period of transition will be considered trolls until proven otherwise. Epiphany to be appointed Minister of Acculturation.
2) A comprehensive ban on meta-discussion. Articles and comments in violation of the ban will be flagged as "meta" by the moderators, and replying to them will incur a "meta toll" of -5 karma. A similar "lol toll" shall apply to jokes that aren't funny.
3) All meetups for the next six months to consist of sixty minutes of ideological self-criticism and thirty minutes of weapons training.
Replies from: gwern
↑ comment by gwern · 2012-12-09T03:04:46.927Z · LW(p) · GW(p)
I second these motions... with a vengeance. For is it not said:
The revolutionary war is a war of the masses; it can be waged only by mobilizing the masses and relying on them.
and
Liberalism is extremely harmful in a revolutionary collective. It is a corrosive which eats away unity, undermines cohesion, causes apathy and creates dissension. It robs the revolutionary ranks of compact organization and strict discipline, prevents policies from being carried through and alienates the Party organizations from the masses which the Party leads. It is an extremely bad tendency.
and especially:
To criticize the people's shortcomings is necessary, . . . but in doing so we must truly take the stand of the people and speak out of whole-hearted eagerness to protect and educate them.
comment by Nominull · 2012-12-09T00:32:37.158Z · LW(p) · GW(p)
The destruction of LW culture has already happened. The trigger was EY leaving, and people without EY's philosophical insight stepping in to fill the void by chatting about their unconventional romantic lives, their lifehacks, and their rational approach to toothpaste. If anything, I see things having gotten somewhat better recently, with EY having semi-returned, and with the rise of the hypercontrarian archconservative clique, which might be wrong about everything but at least they want to talk about it and not toothpaste.
Replies from: John_Maxwell_IV, None, None, metatroll
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-09T06:35:29.791Z · LW(p) · GW(p)
A related request: there are a lot of goals common enough that better achieving these goals should be of interest to a large-ish portion of LW. I'm thinking here of: happiness; income; health; avoiding auto accidents; reading more effectively; building better relationships with friends, family, dating partners, or co-workers; operationalizing one's goals to better track progress; more easily shedding old habits and gaining new ones.
Could we use our combined knowledge base, and our ability to actually value empirical data and consider counter-evidence and so on, to find and share some of the better known strategies for achieving these goals? (Strategies that have already been published or empirically validated, but that many of us probably haven’t heard?) We probably don’t want to have loads and loads of specific-goaled articles or links, because we don’t want to look like just any old random internet self-help site. But a medium amount of high-quality research, backed by statistics, with the LW-community’s help noticing the flaws or counter-arguments -- this sounds useful to me. Really useful. Much of the advantage of rationality comes from, like, actually using that rationality to sort through what’s known and to find and implement existing best practices. And truth being singular, there’s no reason we should each have to repeat this research separately, at least for the goals many of us share.
Anna Salamon, 2009. So this "destruction" was at least semi-planned.
Replies from: Epiphany
↑ comment by Epiphany · 2012-12-11T02:24:48.372Z · LW(p) · GW(p)
I read that twice, and went to the post you linked to, and am still not seeing why it supports the idea:
this "destruction" was at least semi-planned.
Maybe you are viewing optimization related posts as a form of cultural collapse?
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-11T03:20:39.559Z · LW(p) · GW(p)
Nominull seemed to be. I was patterning my use of "destruction" after theirs. I don't see it as destruction myself.
↑ comment by [deleted] · 2012-12-09T04:59:05.396Z · LW(p) · GW(p)
hypercontrarian archconservative clique
lulz. Why do I feel identity-feels for that phrase? I should watch out for that, but,
which might be wrong about everything
That's what I thought a few months ago. Then everything turned inside out and I realized there is no god. What a feeling! Now I see people confidently rationalizing the cultural default, and realize how far we have to go WRT epistemic rationality.
↑ comment by metatroll · 2012-12-15T10:01:27.564Z · LW(p) · GW(p)
tl;dr: The following is a non-profit fan-based parody. Less Wrong, the Singularity Institute, and the Centre for Applied Rationality are owned by Hogwarts School, Chancellor Ray Kurzweil, and the Bayesian Conspiracy. Please support the official release.
Troll Wrongosphers with Baumeister and Eddington, not Benedict and Evola
Wrongosophical trolling should be based on genuinely superior psychological insights ("Baumeister" for breakthroughs in social psychology such as those summarized in Vohs & Baumeister 2010) and on crackpot science that is nonetheless difficult to debunk ("Eddington" for the fundamental theory described in Durham 2006). Starting from reaction and religion, as many trolls still do, both (1) promotes unpleasant ideas like God and conservatism and (2) fails to connect with the pragmatic and progressive sensibility of 21st-century culture. Once young trollosophers are equipped with some of the best newthink and pseudoscience, then let them dominate the subversive paradigm. I'll bet they get farther than the other kind.
comment by [deleted] · 2012-12-09T05:52:02.227Z · LW(p) · GW(p)
Here's two things we desperately need:
An authoritative textbook-style index/survey-article on everything in LW. We have been generating lots of really cool intellectual work, but without a prominently placed, complete, hierarchical, and well-updated overview of "here's the state of what we know", we aren't accumulating knowledge. This is a big project and I don't know how I could make it happen, besides pushing the idea, which is famously ineffective.
LW needs a king. This idea is bound to be unpopular, but how awesome would it be to have someone whose paid job it was to make LW into an awesome and effective community? I imagine things like getting proper studies done of how site layout/design should work to make LW easy to use and sticky to the right kind of people (it currently sucks), contacting, coordinating, and encouraging meetup organizers individually (no one does this right now and lw-organizers has little activity), thinking seriously and strategically about problems like the one in the OP, and leading big projects like idea #1. Obviously this person would have CEO-level authority.
One problem is that our really high-power agent types who are super dedicated to the community (i.e. lukeprog) get siphoned off into SI. We need another lukeprog or someone to be king of LW and deal with this kind of stuff.
Without a person in this king role, the community has to waste time and effort making community-meta threads like these. Communities and democratic methods suck at doing the kind of strategic, centralized, coherent decision making that we really need. Managing these problems really isn't the community's comparative advantage. If these problems were dealt with, it would be a lot easier to focus on intellectual productivity.
Replies from: prase, army1987, Eugine_Nier, Epiphany
↑ comment by prase · 2012-12-09T17:30:43.509Z · LW(p) · GW(p)
LW needs a king.
LW as a place to test applied moldbuggery, right?
Communities and democratic methods suck at doing the kind of strategic, centralized, coherent decision making that we really need.
Kings also suck at it, on average. Of course, if we are lucky and find a good king... the only problem is that king selection is the kind of strategic decision humans suck at.
Replies from: None
↑ comment by [deleted] · 2012-12-09T18:12:20.333Z · LW(p) · GW(p)
They should be self-selected; then we don't have to rely on the community at large.
There's this wonderful idea called "Do-ocracy" where everyone understands that the people actually willing to do things get all the say as to what gets done. This is where benevolent dictators like Linus Torvalds get their power.
Our democratic training has taught us to think this idea is a recipe for totalitarian disaster. The thing is, even if the democratic memplex were right in its injunction against authority, a country and an internet community are entirely different situations.
In a country, if you have king-power, you have military and law power as well, and can physically coerce people to do what you want. There is enough money and power at stake that most of the people who want the job are in it for the money and power, not the public good. Thus measures like heritable power (at least you're not selecting for power-hunger) and democracy (now we're theoretically selecting for public support).
On the other hand, in a small artificial community like a meetup, a hackerspace, or LessWrong, there is no military to control, the banhammer is much less powerful than the noose or dungeon, and there is barely anything to gain by embezzling taxes (as a meetup organizer, I could embezzle about $30 a month...). At worst, a corrupt monarch could ban all the good people and destroy the community, but the incentive to do damage to the community is roughly "for the lulz". Lulz is much cheaper elsewhere. The amount of damage is highly limited by the fact that, in the absence of military power, the do-ocrat's power over people is derived from respect, which would rapidly fall off if they did dumb things. On the other hand, scope insensitivity makes the apparent do-gooder motivation just as high. So in a community like this, most of the people willing to do the job will be those motivated to do public good and those agenty enough to do it, so self-selection (do-ocracy) works and we don't need other measures.
Replies from: prase, Nominull
↑ comment by prase · 2012-12-12T15:56:41.193Z · LW(p) · GW(p)
There's this wonderful idea called "Do-ocracy" where everyone understands that the people actually willing to do things get all the say as to what gets done. ... Our democratic training has taught us to think this idea is a recipe for totalitarian disaster.
I can't speak for your democratic training, but my democratic training has absolutely no problem with acknowledging merits and giving active people trust proportional to their achievements and letting them decide what more should be done.
It has become somewhat fashionable here, in the Moldbuggian vein, to blame community failures on democracy. But what particular democratic mechanisms have caused the lack of strategic decisions on LW? Which kinds of decisions? I don't see much democracy here - I don't recall participating in an election, for example, or voting on a proposed policy, or seeing a heated political debate which prevented a beneficial resolution from being implemented. I recall the recent implementation of the karma penalty feature, which a lot of LWers were unhappy about but which was put in force nevertheless in a quite autocratic manner. So perhaps the lack of strategic decisions is caused by the fact that
- there just aren't people willing to even propose what should be done
- nobody has any reasonable idea what strategic decision should be made (it is one thing to say what kind of decisions should be made - e.g. "we should choose an efficient site design", but a rather different thing to make the decision in detail - e.g. "the front page should have a huge violet picture of a pony on it")
- people aren't willing to work for free
None of those has much to do with democracy. I am pretty sure that if you volunteered to work on any of your suggestions (contacting meetup organisers, improving the site design...), nobody would seriously object and you would easily get some official status on LW (moderator style). To do anything from the examples you have mentioned, you wouldn't need dictatorial powers.
↑ comment by Nominull · 2012-12-09T18:30:49.094Z · LW(p) · GW(p)
The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it's only because an online community is generally less important to people's lives than society at large.
A bad king can absolutely destroy an online community. Banning all the good people is actually one of the better things a bad king can do, because it can spark an organized exodus, which is just inconvenient. But by adding restrictions and terrorizing the community with the threat of bans, a bad king can make the good people self-deport. And then the community can't be revived elsewhere.
Replies from: None, Vaniver
↑ comment by [deleted] · 2012-12-09T20:02:24.471Z · LW(p) · GW(p)
At worst, a corrupt monarch could ... destroy the community, but the incentive to do damage to the community is roughly "for the lulz". Lulz is much cheaper elsewhere.
I admit, I have seen braindead moderators tear a community apart (/r/anarchism for one).
I have just as often seen lack of moderation prevent a community from becoming what it could. (4chan (though I'm unsure whether 4chan is glorious or a cesspool))
And I have seen strong moderation keep a community together.
The thing is, death by incompetent dictator is much more salient to our imaginations than death by slow entropy and september-effects. Incompetent dictators have a face, which makes us take them much more seriously than an unbiased assessment of the threats would warrant.
↑ comment by Vaniver · 2012-12-09T21:27:46.510Z · LW(p) · GW(p)
The power of the banhammer is roughly proportional to the power of the dungeon. If it seems less threatening, it's only because an online community is generally less important to people's lives than society at large.
There's a big difference between exile and prison, and the power of exile depends on the desirability of the place in question.
↑ comment by A1987dM (army1987) · 2012-12-09T14:13:52.936Z · LW(p) · GW(p)
LW needs a king.
Why “king” rather than “monarch”? Couldn't a queen do that?
Replies from: faul_sname, J_Taylor, Luke_A_Somers, None, pleeppleep, DanArmak
↑ comment by faul_sname · 2012-12-09T23:12:17.300Z · LW(p) · GW(p)
Yes, and a queen could move more than one space in a turn, too.
↑ comment by Luke_A_Somers · 2012-12-10T13:55:49.820Z · LW(p) · GW(p)
Maybe "Princess" would be best, considering everything.
Replies from: None
↑ comment by [deleted] · 2012-12-10T14:08:42.918Z · LW(p) · GW(p)
Hmmm... no. It definitely has to be a word that implies current authority, not future authority.
Replies from: Luke_A_Somers, army1987
↑ comment by Luke_A_Somers · 2012-12-10T14:39:27.257Z · LW(p) · GW(p)
There is a particular princess in the local memespace with nigh-absolute current authority.
edited to clarify: by 'local memespace' I mean the part of the global memespace that is in use locally, not that there's something we have going that isn't known more broadly
Replies from: None
↑ comment by [deleted] · 2012-12-10T14:50:15.134Z · LW(p) · GW(p)
I am getting this "whoosh" feeling but I still can't see it.
Replies from: Luke_A_Somers, Kindly, Zack_M_Davis
↑ comment by Luke_A_Somers · 2012-12-10T17:03:04.266Z · LW(p) · GW(p)
If you image-search 'obey princess', you will get a hint. Note, the result is... an alicorn.
But more seriously (still not all that seriously), there would be colossal PR and communication disadvantages to naming a king that would be mostly dodged by naming a princess.
In particular, people would probably overinterpret 'king' but file 'princess' under 'wacky'. This would not merely dodge the problem, but could help against the 'cold and calculating' vibe some people get.
↑ comment by Kindly · 2012-12-10T18:43:58.197Z · LW(p) · GW(p)
Luke_A_Somers is referring to Princess Dumbledore, from Harry Potter and the Methods of Rationality, chapter 86.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-12-10T21:39:19.057Z · LW(p) · GW(p)
I'd love to read that chapter!
↑ comment by Zack_M_Davis · 2012-12-10T18:06:08.671Z · LW(p) · GW(p)
(Almost certainly a reference to the animated series My Little Pony: Friendship Is Magic, in which Princess Celestia rules the land of Equestria.)
↑ comment by A1987dM (army1987) · 2012-12-10T14:43:12.937Z · LW(p) · GW(p)
Let's just say BDFL (Benevolent Dictator For Life)...
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-12-10T18:30:00.176Z · LW(p) · GW(p)
Insufficiently wacky - would invite accusations of authoritarianism/absolutism from the clue impaired.
↑ comment by pleeppleep · 2012-12-09T14:19:33.591Z · LW(p) · GW(p)
Now you're just talking crazy.
↑ comment by Eugine_Nier · 2012-12-09T18:46:04.806Z · LW(p) · GW(p)
LW needs a king.
The standard term is Benevolent Dictator for Life, and we already have one. What you're asking for strikes me as more of a governor-general.
Replies from: None
↑ comment by [deleted] · 2012-12-09T19:49:41.749Z · LW(p) · GW(p)
Our benevolent dictator isn't doing much dictatoring. If I understand correctly that it's EY, he has a lot more hats to wear, and doesn't have the time to do LW-managing full time.
Is he willing to improve LW, but not able? Then he is not a dictator.
Is he able, but not willing? Then he is not benevolent.
Is he both willing and able? Then whence cometh suck?
Is he neither willing nor able? Then why call him God?
As with god, if we observe a lack of leadership, it is irrelevant whether we nominally have a god-emperor or not. The solution is always the same: build a new one that will actually do the job we want done.
Replies from: Eliezer_Yudkowsky, mrglwrf
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T06:50:12.106Z · LW(p) · GW(p)
Okay, that? That was one of the most awesome predicates of which I've ever been a subject.
Replies from: Epiphany
↑ comment by Epiphany · 2012-12-10T20:53:59.522Z · LW(p) · GW(p)
You're defending yourself against accusations of being a phyg leader over there, and over here you're enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god? And not only that, but this might even imply that you endorse the solution that is "always the same" of "building a new one (god-emperor)".
Have you forgotten Luke's efforts to fight the perceptions of SI's arrogance?
That you appear to be encouraging a comment that uses the word god to refer to you in any way, directly or indirectly, is pretty disheartening.
Replies from: Eliezer_Yudkowsky, None, bogus
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T21:06:28.836Z · LW(p) · GW(p)
I tend to see a fairly sharp distinction between negative aspects of phyg-leadership and the parts that seem like harmless fun, like having my own volcano island with a huge medieval castle, and sitting on a throne wearing a cape saying in dark tones, "IT IS NOT FOR YOU TO QUESTION MY FUN, MORTAL." Ceteris paribus, I'd prefer that working environment if offered.
Replies from: Epiphany
↑ comment by Epiphany · 2012-12-11T01:56:14.846Z · LW(p) · GW(p)
And how are people supposed to make the distinction between your fun and signs of pathological narcissism? You and I both know the world is full of irrationality, and that this place is public. You've endured the ravages of the hatchet job and Rationalwiki's annoying behaviors. This comment could easily be interpreted by them as evidence that you really do fancy yourself a false prophet.
What's more is that I (as in someone who is not a heartless and self-interested reporter, who thinks you're brilliant, who appreciates you, who is not some completely confused person with no serious interest in rationality) am now thinking:
How do I make the distinction between a guy who has an "arrogance problem" and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
Replies from: fubarobfusco, ChristianKl, wedrifid
↑ comment by fubarobfusco · 2012-12-11T06:30:18.555Z · LW(p) · GW(p)
Try working in system administration for a while. Some people will think you are a god; some people will think you are a naughty child who wants to be seen as a god; and some people will think you are a sweeper. Mostly you will feel like a sweeper ... except occasionally when you save the world from sin, death, and hell.
Replies from: Epiphany
↑ comment by Epiphany · 2012-12-11T09:30:14.406Z · LW(p) · GW(p)
I feel the same way as a web developer. One day I'm being told I'm a genius for suggesting that a technical problem might be solved by changing a port number. The next day, I'm writing a script to compensate for the incompetent failures of a certain vendor.
When people ask me for help, they assume I can fix anything. When they give me a project, they assume they know better how to do it.
↑ comment by ChristianKl · 2012-12-11T14:04:04.398Z · LW(p) · GW(p)
The only way to decide whether someone has a serious issue is to read a bunch of what they write and then see which patterns you find.
↑ comment by wedrifid · 2012-12-11T05:32:53.039Z · LW(p) · GW(p)
And how are people supposed to make the distinction between your fun and signs of pathological narcissism?
I don't see this as a particular problem in this instance. The responses are, if anything, an indication that he isn't taking himself too seriously. The more pathologically narcissistic types tend to be more somber about their power and image.
No, if there was a problem here it would be if the joke was in poor taste - in particular, if there were those who had been given the impression that Eliezer's power or narcissism really was corrupting his thinking, if he had begun to use his power arbitrarily on his own whim, or if his arrogance had left him incapable of receiving feedback or perceiving the consequences his actions have on others or even himself. Basically, jokes about how arrogant and narcissistic one is only work when people don't perceive you as actually having problems in that regard. If you really do have real arrogance problems then joking that you have them while completely failing to acknowledge the problem makes you look grossly out of touch and socially awkward.
For my part, however, I don't have any direct problem with Eliezer appreciating this kind of reasoning. It does strike me as a tad naive of him and I do agree that it is the kind of thing that makes Luke's job harder. Just... as far as PR missteps made by Eliezer this seems so utterly trivial as to be barely worth mentioning.
How do I make the distinction between a guy who has an "arrogance problem" and has fun encouraging comments that imply that people think of him as a god vs. a guy with a serious issue?
The way I make such distinctions is to basically ignore 'superficial arrogance'. I look at the real symptoms. The ones that matter and have potential direct consequences. I look at their ability to comprehend the words of others---particularly those others without the power to 'force' them to update. I look at how much care they take in exercising whatever power they do have. I look at how confident they are in their beliefs and compare that to how often those beliefs are correct.
↑ comment by [deleted] · 2012-12-15T05:12:02.714Z · LW(p) · GW(p)
srsly, brah. I think you misunderstood me.
you're enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god?
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
And not only that, but this might even imply that you endorse the solution that is "always the same" of "building a new one (god-emperor)".
I bet he does endorse it. His life's work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
Replies from: Epiphany
↑ comment by Epiphany · 2012-12-15T06:28:52.806Z · LW(p) · GW(p)
srsly, brah. I think you misunderstood me.
My response was more about what interpretations are possible than what interpretation I took.
I was drawing an analogy to Epicurus on this issue because the structure of the situation is the same, not because anyone perceives (our glorious leader) EY as a god.
Okay. There's a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
I bet he does endorse it. His life's work is all about building a new god to replace the negligent or nonexistent one that let the world go to shit. I got the idea from him.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
Replies from: None
↑ comment by [deleted] · 2012-12-15T07:12:05.394Z · LW(p) · GW(p)
Okay. There's a peculiar habit in this place where people say things that can easily be interpreted as something that will draw persecution. Then I point it out, and nobody cares.
This is concerning. My intuitions suggest that it's not a big deal. I infer that you think it's a big deal. Someone is miscalibrated.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Okay. It probably seems kind of stupid that I failed to realize that. Is there a post that I should read?
I don't know if there's an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I've forgotten.
Replies from: Epiphany, Nornagest
↑ comment by Epiphany · 2012-12-15T08:43:42.625Z · LW(p) · GW(p)
This is concerning. My intuitions suggest that it's not a big deal. I infer that you think it's a big deal. Someone is miscalibrated.
I really like this nice, clear, direct observation.
Do you have a history with persecution that makes you more attuned to it? I am blissfully ignorant.
Yes, but more relevantly, humanity has a history with persecution - lots of intelligent people and people who want to change the world, from Socrates to Gandhi, have been persecuted.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
My take on the "build a God-like AI" idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don't have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
I don't know if there's an explicit post about it. I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I've forgotten.
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer "It sounds like what you're saying is we need to build a God" and Eliezer is like "Why don't we call it a very powerful optimizing agent?" and grins like he's just fooled someone and Robert Wright thinks and he's like "Why don't we call that a euphemism for God?" which destroys Eliezer's grin.
If Eliezer's intentions are to build a God, then he's far less risk-averse than the type of person who would simply try to avoid being burned at the stake. In that case the problem isn't that he makes himself look bad...
Replies from: wedrifid, None
↑ comment by wedrifid · 2012-12-15T14:46:54.015Z · LW(p) · GW(p)
I went out looking for myself and I just watched the bloggingheads video (6:42) where Robert Wright says to Eliezer "It sounds like what you're saying is we need to build a God" and Eliezer is like "Why don't we call it a very powerful optimizing agent?" and grins like he's just fooled someone
Like he's just fooled someone? I see him talking like he's patiently humoring an ignorant child who is struggling to distinguish between "Any person who gives presents at Christmas time" and "The literal freaking Santa Claus, complete with magical flying reindeer". He isn't acting like he has 'fooled' anyone or acting in any way 'sneaky'.
and Robert Wright thinks and he's like "Why don't we call that a euphemism for God?" which destroys Eliezer's grin.
While I wouldn't have been grinning previously, whatever my expression had been, it would change in response to that question in the direction of irritation and impatience. The answer to "Why don't we call that a euphemism for God?" is "Because that'd be wrong and totally muddled thinking". When your mission is to create an actual very powerful optimization agent and that---and not gods---is actually what you spend your time researching, then a very powerful optimization agent isn't a 'euphemism' for anything. It's the actual core goal. Maybe, at a stretch, "God" can be used as a euphemism for "very powerful optimizing agent", but never the reverse.
I'm not commenting here on the question of whether there is a legitimate PR concern regarding people pattern matching to religious themes having dire, hysterical and murderous reactions. Let's even assume that kind of PR concern legitimate for the purpose of this comment. Even then there is a distinct difference between "failure to successfully fool people" and "failure to educate fools". It would be the latter task that Eliezer has failed at here and the former charge would be invalid. (I felt the paragraph I quoted to be unfair on Eliezer with respect to blurring that distinction.)
Replies from: Epiphany
↑ comment by Epiphany · 2012-12-15T22:45:22.319Z · LW(p) · GW(p)
I don't think that an AI that goes FOOM would be exactly the same as any of the "Gods" humanity has been envisioning and may not even resemble such a God (especially because, if it were a success, it would theoretically not behave in self-contradictory ways like making sinful people, knowing exactly what they're going to do, making them do just that, telling them not to act like what they are and then punishing them for behaving the way it designed them to). I don't see a reason to believe that it is possible for any intellect to be omniscient, omnipotent or perfect. That includes an AI. These, to me, would be the main differences.
Robert Wright appears to be aware of this, as his specific wording was "It seems to me that in some sense what you're saying is that we need to build a God."
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said "in some sense" which means that his question is about something deeper than choosing a word. The wording he's using is asking something more like "Do you think we should build something similar to a God?"
The way that I interpret this question is not "What do we call this thing?" but more "You think we should build a WHAT?" with the connotations of "What are you thinking?" because the salient thing is that building something even remotely similar to a God would be very, very dangerous.
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what's salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked "Are you about to commit murder?" I would NOT accept "Why don't we call it knife relocation?" as an acceptable answer.
Afterward, Robert Wright says that Eliezer is being euphemistic. This perception that Eliezer's answer was an attempt to substitute nice sounding wording for something awful confirms, to me, that Robert's intent was not to ask "What word should we use for this?" but was intended more like "You think we should build a WHAT? What are you thinking?"
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway) that the reason for Eliezer's mistake is that he was having one of his arrogant moments and he genuinely thought that, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means). In this case, I would classify that as a "social skills / character flaw induced faux pas".
In my personal interpretation of Eliezer's behavior, I'm giving him more credit than that - I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God and have voiced valid and poignant concerns like "Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?" or "Why would you want us imperfect humans to make something so insanely powerful if it's more or less guaranteed to be flawed?" I'm assuming that Eliezer correctly recognized when the salient part of someone's question is not in its literal wording but in connotations relating to the situation.
This is why it looks, to me, like Eliezer's intent was to brush him off by choosing to answer this question as if it were a question about what word to use and hoping that Robert didn't have the nerve to go for the throat with valid and poignant questions like the examples above.
The topic of whether this was an unintentional faux pas or an intentional brush-off isn't the most important thing here.
The most important questions, in my opinion, are:
"Does Eliezer intend to build something this powerful?"
"Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?"
"Do you and I agree/disagree that it's a good idea to build something this powerful / that it can be controlled?"
Replies from: wedrifid
↑ comment by wedrifid · 2012-12-16T00:42:59.258Z · LW(p) · GW(p)
If you are taking this as a question about what to CALL the thing, then I agree completely that the AI should not be called a God. But he said "in some sense" which means that his question is about something deeper than choosing a word. The wording he's using is asking something more like "Do you think we should build something similar to a God?"
If forced to use that term and answer the question as you ask it, with a "Yes" or "No", then the correct answer would be "No". He is not trying to create a God; he has done years of work working out what he is trying to create, and it is completely different from a God in nearly all features except "very powerful". If you insist on that vocabulary you're going to get "No, I don't" as an answer. That the artificial intelligence Eliezer would want to create seems to Wright (and perhaps yourself) like it should be described as, considered a euphemism for, or reasoned about as if it is God is a feature of Wright's lack of domain knowledge.
There is no disingenuity here. Eliezer can honestly say "We should create a very powerful (and carefully designed) optimizing agent" but he cannot honestly say "We should create a God". (You may begin to understand some of the reasons why there is such a difference when you start considering questions like "Can it be controlled?". Or at least when you start considering the answers to the same.) So Eliezer gave Wright the chance to get the answer he wanted ("Hell yes, I want to make a very powerful optimising agent!") rather than the answer the question you suggest would have given him ("Hell no! Don't create a God! That entails making at least two of the fundamental and critical ethical and practical blunders in FAI design that you probably aren't able to comprehend yet!")
The reason I interpreted it this way is partly because instead of interpreting everything I hear literally, I will often interpret wording based on what's salient about it in the context of the situation. For instance, if I saw a scene where someone was running toward someone else with a knife and I asked "Are you about to commit murder?" I would NOT accept "Why don't we call it knife relocation?" as an acceptable answer.
I reject the analogy. Eliezer's answer isn't like the knife relocation answer. (If anything the connotations are the reverse. More transparency and candidness rather than less.)
Now, it could be argued that Eliezer accidentally failed to detect the salient connotation. It could be argued, and probably fairly effectively (against me anyway) that the reason for Eliezer's mistake is that he was having one of his arrogant moments and he genuinely thought that, because of a gigantic intelligence difference between Robert and himself, that Robert was asking a moronic question based on the stupid perception that a super powerful AI would be exactly the same as a real God (whatever that means).
It could be that there really is an overwhelming difference in crystallized intelligence between Eliezer and Robert. The question---at least relative to Eliezer's standards---was moronic. Or at least had connotations of ignorance of salient features of the landscape.
In this case, I would classify that as a "social skills / character flaw induced faux pas".
There may be a social-skills-related faux pas here---and it is one where it is usually socially appropriate to say wrong things within an entirely muddled model of reality rather than educate the people you are speaking to. Maybe that means that Eliezer shouldn't talk to people like Robert. Perhaps he should get someone trained explicitly in spinning webs of eloquent bullshit to optimally communicate with the uneducated. However the character flaws that I take it you are referring to---Eliezer's arrogance and so forth---just aren't at play here.
In my personal interpretation of Eliezer's behavior, I'm giving him more credit than that
The net amount of credit given is low. You are ascribing a certain intention to Eliezer's actions where that intention is clearly not achieved. "I infer he is trying to do X and he in fact fails to do X". In such cases generosity suggests that if they don't seem to be achieving X, haven't said X is what they are trying to achieve, and X is inherently lacking in virtue, then by golly maybe they were in fact trying to achieve Y! (Eliezer really isn't likely to be that actively incompetent at deviousness.)
I am assuming that he has previously encountered people by that point (2010) who have flipped out about the possibility that he wants to build a God
You assign a high likelihood to people flipping out (and even persecuting Eliezer) in such a way. Nyan considers it less likely. It may be that Eliezer doesn't have people (and particularly people of Robert Wright's intellectual caliber) flip out at him like that.
and have voiced valid and poignant concerns like "Why do you believe it is possible to succeed at controlling something a bazillion times smarter than you?" or "Why would you want us imperfect humans to make something so insanely powerful if it's more or less guaranteed to be flawed?"
The kind of people to whom there is the remote possibility that it would be useful to even bother to attempt to explain the answers to such questions are also the kind of people who are capable of asking them without insisting on asking, then belligerently emphasizing, wrong questions about 'God'. This is particularly the case with the first of those questions, where the question of 'controlling' only comes up because of an intuitive misunderstanding of how one would relate to such an agent---i.e. thinking of it as a "God", which is something we already intuit as "like a human or mammal but way powerful".
"Does Eliezer intend to build something this powerful?"
If he can prove safety mathematically then yes, he does.
At around the time I visited Berkeley there was a jest among some of the SingInst folks "We're thinking of renaming ourselves from The Singularity Institute For Artificial Intelligence to The Singularity Institute For Or Against Artificial Intelligence Depending On What Seems To Be The Best Altruistic Approach All Things Considered".
There are risks to creating something this powerful and, in fact, the goal of Eliezer and SIAI isn't "research AGI"... plenty of researchers work on that. They are focused on Friendliness. Essentially... they are focused on the very dangers that you describe here and are dedicating themselves to combating those dangers.
Note that it is impossible to evaluate a decision to take an action without considering what alternative choice there is. Choosing to dedicate one's efforts to developing an FAI ("safe and desirable very powerful optimizing agent") has a very different meaning if the alternative is millennia of peace and tranquility than the same decision to work on FAI if the alternative is "someone is going to create a very powerful optimizing agent anyway but not bother with rigorous safety research".
"Does Eliezer really think that something a bazillion times as intelligent as himself can be controlled?"
If you're planning to try to control the super-intelligence you have already lost. The task is one of selecting, from the space of all possible mind designs, a mind that will do things that you want done.
"Do you and I agree/disagree that it's a good idea to build something this powerful
Estimate: Slightly disagree. The biggest differences in perception may be surrounding what the consequences of inaction are.
/ that it can be controlled?"
Estimate: Disagree significantly. I believe your understanding of likely superintelligence behavior and self development has too much of an anthropocentric bias. Your anticipations are (in my estimation) strongly influenced by how ethical, intellectual and personal development works in gifted humans.
The above disagreement actually doesn't necessarily change the overall risk assessment. I just expect the specific technical problems that must be overcome in order to prevent "Super-intelligent rocks fall! Everybody dies." to be slightly different in nature---probably with more emphasis on abstract mathematical concerns.
↑ comment by [deleted] · 2012-12-15T18:26:18.331Z · LW(p) · GW(p)
I really like this nice, clear, direct observation.
Thank you. I will try to do more of that.
Here Eliezer is in a world full of Christians who believe that dreaded Satan is going to reincarnate soon, claim to be a God, promise to solve all the problems, and take over earth. Religious people have been known to become violent for religious reasons. Surely building an incarnation of Satan would, if that were their interpretation of it, qualify as more or less the ultimate reason to launch a religious war. These Christians outnumber Eliezer by a lot. And Eliezer, according to you, is talking about building WHAT?
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional christianity. There are a few christian terrorists left in north america, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
I don't think that the remains of theistic christianity could reach an effective military/propaganda arm all the way to Berkeley even if they did somehow misinterpret FAI as an assault on God.
Nontheistic christianity, which is the ruling religion right now, could flex enough military might to shut down SI, but I can't think of any way to make them care.
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant. This may affect my perceptions.
My take on the "build a God-like AI" idea is that it is pretty crazy. I might like this idea less than the Christians probably do, seeing as how I don't have any sense that Jesus is going to come back and reconstruct us after it does its optimization...
This is a good reaction. It is good to take seriously the threat that an AI could pose. However, the point of Friendly AI is to prevent all that and make sure that if it happens, it is something we would want.
Replies from: Epiphany, army1987↑ comment by Epiphany · 2012-12-15T23:26:19.244Z · LW(p) · GW(p)
Thank you. I will try to do more of that.
:) You can be as direct as you want to with me. (Normal smilie to prevent the tiny sad moments.)
Interesting. Religious people seem a lot less scary to me than this. My impression is that the teeth have been taken out of traditional christianity. There are a few christian terrorists left in north america, but they seem like holdouts raging bitterly against the death of their religion. They are still in the majority in some places, though, and can persecute people there.
Okay, good point. I agree that religion is losing ground. However, I've witnessed some pretty creepy stuff coming out of the churches. Some of them are saying the end is near and doing things like having events to educate about it. Now, that experience was one that I had in a particular location which happens to be very religious. I'm not sure that it was representative of what the churches are up to in general. I admit ignorance when it comes to what average churches are doing. But if there's enough end-times kindling being thrown into the pit here, people who were previously losing faith may flare up into zealous Christians with the right spark. Trying to build what might be interpreted as an Antichrist would be quite the spark. The imminent arrival of an Antichrist may be seen as a fulfillment of the end times prophecies and be seen as a sign that the Christian religion really is true after all.
A lot is at stake here in the mind of the Christian. If it's not the end of the world, opposing a machine "God" is still going to look like a good idea - it's dangerous. If it is the end of the world, they'd better get their s--- in gear and become all super-religious and go to battle against Satan because judgment day is coming and if they don't, they're going to be condemned. Being grateful to God and following a bunch of rules is pretty hard, especially when you can't actually SEE the God in question. How people are responding to the mundane religious stuff shouldn't be seen as a sign of how they'll react when something exceptional happens.
Being terrified out of your mind that someone is building a super-intelligent mind is easy. This takes no effort at all. Heck, at least half of LessWrong would probably be terrified in this case. Being extra terrified because of end times prophecies doesn't take any thought or effort. And fear will kill their minds, perhaps making religious feelings more likely. That, to me, seems to be a likely possibility in the event that someone attempts to build a machine "God". You're seeing a decline in religion and appear to be thinking that it's going to continue decreasing. I see a decline in religion and I think it may decrease but also see the potential for the right kinds of things to trigger a conflagration of religious fervor.
There are other memes that add an interesting twist: The bible told them that a lot of people would lose faith before the Antichrist comes. Their own lack of faith might be taken as evidence that the bible is correct.
And I have to wonder how Christianity survived things like the plagues that wiped out half of Europe. They must have been pretty disenchanted with God - unless they interpreted it as the end of the world and became too terrified of eternal condemnation to question why God would allow such horrible things to happen.
Perhaps one of the ways the Christianity meme defends itself is to flood the minds of the religious with fear at the exact moments in history when they would have the most reason to question their faith.
Last year's Gallup poll says that 78% of Americans are Christian. Even if they've lost some steam, if the majority still uses that word to self-identify, we should really acknowledge the possibility that some event could trigger zealous reactions.
I have been told that before Hitler came to power, the intelligentsia of Germany was laughing at him thinking it would never happen. It's a common flaw of nerds to underestimate the violence and irrationality that the average person is capable of. I think this is because we use ourselves as a model and think they'll behave, feel and think a lot more like we do than they actually will. I try to compensate for this bias as much as possible.
↑ comment by A1987dM (army1987) · 2012-12-16T18:08:17.751Z · LW(p) · GW(p)
I live in Vancouver, where as far as I can tell, most people are either non-religious, or very tolerant.
BTW, where I am (i.e. among twentysomething university students in central Italy) atheists take the piss out of believers waaaaay more often than the other way round.
↑ comment by Nornagest · 2012-12-15T08:20:05.233Z · LW(p) · GW(p)
I picked it up from everything on Friendly AI, the terrible uncaringness of the universe, etc. It is most likely not explicitly represented as replacing a negligent god anywhere outside my own musings, unless I've forgotten.
I'm not sure I've heard any detailed analysis of the Friendly AI project specifically in those terms -- at least not any that I felt was worth my time to read -- but it's a common trope of commentary on Singularitarianism in general.
No less mainstream a work than Deus Ex, for example, quotes Voltaire's famous "if God did not exist, it would be necessary to create him" in one of its endings -- which revolves around granting a friendly (but probably not Friendly) AI control over the world's computer networks.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-12-15T10:28:21.355Z · LW(p) · GW(p)
No less mainstream a work than Deus Ex, for example, quotes Voltaire's famous "if God did not exist, it would be necessary to create him" in one of its endings -- which revolves around granting a friendly (but probably not Friendly) AI control over the world's computer networks.
ROT-13:
Vagrerfgvatyl, va gur raqvat Abeantrfg ersref gb, Uryvbf (na NV) pubbfrf gb hfr W.P. Qragba (gur cebgntbavfg jub fgvyy unf zbfgyl-uhzna cersreraprf) nf vachg sbe n PRI-yvxr cebprff orsber sbbzvat naq znxvat vgfrys (gur zretrq NV naq anab-nhtzragrq uhzna) cuvybfbcure-xvat bs gur jbeyq va beqre gb orggre shysvyy vgf bevtvany checbfr.
↑ comment by bogus · 2012-12-11T02:42:39.261Z · LW(p) · GW(p)
over here, you're enjoying a comment that implies that either the commenter, or the people the commenter is addressing perceive you as a god?
I have to agree with Eliezer here: this is a terrible standard for evaluating phygishness. Simply put, enjoying that kind of comment does not correlate at all with the harmful features of phygish organizations, social clubs, etc. There are plenty of Internet projects that refer to their most prominent leaders with such titles as God-King, "benevolent dictator" and the like; it has no implication at all.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-12T07:59:43.736Z · LW(p) · GW(p)
You have more faith than I do that it will not be intentionally or unintentionally misinterpreted.
Also, I am interpreting that comment within the context of other things. The "arrogance problem" thread, the b - - - - - - k, Eliezer's dating profile, etc.
What's not clear is whether you or I are more realistic about how people are likely to interpret it---not only in a superficial context (like some hatchet-jobbing reporter who knows only some LW gossip), but with no context, or within the context of other things with a similar theme.
↑ comment by mrglwrf · 2012-12-09T20:35:17.755Z · LW(p) · GW(p)
Why would you believe that something is always the solution when you already have evidence that it doesn't always work?
Replies from: None↑ comment by [deleted] · 2012-12-09T20:51:22.382Z · LW(p) · GW(p)
Let's go to the object level: in the case of God, the fact that god is doing nothing is not evidence that Friendly AI won't work.
In the case of EY the supposed benevolent dictator, the fact that he is not doing any benevolent dictatoring is explained by the fact that he has many other things that are more important. That prevents us from learning anything about the general effectiveness of benevolent dictators, and we have to rely on the prior belief that it works quite well.
Replies from: mrglwrf↑ comment by mrglwrf · 2012-12-10T18:11:58.763Z · LW(p) · GW(p)
There are alternatives to monarchy, and an example of a disappointing monarch should suggest that alternatives might be worth considering, or at the very least that appointing a monarch isn't invariably the answer. That was my only point.
↑ comment by Epiphany · 2012-12-09T23:54:25.904Z · LW(p) · GW(p)
I don't think a CEO level monarch is necessary though I don't know what job title a community "gardener" would map to. Do you think a female web developer who obviously cares a lot about LW and can implement solutions would be a good choice?
This doesn't look like it's very likely to happen though, considering that they're changing focus:
Then again maybe CFAR will want to do something.
Replies from: Curiouskid, Jayson_Virissimo, None↑ comment by Curiouskid · 2012-12-19T03:22:19.525Z · LW(p) · GW(p)
I think you meant to use a different hyperlink?
Replies from: Epiphany↑ comment by Jayson_Virissimo · 2012-12-11T14:16:19.431Z · LW(p) · GW(p)
In general, the kinds of people that (strongly) hint that they should have power should...not...ever....have...power.
↑ comment by [deleted] · 2012-12-10T00:14:33.757Z · LW(p) · GW(p)
female web developer who obviously cares a lot about LW and can implement solutions would be a good choice?
Female doesn't matter, web development is good for being able to actually write what needs to be written. Caring is really good. The most important factor though is willingness to be audacious, grab power, and make things happen for the better.
Whether or not we need someone with CEO-power is uninteresting. I think such a person having more power is good.
If you're talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.
I've said before in the generic, but in this case we can be specific: If you declare yourself king, I'll kneel.
(good luck)
Replies from: Alicorn, Epiphany↑ comment by Alicorn · 2012-12-10T01:01:27.086Z · LW(p) · GW(p)
If you're talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.
I'm opposed to appointing her as any sort of actual-power-having-person. Epiphany is a relative newcomer who makes a lot of missteps.
Replies from: None, wedrifid↑ comment by wedrifid · 2012-12-10T01:21:47.113Z · LW(p) · GW(p)
I'm opposed to appointing her as any sort of actual-power-having-person.
The personal antipathy there has been distinctly evident to any onlookers who are mildly curious about how status and power tend to influence human behavior and thought.
Replies from: Alicorn↑ comment by Alicorn · 2012-12-10T06:41:17.518Z · LW(p) · GW(p)
I think anyone with any noticeable antipathy between them and any regular user should not have unilateral policymaking power, except Eliezer if applicable because he was here first. (This rules me out too. I have mod power, but not mod initiative - I cannot make policy.)
Replies from: wedrifid↑ comment by wedrifid · 2012-12-10T09:58:47.743Z · LW(p) · GW(p)
I think anyone with any noticeable antipathy between them and any regular user should not have unilateral policymaking power, except Eliezer if applicable because he was here first. (This rules me out too. I have mod power, but not mod initiative - I cannot make policy.)
I agree and note that it is even more important that people with personal conflicts don't have the power (or, preferably, voluntarily waive the power) to actively take specific actions against their personal enemies.
(Mind you, the parent also seems somewhat out of place in the context and very nearly comical given the actual history of power abuses on this site.)
↑ comment by Epiphany · 2012-12-10T00:27:59.219Z · LW(p) · GW(p)
Female doesn't matter, web development is good for being able to actually write what needs to be written. Caring is really good. The most important factor though is willingness to be audacious, grab power, and make things happen for the better.
Well I do have the audacity.
If you're talking about yourself, go for it. Get a foot in the code, make the front page better, be audacious. Make this place awesome.
I would love to do that, but I've just gotten a volunteer offer for a much larger project I had an idea for. I had been hoping to do a few smaller projects on LW in the meantime, while I was putting some things together to launch my larger projects, and the timing seems to have worked out such that I will be doing the small projects while doing the big projects. In other words, my free time is projected to become super scarce.
However, if a job offer were presented to me from LessWrong / CFAR I would seriously consider it.
If you declare yourself king, I'll kneel.
I don't believe in this. I am with Eliezer on sentiments like the following:
In Two More Things to Unlearn from School he warns his readers that "It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism."
In Cached Thoughts he tells you to question what HE says. "Now that you've read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you'll think, "Cached thoughts." My belief is now there in your mind, waiting to complete the pattern. But is it true? Don't let your mind complete the pattern! Think!"
But thank you. (:
Replies from: None↑ comment by [deleted] · 2012-12-10T00:54:29.150Z · LW(p) · GW(p)
I would love to do that, but I've just gotten a volunteer offer for a much larger project I had an idea for. I had been hoping to do a few smaller projects on LW in the meantime, while I was putting some things together to launch my larger projects, and the timing seems to have worked out such that I will be doing the small projects while doing the big projects. In other words, my free time is projected to become super scarce.
grumble grumble. Like I said, everyone who could is doing something else. Me too.
However, if a job offer were presented to me from LessWrong / CFAR I would seriously consider it.
I don't think they'll take the initiative on this. Maybe you approach them?
I don't believe in this. I am with Eliezer on sentiments like the following:
I don't see how those relate.
But thank you.
Thank you for giving a shit about LW, and trying to do something good. I see that you're actively engaging in the discussions in this thread and that's good. So thanks.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-11T02:15:22.073Z · LW(p) · GW(p)
grumble grumble. Like I said, everyone who could is doing something else. Me too.
Yeah. Well maybe a few of us will throw a few things at it and that'll keep it going...
I don't think they'll take the initiative on this. Maybe you approach them?
I mentioned a couple times that I'm dying to have online rationality training materials and that I want them badly enough I am half ready to run off and make them myself. I said something like "I'd consider doing this for free or giving you a good deal on freelance depending on project size". Nobody responded.
I don't see how those relate.
Simply put: I'm not the type that wants obedience. I'm the type that wants people to think for themselves.
Thank you for giving a shit about LW, and trying to do something good. I see that you're actively engaging in the discussions in this thread and that's good. So thanks.
Aww. I think that's the first time I've felt appreciated for addressing endless September. (: feels warm and fuzzy
Replies from: None↑ comment by [deleted] · 2012-12-12T04:17:07.750Z · LW(p) · GW(p)
Simply put: I'm not the type that wants obedience. I'm the type that wants people to think for themselves.
Please allow me to change your mind. I am not the type who likes obedience either. I agree that thinking for ourselves is good, and that we should encourage as much of it as possible. However, this does not negate the usefulness of authority:
Argument 1:
Life is big. Bigger than the human mind can reasonable handle. I only have so much attention to distribute around. Say I'm a meetup participant. I could devote some attention to monitoring LW, the mailing list, etc until a meetup was posted, then overcome the activation energy to actually go. Or, the meetup organizer could mail me and say "Hi Nyan, come to Xday's meetup", then I just have to go. I don't have to spend as much attention on the second case, so I have more to spend on thinking-for-myself that matters, like figuring out whether the mainstream assumptions about glass are correct.
So in that way, having someone to tell me what to think and do reduces the effort I have to spend on those things, and makes me more effective at the stuff I really care about. So I actually prefer it.
Argument 2:
Even if I had infinite capacity for thinking for myself and going my own way, sometimes it just isn't the right tool for the job. Thinking for myself doesn't let me coordinate with other people, or fit into larger projects, or affect how LW works, or many other things. If I instead listen to some central coordinator, those things become easy.
So even if I'm a big fan of self-sufficiency and skepticism, I appreciate authority where available. Does this make sense?
Replies to downvoted comments blah blah blah
Perhaps we should continue this conversation somewhere more private... /sleaze
PM me if you want to continue this thread.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-12T07:15:50.098Z · LW(p) · GW(p)
Please allow me to change your mind. I am not the type who likes obedience either.
Well that is interesting and unexpected.
Argument 1:
This seems to be more of a matter of notification strategies - one where you have to check a "calendar" and one where the "calendar" comes to you. I am pattern-matching the concept "reminder" here. It seems to me that reminders, although important and possibly completely necessary for running a functional group, would be more along the lines of a behavioral detail as opposed to a fundamental leadership quality. I don't know why you're likening this to obedience.
Even if I had infinite capacity for thinking for myself
We do not have infinite capacity for critical thinking. True. I don't call trusting other people's opinions obedience. I call it trust. That is rare for me. Very rare for anything important. Next door to trust is what I do when I'm short on time or don't have the energy: I half-ass it. I grab someone's opinion, go "Meh, 70% chance they're right?" and slap it in.
I don't call that obedience, either.
I call it being overwhelmingly busy.
Thinking for myself doesn't let me coordinate with other people, or fit into larger projects, or affect how LW works, or many other things. If I instead listen to some central coordinator, those things become easy.
Organizing trivial details is something I call organizing. I don't call it obedience.
When I think of obedience I think of that damned nuisance demand that punishes me for being right. This is not because I am constantly right - I'm wrong often enough. I have observed, though, that some people are more interested in power than in wielding it meaningfully. They don't listen, and use power as a way to avoid updating (leading them to be wrong frequently). They demand this thing "obedience", and that seems to be a warning that they are about to act as if might makes right.
My idea of leadership looks like this:
If you want something new to happen, do it first. When everyone else sees that you haven't been reduced to a pile of human rubble by the new experience, they'll decide the "guinea pig" has tested it well enough that they're willing to try it, too.
If you really want something to get done, do it your damn self. Don't wait around for someone else to do it, nag others, etc.
If you want others to behave, behave well first. After you have shown a good intent toward them, invite them to behave well, too. Respect them and they will usually respect you.
If there's a difficulty, figure out how to solve it.
Give people something they want repeatedly and they come back for it.
If people are grateful for your work, they reciprocate by volunteering to help or donating to keep it going.
To me, that's the correct way of going about it. Using force (which I associate with obedience) or expecting people not to have thoughts of their own is not only completely unnecessary but pales in comparison effectiveness-wise.
Maybe my ideas about obedience are completely orthogonal to yours. If you still think obedience has some value I am unaware of, I'm curious about it.
if you want to continue...
Thank you for your interest. It feels good.
I have a romantic interest right now; although we have not officially deemed our status a "relationship", we are considering one another as potential serious partners.
This came to both of us as a surprise. I had burned out on dating and deleted my dating profile. I was like:
insane amount of dating alienation * ice cube's chance of finding compatible partner > benefits of romance
(Narratives by LW Women thread if you want more)
And so now we're like ... wow this amount of compatibility is special. We should not waste the momentum by getting distracted by other people. So we decided that in order to let the opportunity unfold naturally, we would avoid pursuing other serious romantic interests for now.
So although I am technically available, my expected behavior, considering how busy I am, would probably be best classified as "dance card full".
Replies from: None↑ comment by [deleted] · 2012-12-12T13:43:44.479Z · LW(p) · GW(p)
We seem to have different connotations on "obedience", and might be talking about slightly different concepts. Your observations about how most people use power, and the bad kind of obedience, are spot-on.
The topic came up because of the "I'd kneel to anyone who declared themselves king" thing. I don't think such a behaviour pattern has to lead to the bad, power-abusing kind of obedience and submission. I think it's just a really strategically useful thing to support someone who is going to act as the group-agency. You seem to agree on the important stuff and we're just using different words. Case closed?
romantic.
lol what? Either you or me has utterly misunderstood something because I'm utterly confused. I made a mock-sleazy joke about the goddam troll toll, and suggested that we wouldn't have to pay it but we could still discuss if we PMed instead. And then suddenly this romantic thing. OhgodwhathaveIdone.
It feels good.
That's good. :)
Replies from: Epiphany↑ comment by Epiphany · 2012-12-12T18:14:04.618Z · LW(p) · GW(p)
You seem to agree on the important stuff and we're just using different words. case closed?
Yeah I think the main difference may be that I am very wary of power abuse, so I avoid using terms like "obedience" and "kneeling" and "king" and choose other terms that imply a situation where power is balanced.
lol what? Either you or me has utterly misunderstood something
Sorry, I think I must have misread that. I've been having problems sleeping lately. If you want to talk in PM to avoid the troll toll go ahead.
That's good. :)
Well not anymore. laughs at self
comment by pleeppleep · 2012-12-09T14:47:05.126Z · LW(p) · GW(p)
I'm disappointed in some of you. Am I the only person who prefers feeling elitist and hipstery to spreading rationality?
In all seriousness, though, I don't see why this is getting downvoted. Eternal September probably isn't our biggest issue, but the massive increase in users is likely to cause problems, and those problems should be addressed. I personally don't like the idea of answering the horde of newbies with restrictions based on seniority or karma. That's not really fair, and can select for posters who have used up their best ideas while shutting out new viewpoints. I much prefer the calls for restrictions based on merit and understanding, like the rationality quiz proposed below, or attempts to enlighten new users or even older users who have forgotten some of the better memes here. I also like the idea of a moderator of some kind, but my anti-authoritarian tendencies make me wary of allotting that person too much power, as they are assuredly biased and will have a severely limited ability to control all the content here, which will generate unfairness and turn some people off.
I doubt that endless September is the main problem here, but I think it's pretty clear that this site just isn't as useful, or more importantly, fun, as it used to be. I notice that I come here less and less every day, and that more and more opinions that should be presented in discussions just aren't.
I think we need to fix that. I maintain that Lesswrong is the best thing to ever happen to me, and I want it to keep happening to other people. We need a more general assessment of the problem and ways to solve it. I honestly do miss some of the (admittedly somewhat elitist) optimism that used to flood this site.
We're rationalists. We aimed to build gods, eradicate the plagues of the human mind, and beat death itself. We said we'd win a staring contest with the empty, uncaring abyss of reality. We sought to rewrite human knowledge; to decide what, over the past 8 thousand years, was useful, and what wasn't.
If we can't keep one little community productive, we might as well hang up our hats and let the world get turned into paper clips, cause we've shown there's not much we can do about it one way or the other.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T20:09:05.095Z · LW(p) · GW(p)
Eternal September probably isn't our biggest issue
What would that be in your opinion?
the massive increase in users is likely to cause problems, and those problems should be addressed.
Thank you pleeppleep for bringing this up. I am especially curious about why this thread has been hovering between -3 and 1 karma when the majority of people are concerned about this, and have chosen a solution for at least one problem. If you get any theories, please let me know.
more and more opinions that should be presented in discussions just aren't.
People have theorized that the users who might post these discussions are too intimidated to post. Do you have additional theories, or do you think this is the problem, too?
I maintain that Lesswrong is the best thing to ever happen to me
It is one of the best things that's happened to me, too. I feel strongly about protecting it.
We need a more general assessment of the problem and ways to solve it.
How would you describe the problem? What would you suggest for ways to assess it?
I honestly do miss some of the (admittedly somewhat elitist) optimism that used to flood this site.
What do you mean by that exactly? (No, I will not bite your head off about elitism. I have strong feelings about the specific type of elitism that means abusing others with the excuse that one is "better than" them, but I am curious to hear about any and all other varieties.)
We're rationalists. We aimed to build gods, eradicate the plagues of the human mind, and beat death itself. We said we'd win a staring contest with the empty, uncaring abyss of reality. We sought to rewrite human knowledge. To decide what, over the past 8 thousand years, was useful, and what wasn't.
Wow. That's inspirational. Okay, I think I know what "elitist optimism" means now. I don't agree with the goal of building gods (an awesome idea but super dangerous), but I want to quote this in places. I will need to find places to quote it in.
If we can't keep one little community productive, we might as well hang up our hats and let the world get turned into paper clips, cause we've shown there's not much we can do about it one way or the other.
Upvote. (:
Replies from: pleeppleep↑ comment by pleeppleep · 2012-12-09T20:57:58.754Z · LW(p) · GW(p)
What would that be in your opinion?
I'd say our biggest issue lately is lack of direction. The classic topics are getting kinda old now, and we don't really seem to be able to commit to any replacements. Anything in the sequences is pretty firmly established so nobody talks much about them anymore, and without them we kinda drift to things like the "rational" way to brush your teeth. If the site starts to feel watered down, I don't think it's because of new users, but because of shallow topics. Endless September is probably the biggest issue drawing us towards the mainstream.
People have theorized that the users who might post these discussions are too intimidated to post. Do you have additional theories, or do you think this is the problem, too?
I'm not really sure what the cause for this is, but I'd say that the above theory or general apathy on the part of some of the better contributors are the most likely.
How would you describe the problem? What would you suggest for ways to assess it?
Like I said before, the site's starting to feel watered down. It seems like the fire that drew us here is beginning to die down. It's probably just an effect of time letting the ideas settle in, but I still think we should be able to counter the problem if we're all we're cracked up to be.
I think it's really good that Eliezer is writing a new sequence, but I don't think he can support the community's ambition all on his own anymore. We need something new. Something that gets us at least as excited as the old sequences. Something that gets us back in the mood to take on the universe, blind idiot god and all.
I think that a lot of us just sort of settled back into our mundane lives once the high from thinking about conquering the stars wore off. I think we should find a way to feel as strong as we did once we realized how much of man's mind is malfunctioning and how powerful we would become if we could get past that. I really don't know if we can recapture that spirit, but if it's possible, then it shouldn't be harder than figuring out FAI.
comment by JoshuaFox · 2012-12-09T07:24:20.929Z · LW(p) · GW(p)
- Raise the karma threshold for various actions.
- Split up the various SEQ RERUN, META, MEETUP into "sub-LWs" so that those who are not interested do not need to see it.
- Likewise, split up topics: applied rationality, FAI, and perhaps a few others. There can still be an overview page for those who want to see everything.
- Perhaps this is offtopic, but add an email-notification mechanism for the inbox. This would reduce the need to keep coming back to look for responses, and so reduce the annoyance level.
comment by David_Gerard · 2012-12-10T00:25:16.762Z · LW(p) · GW(p)
As was noted previously: the community is probably doomed. I suspect all communities are - they have a life cycle.
Even if a community has the same content, the people within it change with time.
The essential work on the subject is Clay Shirky's A Group Is Its Worst Enemy. It says at some point it's time for a wizard smackdown, but it's not clear this helps - it also goes from "let's see what happens" development to a retconned fundamentalism, where the understood implicit constitution is enforced. This can lead to problems if people have different ideas on what the understood implicit constitution actually was.
I also think this stuff is constant because Mark Dery's Flame Wars discussed the social structure of online groups (Usenet, BBSes) in detail in 1994, and described the Internet pretty much as it is now and has been since the 1980s.
tl;dr people are a problem.
comment by Kindly · 2012-12-09T15:51:43.140Z · LW(p) · GW(p)
"LessWrong has lost 52% of it's giftedness since March of 2009" is an incredibly sensationalist way of describing a mere 7-point average IQ drop. Especially if the average is dropping due to new users, because then the "giftedness" isn't actually being lost.
Replies from: gwern, Nominull, Epiphany↑ comment by gwern · 2012-12-09T19:00:11.183Z · LW(p) · GW(p)
Some absolute figures:
R> lw2009 <- read.csv("2009.csv"); lw2011 <- read.csv("2011.csv"); lw2012 <- read.csv("2012.csv")
R>
R> sum(as.integer(as.character(lw2009$IQ)) > 140, na.rm=TRUE)
[1] 31
R> sum(as.integer(as.character(lw2011$IQ)) > 140, na.rm=TRUE)
[1] 131
R> sum(as.integer(as.character(lw2012$IQ)) > 140, na.rm=TRUE)
[1] 120
R>
R> sum(as.integer(as.character(lw2009$IQ)) > 150, na.rm=TRUE)
[1] 20
R> sum(as.integer(as.character(lw2011$IQ)) > 150, na.rm=TRUE)
[1] 53
R> sum(as.integer(as.character(lw2012$IQ)) > 150, na.rm=TRUE)
[1] 42
↑ comment by Nominull · 2012-12-09T18:24:15.586Z · LW(p) · GW(p)
Well, I agree, but "mere" probably isn't a sensationalist enough way to describe a 7 point drop in IQ.
Replies from: Kindly, army1987↑ comment by A1987dM (army1987) · 2012-12-13T13:42:07.027Z · LW(p) · GW(p)
Honestly, so long as the drop is due to lower-IQ people arriving rather than higher-IQ people leaving, I can't see why it's such a big deal -- especially if the “new” people mostly just lurk. Now, if the average IQ of only the people with > 100 karma in the last 30 days was also dropping with time...
↑ comment by Epiphany · 2012-12-09T20:18:42.699Z · LW(p) · GW(p)
A. Well, 52% is the real figure. That's just math.
B. We don't know why the average IQ dropped. It could be because some of the gifted users have left.
C. Saying "if the data is good, LessWrong has lost 52% of it's giftedness" in the context of an average should be interpreted as "if the data is good, LessWrong's average has lost 52% of it's giftedness". This is correct. I feel that this is picking at details, but I'll go add the word "average" so that nobody else does it.
If you feel there is some deeper problem with my statement, please continue. If not, does this address your criticism?
Replies from: Kindly↑ comment by Kindly · 2012-12-09T20:33:05.410Z · LW(p) · GW(p)
Although 52% is a figure you calculate with math, describing it as "losing giftedness" is not math. Math is not about dividing numbers by other numbers; it's about figuring out which numbers are the correct ones to divide.
What you have calculated, as far as I know, is that the IQ mean has moved 52% of the way towards an arbitrary cutoff point. I have two objections to this:
1. The arbitrary cutoff point. Why should we specifically care about whether someone's IQ exceeds 132?
2. More importantly, even if we did care about this, the correct measure would be the number (or proportion) of people on LW with an IQ of 132+. We can get some idea of what this is from gwern's reply, but the decrease in this is nowhere near 52%.
Edit: to be clear, I think that the "7-point drop in IQ" figure is the initial figure to report. Further analysis might reveal that this drop is due to new users arriving with a new IQ distribution, or to older users leaving at a rate dependent on IQ, or a mix of the two; that would also be useful to know, and report details on.
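For readers following the arithmetic under dispute, here is a minimal sketch in R. The means below are hypothetical, chosen only so that the "7-point drop" and "52%" figures quoted above roughly come out; the actual survey means may differ.

# Hypothetical survey means, for illustration only.
mean_2009 <- 145.5
mean_2012 <- 138.5
cutoff    <- 132    # the "gifted" threshold under discussion

# Measure used in the post: how far the mean has moved toward the cutoff.
(mean_2009 - mean_2012) / (mean_2009 - cutoff)   # ~0.52, i.e. the "52%" figure

# Measure Kindly prefers to report first: the raw drop in the mean.
mean_2009 - mean_2012                            # ~7 points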
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T21:11:08.267Z · LW(p) · GW(p)
No matter what cutoff point I choose for the giftedness calculations, it will be argued that it is the wrong cutoff point. There are a lot of definitions of giftedness, and there's a lot of controversy over how giftedness should be defined. There's nothing I can do about that.
The reason I chose IQ 132 (or rather the top 2% which can vary from one test to another) is explained in the comment I linked to about this. Briefly: if you have that IQ, you qualify as gifted by most IQ based definitions of giftedness.
The most relevant thing about this IQ is that the research on giftedness tends to be done on people who have IQs over 132. I could have picked 110, but there would be very little research to read about "gifted" people with an IQ of 110. Conversely, had I picked 180, you'd be hard pressed to find any research at all. I looked once and found exactly one book on that IQ range. It's full of case studies. Those people are so rare, that this is about all they could do.
I picked the top 2% because although there's no standard, it is as close to a standard as I've got.
I decided to do the math anyway, and I'm content with having chosen a number that's connected to something meaningful: most of the research on giftedness. (Yes there are meaningful connections between IQ and all kinds of things. It's a common myth to assume IQ is only a number. It's not.)
Since there really isn't an IQ threshold that could have been chosen that would not be controversial, what would you have done?
Replies from: Kindly, army1987↑ comment by Kindly · 2012-12-10T00:25:33.793Z · LW(p) · GW(p)
Since there really isn't an IQ threshold that could have been chosen that would not be controversial, what would you have done?
I would have reported the IQ difference (and possibly some of the figures in this comment, which if you prefer you can also calculate for 132) rather than a dubious measure of distance to the 132 threshold.
In hindsight I regret objecting to the cutoff since my main source of dismay is the second point in my previous comment.
↑ comment by A1987dM (army1987) · 2012-12-13T13:45:08.595Z · LW(p) · GW(p)
The reason I chose IQ 132 (or rather the top 2% which can vary from one test to another) is explained in the comment I linked to about this. Briefly: if you have that IQ, you qualify as gifted by most IQ based definitions of giftedness.
One user with IQ 120 leaving and one with IQ 100 entering decrease the average IQ, but describing that as “losing giftedness” sounds kind of fucked up to me.
Replies from: Kindly
comment by [deleted] · 2012-12-09T05:23:02.105Z · LW(p) · GW(p)
I think we could use more intellectual productivity. I think we already have the capacity for a lot more, and I think that would do a lot against any problems we might have. Obviously I am aware of the futility of the vague "we" in this paragraph, so I'll talk about what I could do but don't.
I have a lot of ideas to write up. I want to write something on "The improper use of empathy", something about "leading and following", something about social awkwardness from the inside. I wrote an article about fermi estimation that I've never posted. And some other ideas that I can't remember right now. I'll admit I have one meta-essay in here somewhere too. "Who's in charge here?"
I don't write as much for LW as I could, because I feel like a mere mortal among gods. I feel kind of inadequate, like I would be lowering the level of discussion around here. Ironically, the essays that I do post are all quite well upvoted, and not posting may be one source of the lowered quality of LW.
I may not be the only one.
EDIT: this is also why I post to discussion and not main.
Replies from: Epiphany, John_Maxwell_IV↑ comment by Epiphany · 2012-12-11T02:39:50.740Z · LW(p) · GW(p)
I don't feel inadequate but I do feel likely to get jumped all over for mistakes. I've realized that you really need to go over things with a fine-toothed comb, and that there are countless cultural peculiarities that are, for me, unexpected.
I've decided that the way I will feel comfortable posting here is to carefully word my point, make sure that point is obvious to the reader, identify and mentally outline any other claims in the piece, and make sure every part is supported and then (until I get to know the culture better) ask someone to check it out for spots that will be misunderstood.
That has resulted in me doing a lot of research. So now my main bottleneck is that I feel like posting something requires doing a lot of research. This is well and good IMO, but it means I won't post anywhere near as much simply because it takes a lot of time.
I've wondered if it would do us good to form a writer's group within LW where people can find out what topics everyone else is interested in writing about (which would allow them to co-author, cutting the work in half), see whether there are volunteers to do research for posts, and get a "second pair of eyes" to detect any karma-destroying mistakes in the writings before they're posted.
A group like this would probably result in more writing.
Replies from: None, Armok_GoB↑ comment by [deleted] · 2012-12-12T04:19:27.320Z · LW(p) · GW(p)
A group like this would probably result in more writing.
That's a really good idea.
Let me know when you've organized something.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-12T07:47:01.162Z · LW(p) · GW(p)
(: I do not have time to organize this currently. I'm not even sure I will have time to post on LessWrong. I have a lot of irons in the fire. :/
I would sure love to run a LW writer's group though, that would be awesome. Inevitably, it would be pointed out that I am not an expert on LW culture. If things slow down, and I do not see anyone else doing this, I may go for it anyway.
Replies from: None↑ comment by [deleted] · 2012-12-12T13:48:07.876Z · LW(p) · GW(p)
(:
I can no longer hold my tongue. Your smileys are upside-down, and the tiny moments of empathetic sadness when my eyes haven't sorted out which side of the parens the colon is on are really starting to add up. :)
Replies from: Epiphany↑ comment by Armok_GoB · 2012-12-13T00:20:04.439Z · LW(p) · GW(p)
I have like 10 different articles I'd like to submit to this, many of which have been on ice for literally years!
Replies from: Epiphany↑ comment by Epiphany · 2012-12-13T20:37:34.685Z · LW(p) · GW(p)
What are your reasons for postponing? More interestingly, what would get you to post them? Would the writer's group as described above do it, or this other suggestion here?
Would something else help?
Replies from: Armok_GoB↑ comment by Armok_GoB · 2012-12-14T02:18:13.043Z · LW(p) · GW(p)
Being absolutely, utterly terrible at writing. Being utterly incapable of clear communication. Being a sloppy thinker incapable of formalizing and testing all the awesome theories I come up with.
Being rather shy and caring very very much about the opinions of this community, and very insecure in my own abilities, fearing ridicule and downvotes.
Other than that I am extremely motivated to share all these insights I think might be extremely valuable to the world, somehow.
The suggestion mentioned wouldn't help at all. Really, anything radical enough will look less like fixing something I've written, and more like me explaining the idea and someone else writing an article about it with me pointing out miscommunications.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-09T06:47:06.466Z · LW(p) · GW(p)
Yep, that's my experience as well. Recently, I decided "screw what LW thinks" and started posting more thoughts of mine, and they're all getting upvoted. My vague intuitions about how many upvotes my posts will get don't seem to correlate very well with how many upvotes they actually get, either. This is probably true for other people as well.
The only potential problem with this, IMO, is if people think I'm more of an authoritative source than I actually am. I'm just sharing random thoughts I have; I don't do scholarly work like gwern.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-11T02:34:31.164Z · LW(p) · GW(p)
It seems to me that there are lots and lots of people who want to write posts but they're concerned about whether those posts will be received well. I've read, also, that more people put "public speaking" as their worst fear than "death" when surveyed. If we made a karma prediction tool, maybe that would help get people posting here. Here's what I'm thinking:
First, we could create a checklist of the traits that we think will get a LessWrong post upvoted. For instance:
- Is there an obvious main point or constructive goal?
- Is the main point supported / is there a reasonable plan for the constructive goal? (Or are they otherwise framed in the correct context "This is hypothetical" or whatever.)
- What type of support is included (math, citations, graphics, etc).
- Was the topic already covered?
- Is it a topic of interest to LessWrong?
- Is it uplifting or unhappy?
- (Or do a separate survey that asks people's reasons for upvoting / downvoting and populate the checklist with those.)
Then we could post the checklist as a poll in each new post and article for a while.
Then we could correlate the karma data with the checklist poll data and test it to see how accurately it predicts a post's karma.
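A rough sketch, in R, of what that correlation/prediction step might look like once checklist responses and karma scores were collected. The data frame, column names, and numbers here are entirely hypothetical.

# Hypothetical data: one row per post, checklist items coded 0/1, plus final karma.
posts <- data.frame(
  has_main_point  = c(1, 1, 0, 1, 0, 1, 0, 1),
  is_supported    = c(1, 0, 0, 1, 1, 1, 0, 0),
  already_covered = c(0, 1, 1, 0, 0, 0, 1, 0),
  karma           = c(25, 4, -3, 31, 8, 18, -5, 12)
)

# A simple linear model predicting karma from the checklist answers.
fit <- lm(karma ~ has_main_point + is_supported + already_covered, data = posts)
summary(fit)

# Predicted karma for a hypothetical new draft with a clear, supported point
# on a topic not already covered.
predict(fit, newdata = data.frame(has_main_point = 1, is_supported = 1,
                                  already_covered = 0))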
If you had a karma prediction tool, would it help you post more? [pollid:413]
Replies from: satt↑ comment by satt · 2013-02-24T21:04:45.495Z · LW(p) · GW(p)
Posting that checklist as a poll in each new post would likely end up irritating people.
A simpler approach, with the twin advantages of being simpler and being something one can do unilaterally, would be to just count the proportion of recent, non-meetup-related Discussion posts with positive karma. Then you could give potential post authors an encouraging reference class forecast like "85% of non-meetup Discussion posts get positive karma".
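A minimal sketch of that reference-class count in R, with made-up karma scores standing in for real Discussion-post data.

# Hypothetical final karma of recent non-meetup Discussion posts.
recent_karma <- c(12, 3, -2, 7, 0, 15, 4, -1, 9, 6)
mean(recent_karma > 0)   # proportion with positive karma, e.g. 0.7 here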
Replies from: Epiphany↑ comment by Epiphany · 2013-02-24T21:51:32.192Z · LW(p) · GW(p)
You know what? That is simple and elegant. I like that about it... but in the worst case scenario, it will encourage people to post stuff without thinking about it, because they'll make the hasty generalization that "All non-meetup posts have an 85% chance of getting some karma"; and even in the best case scenario, a lot of people will probably be thinking something along the lines of "Just because Yvain and Gwern and people who are really good at this get positive karma doesn't mean that I will."
Unfortunately, I think it would be ineffective.
Replies from: satt
comment by Epiphany · 2012-12-14T07:51:00.626Z · LW(p) · GW(p)
It has occurred to me that LessWrong is divided against itself with two conflicting directives:
- Spread rationality.
- Be a well-kept garden.
Spreading rationality implies helping as many new people as possible develop improved rational thinking abilities but being a well-kept garden specifically demands censorship and/or bans of "fools" and people who are not "fun".
"A house divided against itself cannot stand." (Lincoln)
I think this fundamental conflict must be solved in some way. If not, the risk is that LessWrong's discussion area will produce neither of those outcomes. If it fills with irrational people, the rational ones will go elsewhere, and the irrational people won't spread rationality to themselves. They will instead most likely adopt some superficial version of it, reminiscent of Feynman's descriptions of cargo cult science or Eliezer's descriptions of undiscriminating skeptics.
Perhaps there's some article from Eliezer I'm unaware of that says something to the effect of "The discussion is supposed to be where the rational people produce rational thought and everyone else can lurk and that's how rationality can be spread." If so, I hope that this is pointed out to me.
Without some clear explanation of how LessWrong is supposed to both spread rationality and be a well-kept garden, we're likely to respond to these directives inadequately.
Replies from: Richard_Kennaway, None↑ comment by Richard_Kennaway · 2012-12-14T09:23:38.880Z · LW(p) · GW(p)
Every school has this problem: how to welcome people who as yet know little and raise them up to the standard we want them to reach, while allowing those already there to develop further. Universities solve this with a caste distinction between the former (students) and the latter (faculty), plus a few bridging roles (grad student, intern, etc.). On a much smaller scale, the taiko group I play with has found the same problem of dividing beginners from the performing team. It doesn't work to have one class that combines introductory practice with performance rehearsal. And there can be social problems of people who simply aren't going to improve getting disgruntled at never being invited to join the performing team.
In another comment I suggested that this division already exists: LessWrong and CFAR. So the question is, does LessWrong itself need a further splitting between welcoming beginners and a "serious" inner circle? Who would be the advanced people who would tend the beginners garden? How would membership in the inner circle be decided?
↑ comment by [deleted] · 2012-12-15T06:12:39.618Z · LW(p) · GW(p)
Missions, perhaps? A few ideas: "We are rationalists, ask us anything" as an occasional post on reddit. Drop links and insightful comments around the internet where interesting people hang out.
Effect #1 is to raise the profile of rationality in the internet community in general, so that more people become interested. Effect #2 is that smart people click on our links and come to LW. I myself was linked to LW at first by a random link dropped in r/transhumanism or something. I immediately recognized the awesomeness of LW, and ate the sequences.
On the home front, I think we should go whole hog on being a well kept garden. Here's why:
There's no such thing as a crowd of philosophers. A movement should stay small and high quality as long as possible. The only way to maintain quality is to select for quality.
There are a lot of people out there, such that we could select for any combination of traits we liked and be unlikely to run out of noobs. We will have a much easier time at integration and community maintenance if we focus on only attracting the right folks.
I don't think we have to worry about creating rationalists from normals. There are enough smart proto-rationalists out there just itching to find something like LW, that all we have to do is find them, demonstrate our powers, and point them here. We should focus on collecting rationalists, not creating them. (Is there anyone for whom this wouldn't have worked? Worse, is there any major subset of good possible LWers that this turns off?)
As for integrating new people, I think the right people will find a way and it's ok if everyone else gets turned off. This might be pure wishful thinking. What are other people's thoughts on this?
Overall, have the low level missionary work happen out there where it belongs. Not in these hallowed halls.
As for what to do with these hallowed halls, here's my recommendations:
Elect or otherwise create an Official Community Organizer whose job it is to integrate all the opinions and make the decisions about the direction of LW. I think they would also provide direct friendly encouragement to the meetup organizers, who are currently totally lacking in coordination and support.
Sort out this crazy discussion/main bullshit. The current setup has very few desirable properties. I don't know what the solution should be, but we should at least be trying things. This of course requires someone to come up with ideas and code them. Would it be bad to try a different arrangement for a month?
Fix the front page. The valuable stuff there is approximately the banner, "featured posts", and current activity links. Everything else is of dubious value. The LW front page should be slick. Right now it looks like it was designed by a committee, and probably turns off most of our potentials.
Properly organize and index the LW material. This is a pretty big project; LW has thousands of good posts. This project neatly fits in with and builds on the current work in the wiki. The goal is a single root page from which every major insight is linked in at least a passable reading order. Like a textbook TOC. This obviously would benefit from wiki-improvements in general, for which I recommend merging wiki and LW accounts, and making wiki activity more visible and encouraged, among other things.
Friendship threads where we pick a partner and get to know them. Would generally increase community-coherence and civility. After meeting a lot of the other posters at the CFAR minicamp, I get more friendly feels in the community.
Somehow come up with the funding and political will to do all this stuff.
Something about trolls and idiots. Is this even a problem once the above are solved?
As for you, Epiphany, I want to commend you for still being at the throat of this problem, and still generating ideas and analysis. I'm impressed and humbled. Keep up the good work.
comment by Larks · 2012-12-10T23:08:50.504Z · LW(p) · GW(p)
Without some measure of who the respondents are, this survey can't mean much. If the recent arrivals vote en masse that there is no problem, the poll will suggest there isn't any, even though Eternal September is the very mechanism that causes the poll outcome! For the same reason that sufficiently large immigration becomes politically impossible to reverse, so too Eternal September cannot be combatted democratically.
To get a more accurate response, we'd have to restrict it to people who had more than 100 karma 12 months ago or something.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-12T07:58:17.815Z · LW(p) · GW(p)
If the recent arrivals vote en masse that there is no problem
Well that's not what's happened. Most of the votes are in, and the majority has voted that they're very or somewhat concerned.
Any other concerns?
Replies from: Larks↑ comment by Larks · 2012-12-13T05:21:49.961Z · LW(p) · GW(p)
It might still be more of a problem than the poll suggests. Maybe all the old-timers voted very concerned, and are being diluted by the newcomers.
(To clarify, I appreciate that you've done this. I just think it's important to bear in mind that things are probably even worse than they look.)
Replies from: Epiphany↑ comment by Epiphany · 2012-12-13T20:57:31.873Z · LW(p) · GW(p)
Do you mean because of normalcy bias / optimism bias? I am concerned about that, too. But in reality, I don't think there's an accurate way to measure the endless September threat. I doubt anyone has done the sort of research that would produce reliable indicators (like following numerous forums, watching for certain signs, determining which traits do have predictive power, testing ideas, etc.).
My POV is basically that if a group becomes popular, it will eventually trend toward the mainstream far enough for me personally to be unhappy about it. (I have always been very different. Were I a mainstream person, I'd probably be cheering for endless September, and were I less different, I would be less concerned, because my threshold for how much trending counts as a problem would be higher. So it seems relevant to acknowledge that my perspective on what would constitute ES is relative to me: I am easy to alienate and so have a low tolerance for inundation.)
If you are hoping to make me 'very' concerned, you're preaching to the converted, though perhaps you were more interested in convincing LW.
comment by Richard_Kennaway · 2012-12-10T14:45:46.354Z · LW(p) · GW(p)
So, what's needed is a division into an introductory place for anyone to join and learn, and a "graduate-level" place for people with serious ability and commitment to making stuff happen. The latter wouldn't be a public forum, in fact it wouldn't even be a forum at all, even if as part of its activities it has one. It would be an organisation founded for the purpose of promulgating rationality and improving that of its members, not merely talking about it. It would present itself to the world as such, with participation by invitation or by application rather than just by signing in on a web site.
In other words, LessWrong and CFAR.
comment by SoftFlare · 2012-12-09T11:40:32.043Z · LW(p) · GW(p)
We might want to consider methods of raising standards for community members via barriers to entry employed elsewhere (either for posting, for getting at some or all of the content, or even for hearing about the site's existence):
- An application process for entry (Workplaces (ie Valve), MUD sites)
- Regulating influx using a member cap (Torrent sites, betas of web products)
- An activity standard - You have to be at least this active to maintain membership (Torrent sites, task groups in organizations sometimes)
- A membership fee - Maybe in conjunction with an activity standard - (Torrent sites, private online communities, private real-world communities, etc. etc.)
- Allowing membership only by invitation/sponsorship - (Torrent sites, US citizenship, law firm partnerships, The Bavarian Illuminati)
- Having a section of the site be a secret, only to be revealed to people who have proven themselves, A-la bayesian conspiracy - (How classified intelligence organizations work sometimes (not a joke), Internet forums)
- Karma-based feature upgrading (Stack Exchange, Internet Forums)
Or any combination of the above applied to different sections. If anyone would like to pursue this, I am willing to spend up to 2 hours a week for the next few weeks constructing a solid plan around this, given someone else is willing to commit at least similar resources.
On a different note, and as anecdotal evidence, I have been lurking on LW for years now, and went to a CFAR camp before posting a single comment - for fear of karma retribution and trolling. (I know that it's a bad strategy and that I shouldn't care as much. Sadly, I'm not as good at self-modification as I would like to be sometimes.)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-12-09T18:54:48.976Z · LW(p) · GW(p)
On a different note, and as anecdotal evidence, I have been lurking on LW for years now, and went to a CFAR camp before posting a single comment - for fear of karma retribution and trolling. (I know that it's a bad strategy and that I shouldn't care as much. Sadly, I'm not as good at self-modification as I would like to be sometimes.)
On the other hand, lurking for a while before posting is very much what we want new users to do.
comment by OrphanWilde · 2012-12-10T17:28:12.919Z · LW(p) · GW(p)
This post reminds me of Eliezer's own complaints against Objectivism: that Ayn Rand's ingroup became increasingly selective as time went on, developing a self-reinforcing fundamentalism.
As I wrote in one of my blogs a while back, discussing another community that rejects newcomers:
"This is a part of every community. A community which cannot or will not do this is crippled and doomed, which is to say, it -is- their jobs to [teach new members their mores]. This is part of humanity; we keep dying and getting replaced, and training our replacements is a constant job. We cannot expect that people should "Just know" the right way to behave, we have to teach them that, whether they're twelve, twenty two, or eighty two"
Replies from: Nick_Tarleton, MugaSofer↑ comment by Nick_Tarleton · 2012-12-12T05:03:45.674Z · LW(p) · GW(p)
An elite intellectual community can^H^H^H has to mostly reject newcomers, but those it does accept it has to invest in very effectively (while avoiding the Objectivist failure mode).
I think part of the problem is that LW has elements of both a ground for elite intellectual discussion and a ground for a movement, and these goals seem hard or impossible to serve with the same forum.
I agree that laziness and expecting people to "just know" is also part of the problem. Upvoted for the quote.
Replies from: katydee↑ comment by katydee · 2012-12-14T04:46:28.774Z · LW(p) · GW(p)
I'm not entirely sure that expecting people to "just know" is a huge problem here, as on the Internet appropriate behavior can be inferred relatively easily by reading past posts and comments-- hence the common instruction to "lurk more."
One could construe this as a filter, but if so, who is it excluding? People with low situational awareness?
comment by [deleted] · 2013-02-12T13:02:59.817Z · LW(p) · GW(p)
I really don't see why Epiphany is so obsessed with IQ. Based on anecdotal evidence, there is not much of a correlation between IQ and intellect beyond the first two standard deviations above the mean anyway. I have come across more than a handful of people who don't excel in traditional IQ tests, but who are nevertheless very capable of presenting coherent, well-argued insights. Does it matter to me that their IQ is 132 instead of 139? No. Who cares about the average IQ among members of the LW community as long as we continue demonstrating the ability to engage in thoughtful discussions and generate valuable conclusions?
It is also possible to inflate your IQ score by taking tests repeatedly. "One meta-analysis reports that a person who scores in the 50th percentile on their first test will be [at] the 80th by their third", according to this page: http://rationalwiki.org/wiki/High_IQ_society. If you are vain and think that doing well on an IQ test is a really important way of signalling intellect, then go ahead and keep doing exercises in Mensa practice books, though that would not make you more capable of critical thinking or logical argumentation.
Replies from: Epiphany, Epiphany↑ comment by Epiphany · 2013-02-13T02:35:07.981Z · LW(p) · GW(p)
I have come across more than a handful of people who don't excel in traditional IQ tests, but who are nevertheless very capable of presenting coherent, well-argued insights. Does it matter to me that their IQ is 132 instead of 139? No. Who cares about the average IQ among members of the LW community as long as we continue demonstrating the ability to engage in thoughtful discussions and generate valuable conclusions?
Another possibility here is that your perceptions of intelligence levels are really off. This isn't too unlikely as I see it:
I've heard reports that people with super high IQs have trouble making distinctions between normal and bright, or even between moderately gifted and mentally challenged. I frequently observe that the gifted people I've met experience their own intelligence level as normal, and accidentally mistake normal people for stupid ones, or mistakenly interpret malice when only ignorance is present (because they're assuming the other person is as smart as they are and would therefore never make such an ignorant mistake).
If the intelligence difference you experience every day is 70 points wide, your perceptions are probably more geared to find some way to make sense of conflicting information, not geared to be sensitive to ten point differences.
As a person who has spent a lot of time learning about intelligence differences, I'd say it's fairly hard to perceive intelligence differences smaller than 15 points anyway. The 30 point differences are fairly easy to spot. A large part of this may be because of the wide gaps in abilities that gifted people tend to have between their different areas of intelligence. So, you've got to figure that IQ 130 might be an average of four abilities that are quite different from each other, and so the person's abilities will likely overlap with some of the abilities of a person with IQ 120 or IQ 140. However, a person with an IQ of 160 will most likely have their abilities spread out across a higher up range of ability levels, so they're more likely to seem to have completely different abilities from people who have IQs around 130.
The reason why a few points of difference is important in this context is because the loss appears to be continuing. If we lose a few points each year, then over time, LessWrong would trend toward the mean and the culture here may die as a result.
Replies from: army1987, army1987, None↑ comment by A1987dM (army1987) · 2013-02-13T13:22:38.372Z · LW(p) · GW(p)
The reason why a few points of difference is important in this context is because the loss appears to be continuing. If we lose a few points each year, then over time, LessWrong would trend toward the mean and the culture here may die as a result.
http://xkcd.com/605/ http://xkcd.com/1007/
(SCNR.)
Replies from: Epiphany↑ comment by Epiphany · 2013-02-13T18:19:38.763Z · LW(p) · GW(p)
Ok, FYI, if you see the words "appears to be" and "if" in my sentences, it means I am acknowledging the ambiguity. If you do not want to annoy me, please wait until I'm using words like "definitely" and "when" or direct your "could not resist" comments at someone else.
If you want to discuss how we may determine the probability of a consistent and continuing downward trend, that would be constructive and I'd be very interested. Please do not waste my time by pointing out the obvious.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-02-13T19:26:38.381Z · LW(p) · GW(p)
If you want to discuss how we may determine the probability of a consistent and continuing downward trend, that would be constructive and I'd be very interested. Please do not waste my time by pointing out the obvious.
(First of all, as I might have already mentioned, I don't think that the average of (IQ - 132) over all readers is a terribly interesting metric; the total number of active contributors with IQ above 132 or something like that might be better.)
I'd guess that the decline in average IQ is mostly due to lower-IQ people arriving rather than to higher-IQ people leaving (EDIT: applying the intraocular trauma test to this graph appears to confirm that), and the population growth appears to have tapered off (there were fewer respondents in the 2012 survey than in the 2011 one, even though the 2011 one was open for longer). I'd guess the average IQ of readers is decreasing with time as a reversed logistic function, but we'd have to fit a four-parameter curve to three data points to test that.
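A minimal sketch of what such a four-parameter "reversed logistic" would look like (the function and the placeholder numbers below are illustrative assumptions, not the actual survey figures):

    import numpy as np

    def reversed_logistic(t, upper, drop, midpoint, rate):
        # Starts near `upper`, falls by `drop` as t passes `midpoint`, with steepness `rate`.
        return upper - drop / (1.0 + np.exp(-rate * (t - midpoint)))

    years = np.array([2009.2, 2011.9, 2012.9])   # three survey dates (placeholders)
    mean_iq = np.array([146.0, 140.0, 138.5])    # placeholder means, for illustration only
    # Four free parameters against three data points: the fit is underdetermined,
    # which is exactly the difficulty noted above.
    print(len(mean_iq), "data points vs 4 free parameters")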
Replies from: Epiphany↑ comment by Epiphany · 2013-02-14T03:57:06.981Z · LW(p) · GW(p)
the total number of active contributors with IQ above 132 or something like that might be better
Actually, a similar concern was brought up in response to my IQ Accuracy comment, and Vaniver discovered that the average IQs of the active members and the lurkers were almost exactly the same:
165 out of 549 responses without reported positive karma (30%) self-reported an IQ score; the average response was 138.44.
181 out of 518 responses with reported positive karma (34%) self-reported an IQ score; the average response was 138.25.
We could separate the lurkers from the active members and do the analysis again, but I'm not sure it would be worth the effort as it looks to me like active members and lurkers are giving similar answers. If you'd like to do that, I'd certainly be interested in any surprises you uncover, but I don't expect it to be worthwhile enough to do it myself.
I'd guess that the decline in average IQ is mostly due to lower-IQ people arriving rather than to higher-IQ people leaving (EDIT: applying the intraocular trauma test to this graph appears to confirm that)
The sample set for the highest IQ groups is, of course, rather small, but what's been happening with the highest IQ groups is not encouraging. The specific graph in question (although I very much doubt that Gwern would intend to make that graph misleading in any way) is just not designed to clearly illustrate that particular aspect of the results visually.
Here are a few things you wouldn't guess without looking at the numbers:
Exceptionally gifted people used to be 18% of the IQ respondents. Now they are 6%.
The total number of highly and exceptionally gifted respondents decreased in 2012, while normal and moderately gifted respondents increased.
↑ comment by A1987dM (army1987) · 2013-02-13T13:21:10.830Z · LW(p) · GW(p)
or mistakenly interpret malice when only ignorance is present (because they're assuming the other person is as smart as they are and would therefore never make such an ignorant mistake)
I'm under the impression that a substantial part of Hanson's Homo hypocritus observations falls prey to this failure mode.
Replies from: Epiphany↑ comment by Epiphany · 2013-02-13T18:15:10.399Z · LW(p) · GW(p)
Is there a name for this failure mode? For clarity: The one where people use themselves as a map of other people and are frequently incorrect. That would be good to have.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2013-02-13T18:16:32.384Z · LW(p) · GW(p)
↑ comment by [deleted] · 2013-02-13T09:34:47.936Z · LW(p) · GW(p)
Sorry about my tardiness when responding to comments. I don't visit LessWrong very often. Maybe in future I should refrain from posting comments unless I am sure that I have the time and diligence to participate satisfactorily in any discussion that my comments might generate, since I wouldn't want to come across as rude.
Another possibility here is that your perceptions of intelligence levels are really off.
After reading and thinking a bit about this comment, I think you might be right, especially regarding the point that gifted people might often
mistakenly interpret malice when only ignorance is present.
I am rather bad at reading other people. I am not diagnosed with any degree of autism, but I am rather socially stunted nevertheless. As I mentioned in an earlier comment, I can be socially inept. This self-assessment was the conclusion of many instances where I was informed that I had grossly misunderstood certain social situations or inadvertently committed some kind of faux pas.
It is also generally difficult for me to gauge whether specific comments of mine might be construed as passive-aggressive/condescending. When you asked if my intention was to insult you, my response was "No, but I am sorry that you feel that way". In the past, when I did not know any better, I would have said, "No, and don't be so sensitive." As you can imagine, that response usually escalated things instead of calming people down. It is a long and ongoing learning process for me to understand how to react appropriately in social contexts in order to avoid hurt feelings.
In short, it seems like I commit the mind projection fallacy a lot when interacting with other people: If I wouldn't feel offended by certain ways of phrasing things, I assume that other people wouldn't either. If I wouldn't make such an ignorant mistake, I assume that other people wouldn't either.
The reason why a few points of difference is important in this context is because the loss appears to be continuing.
When you put it like this, I can understand your concern.
↑ comment by Epiphany · 2013-02-12T20:09:49.383Z · LW(p) · GW(p)
I really don't see why Epiphany is so obsessed with IQ. Based on anecdotal evidence, there is not much of a correlation between IQ and intellect beyond the first two standard deviations above the mean anyway.
Try reading this response to Slade's suicidal post and you will begin to understand why giftedness is relevant, in a general sense. Gifted people, especially highly gifted people, are very different from most. If you haven't seen that for yourself, then perhaps:
A. You haven't met someone with an IQ like 160 or 180. Those people tend to be very, very different so maybe you are only comparing people with much smaller IQ differences with each other.
B. The people you've met with super high IQs behave in a way that blends in when they're with you and minimize social contact so that you don't notice the differences. The ones that I know tend to do that. They don't just barge into a room and solve unsolvable science problems for all to see. They tend to be quiet, or away hiding in their caves.
C. You never asked the IQs of the smartest people you know and therefore haven't seen the difference.
D. You feel strongly that we should express egalitarianism by treating everyone as if they are all intellectually exactly the same. There's a movement of people who want to believe that everyone is gifted, that giftedness does not exist, that it goes away, or that gifted people have some horrible flaw that "balances" them out, and that gifted people should be stifled in schooling environments in order to destroy their giftedness so that they're intellectually equal to everybody else, and all kinds of other things. Many people hate inequality and cannot deal with the scientifically proven fact that intellectual inequalities do exist. Wanting to solve inequalities is great, but it's important that we don't deny that intellectual inequalities exist, and it's absolutely, undeniably wrong to stifle a person, especially a child, in the name of "equality". I care a lot about this cause. I hope you read this PDF by developmental psychologist Linda Silverman (I want everyone to read it):
I have come across more than a handful of people who don't excel in traditional IQ tests, but who are nevertheless very capable of presenting coherent, well-argued insights.
One in six gifted people has a learning disorder. About one in three are creative. Some of them have mental disorders or physical conditions. All three of these can reduce one's IQ score and should be compensated for on an IQ test. Unfortunately, a lot of the IQ tests that are administered (by Mensa for instance) do not include any sort of evaluation for multiple exceptionalities (jargon for when you've got multiple differences that affect learning).
Who cares about the average IQ among members of the LW community as long as we continue demonstrating the ability to engage in thoughtful discussions and generate valuable conclusions?
You missed my point. My point was: "LessWrong may be headed toward cultural collapse so we need some way to determine whether this is a real threat. Do we have numbers? Yes we do. We have IQ numbers." The IQ blurb was a data point for an ongoing discussion on the controversial yet critical topic of whether LessWrong's subculture is dying. My point was not "Oh no, we cannot lose IQ points!"
Let me ask you this: If you were attempting to determine whether LessWrong is headed for cultural collapse, and you knew that the average IQ at LessWrong was decreasing, and you knew that you needed to supply the group with all related data, would you justify omitting that? You would have to include it if you want to be thorough, as it was related. That point is at the top because it's new - most of the other points have been presented before. I couldn't present the IQ data until it had been thoroughly analyzed.
I'm a psychology enthusiast with a special interest in developmental psychology, specifically in gifted adults. When I go to the trouble of thoroughly analyzing some data and sharing information that I gathered while pursuing a main interest of mine, I very much prefer respectful comments in return such as "I don't see the relevance of IQ in this context, would you mind explaining?" as opposed to being called "obsessed". I prefer it even more if the person double checks their own perceptions to clear up any confusion on their own before responding to me.
I have a passion for learning which is not pathological. The term "obsessed" is inappropriate and offensive. Try this: Gwern, one of LessWrong's most prominent and most appreciated members, also has a passion for learning. Check out his website. If you do not appreciate the thoroughness with which he pursues truth - a major element of LessWrong culture - then perhaps it's time to consider whether this is a compatible hang out spot.
If you are vain and think that doing well on an IQ test is a really important way of signalling intellect, then go ahead and keep doing exercises in Mensa practice books, though that would not make you more capable of critical thinking or logical argumentation.
Was your intent to insult me?
Replies from: None, army1987↑ comment by [deleted] · 2013-02-12T20:20:56.697Z · LW(p) · GW(p)
You haven't met someone with an IQ like 160 or 180. Those people tend to be very, very different so maybe you are only comparing people with much smaller IQ differences with each other.
To the extent that IQ tests are reliable, my IQ is actually measured to be 170 (no re-takes or prior training; assessed by a psychometrician). (Just supplying information here; please don't construe this as an act of defensiveness or showing off, because that is not my intention.) I was also not only comparing people with smaller IQ differences -- I have encountered people with 10+ points of IQ difference and yet who are not significantly different in terms of their abilities to contribute meaningfully to dialogues. But, of course, my sample size is not huge.
Was your intent to insult me?
No, but I am sorry that you feel that way. I can be socially inept.
Replies from: Epiphany↑ comment by Epiphany · 2013-02-12T20:47:23.817Z · LW(p) · GW(p)
To the extent that IQ tests are reliable, my IQ is actually measured to be 170 (no re-takes or prior training). (Just supplying information here; please don't construe this as an act of defensiveness.)
Well that was unexpected. I'm open-minded enough to consider that this is possibly the case.
FYI: Claims like this are likely to trigger a fit of "overconfident pessimism" (referring to Luke's article) in some of the members. IQ appears to be a consistent pessimism trigger.
Was your intent to insult me? No, but I am sorry that you feel that way. I can be socially inept.
Admitting that is big of you. Thanks for that. My subjective faith in humanity indicator has been incremented a tick in the upward direction.
I see you're new, so I'll inform you: There are a lot of people like us here, meaning, people who know better than to game an IQ test and then delude themselves with the "results".
I won't say there are no status games, but you will find a lot of people who frown on them as much as you appear to in your last comment. I don't even believe in status.
It's really hard to leave the outside world outside. I keep perceiving irrational B.S. everywhere, even though I've been participating here since August. Not going to say that there's no irrational B.S. here or that I haven't adjusted at all but that my perceptions still haven't entirely adjusted.
It appears that you may have a similar issue of perceiving B.S. in comments where no such B.S. exists.
It's best to be aware of such a tendency if you have it, as this kind of response is, for obvious reasons, kind of alienating to others. Not blaming you for it (I have the same problem). Just trying to help.
Now that we've established that there was a misunderstanding here, would you like to start over by choosing and clarifying a point you want to make, or telling me that you've reinterpreted things? That would tie up this loose end of a conversation.
Out of curiosity, do you feel significantly different from those in the IQ 130 range?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2013-02-13T13:45:45.766Z · LW(p) · GW(p)
I'm open-minded enough to consider that this is possibly the case.
This sounds like identity-driven reasoning. (Antipattern: "Do I accept the claim X? I'm open-minded. Open-minded people would accept X. Therefore I accept X.") The conclusions you draw about something should be given by your understanding of that thing, not by your identity.
↑ comment by A1987dM (army1987) · 2013-02-13T13:26:17.083Z · LW(p) · GW(p)
About one in three are creative.
Isn't creativity a continuum? Such a sentence sounds as weird as “about one in three is tall” to me.
Replies from: Epiphany↑ comment by Epiphany · 2013-02-13T18:22:27.859Z · LW(p) · GW(p)
You have written me several comments today. One that was fairly constructive, one that was admittedly a "sorry could not resist" and now this. This comment makes me feel nit-picked at.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-02-13T19:14:31.182Z · LW(p) · GW(p)
I started implementing this policy, and while I'm there I sometimes also glance at aunts/cousins of the comment I'm considering replying to.
comment by Vaniver · 2012-12-09T21:32:01.882Z · LW(p) · GW(p)
I think that LW would be better with more good content. (Shocking!) I like plans that improve the amount of visible good content on LW, and am generally ambivalent towards plans that don't have that as an explicit goal.
My preferred explanation for why LW is less fun than it was before: Curiosity seeks to annihilate itself, and the low-hanging fruits have been picked. At one point, I was curious about what diet I should follow; I discovered intermittent fasting, tried it out, and it worked well for me. I am now far less curious about what diet I should follow. Similarly, LW may have gone through its rush of epiphanies, and now there is only slow, steady progress. The social value of LW (which is where Eternal September has its primary effects) is also moving towards meetups, which is probably better at fulfilling the social needs of meetup members but fractures the community and drains from the site.
Are there good topics out there that people haven't written posts on? I think so, and there are a few subjects that I know about and that I'm writing posts / sequences about. But they are little acorns, not big oaks. I believe that's necessary, but it will not look the same to readers.
Replies from: None↑ comment by [deleted] · 2012-12-10T15:11:11.476Z · LW(p) · GW(p)
Good points, but a bucket of picked fruit does not make a pie.
We've generated a lot of really valuable insight on this site, but right now it has no structure to it.
Maybe it's time to move from an article-writing phase to a knowledge-organization phase.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-11T02:53:57.370Z · LW(p) · GW(p)
I had been thinking that, too. Some people had mentioned argument mapping software; however, I have heard some really harsh criticisms of it. Not sure if that's the right way.
Maybe a karma-infused wiki (alluding to Luke's recent post).
Replies from: None
comment by Vaniver · 2012-12-09T20:38:38.892Z · LW(p) · GW(p)
The bad news is that LessWrong's IQ average has decreased on each survey. It can be argued that it's not decreasing by a lot or we don't have enough data, but if the data is good, LessWrong has lost 52% of it's giftedness since March of 2009.
What? The inflated self-estimates have dramatically declined towards more likely numbers. Shouldn't we be celebrating a decrease in bias?
Edit: My analysis of the public survey data; in particular, the number of responders is a huge part of the estimate. If you assume every non-responder has, on average, an IQ of 100, the total average LW IQ is 112. Much of the work might be done by the selection effects of self-reporting.
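A rough sketch of that back-of-the-envelope adjustment (the roughly one-third response rate and the ~138 self-reported mean are the approximate figures discussed elsewhere in this thread; treating every non-responder as averaging IQ 100 is the stated assumption):

    response_rate = 1 / 3          # roughly a third of survey takers self-reported an IQ
    reported_mean = 138.4          # approximate mean of the self-reported scores
    nonresponder_mean = 100.0      # assumption: non-responders average an IQ of 100

    overall = response_rate * reported_mean + (1 - response_rate) * nonresponder_mean
    print(round(overall, 1))       # ~112.8, close to the 112 figure above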
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T21:50:14.587Z · LW(p) · GW(p)
This might be virtuous doubt. Have you considered the opposite?
See Also:
Luke's article on overconfident pessimism.
My IQ related links in the OP.
Replies from: Vaniver↑ comment by Vaniver · 2012-12-09T22:14:17.036Z · LW(p) · GW(p)
Have you considered the opposite?
Yes, and I'm familiar with your IQ-related links in the OP*. But what's the opposite here? Let me make sure my position is clear: I agree that the people who post on LW are noticeably cleverer than the people that post elsewhere on the internet.
The narrow claim that I'm making is that the average self-reported IQ is almost definitely an overestimate of the real average IQ of people who post on LW, and a large change towards the likely true value in an unreliable number should not be cause for alarm. The primary three pieces of evidence I submit are:
On this survey, around a third of people self-reported their IQ, and it's reasonable to expect that there is a systematic bias, such that people with higher perceived IQs are more likely to share them. I haven't checked how many people self-reported on previous surveys, but it's probably similarly low.
When you use modern conversion numbers for average SAT scores, you get a reasonable 97th percentile for the average LWer. Yvain's estimate used a conversion chart from two decades ago; in case you aren't familiar with the history of psychometric testing, that's when the SAT had its right tail chopped off to make the racial gap in scores less obvious.
The correlation between the Raven's test and the self-reported IQ scores is dismal, especially the negative correlation for people without positive LW karma. The Raven's test is not designed to differentiate well between people who are more than 99th percentile (IQ 135), but the mean score of 127 (for users with positive karma) was 96th percentile, so I don't think that's as serious a concern.
* I rechecked the comment you linked to in the OP, and I think it was expanded since I read it first. I agree that more than half of people provided at least one IQ estimate, but I think that they should not be weighted uniformly; for example, using the self-reported IQ to validate the self-reported IQ seems like a bad idea! It might be interesting to see how SAT scores and age compare- we do have a lot of LWers who presumably took the SAT before it was dramatically altered, and with younger LWers we can compare scores out of 1600 to scores out of 2400. It's not clear to me how much more clarity this will give, though, and how much the average IQ of LW survey responders actually matters.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-10T00:19:43.156Z · LW(p) · GW(p)
What IQ would you correlate to the SAT numbers, considering?
As for the Raven's numbers, I am not sure where you're getting them from. I don't see a column when searching for "raven" in the 2012 spreadsheet, nor do I see "raven" on the survey result threads.
Replies from: Vaniver, Kindly↑ comment by Vaniver · 2012-12-10T01:30:28.557Z · LW(p) · GW(p)
What IQ would you correlate to the SAT numbers, considering?
SAT scores, combined with the year that it was taken in, give you a percentile measure of that person (compared to test-takers, which is different from the general population, but in a fairly predictable way), which you can then turn into an IQ-equivalent.
I say equivalent because there are a number of issues. First, intelligence testing has a perennial problem that absolute intelligence and relative intelligence are different things. Someone who is 95th percentile compared to high school students in 1962 is not the same as someone who is 95th percentile compared to high school students in 2012. It might also be more meaningful to say something like "the median LWer can store 10 numbers in working memory, compared to the general population's median of 7" instead of "the median LWer has a working memory that's 95th percentile." (I also haven't looked up recently how g-loaded the SAT is, and that could vary significantly over time.)
Second, one of the main benefits may not be that the median LWer is able to get into MENSA, but that the smartest LWers are cleverer than most people have had the chance to meet during their lives. This is something that IQ tests are not very good at measuring, especially if you try to maintain the normal distribution. Reliably telling the difference between someone who is 1 out of 1,000 (146) and someone who is 1 out of 10,000 (155) is too difficult for most current tests; how many people would you have to base your test off of to reliably tell that someone is one out of a million (171) from their raw score?
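A minimal sketch of the percentile-to-IQ-equivalent step described above (assuming the conventional normal model with mean 100 and SD 15; the percentile inputs are just examples):

    from statistics import NormalDist

    def percentile_to_iq(percentile, mean=100.0, sd=15.0):
        # Convert a population percentile (0-100) into an IQ-equivalent score.
        return mean + sd * NormalDist().inv_cdf(percentile / 100.0)

    print(round(percentile_to_iq(97)))    # ~128, the "97th percentile" SAT figure mentioned above
    print(round(percentile_to_iq(99.9)))  # ~146, i.e. 1 out of 1,000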
As for the Raven's numbers, I am not sure where you're getting them from. I don't see a column when searching for "raven" in the 2012 spreadsheet, nor do I see "raven" on the survey result threads.
iqtest.dk is based on Raven's Progressive Matrices; the corresponding column, CV in the public .xls, is called IQTest. I referred to the scores that way because Raven's measures a particular variety of intelligence. It's seen widespread adoption because it's highly g-loaded and culture fair, but a score on Raven's is subtly different from a total score on WAIS, for example.
comment by FiftyTwo · 2012-12-09T02:43:18.006Z · LW(p) · GW(p)
I'm curious, how do you propose spreading ideas or raising the sanity waterline without bringing in new people?
Replies from: DanArmak, Eugine_Nier, Epiphany↑ comment by DanArmak · 2012-12-09T17:58:01.091Z · LW(p) · GW(p)
If you want to spread ideas, don't bring outsiders in, send missionaries out.
Replies from: None↑ comment by Eugine_Nier · 2012-12-09T03:51:00.527Z · LW(p) · GW(p)
On the other hand if the new people dilute or overwhelm LW culture and lower the sanity waterline on LW, it won't be able to raise the sanity waterline in the rest of the world. It's a balancing act.
↑ comment by Epiphany · 2012-12-09T02:56:15.295Z · LW(p) · GW(p)
Firstly, it is not my view that we should not bring in new people. My view is that if we bring in too many new people at once, it will be intolerable for the old users and they will leave. That won't raise the sanity waterline as effectively as growing the site at a stable pace.
Secondly, the poll has an option "Send beginners to the Center for Applied Rationality" (spelled "Modern" not "Applied" in the poll because I was unaware that CFMR changed its name to CFAR).
comment by RobertLumley · 2012-12-09T00:40:14.775Z · LW(p) · GW(p)
It's almost like as we grow in number we regress towards the mean. Shocking.
Replies from: jmmcd↑ comment by jmmcd · 2012-12-09T02:05:51.687Z · LW(p) · GW(p)
If trending towards the mean wasn't explicitly mentioned in the poll this would be a useful contribution. As it stands, you should pay a lol toll.
Replies from: RobertLumley, RobertLumley↑ comment by RobertLumley · 2012-12-09T07:07:49.995Z · LW(p) · GW(p)
My point is that LW was founded largely as a way of drumming up support for SIAI. The fact that they want to and are making efforts to grow LW in number should make it utterly unsurprising that we are regressing to the mean.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T23:37:09.778Z · LW(p) · GW(p)
Then what do you make of this:
http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
Yes, when Eliezer made this place, he was frustrated that people didn't seem, to him, to understand the deal with AI / existential risk, but he also really cares about this place as a garden of its own, and I don't think he wants a large quantity of users as much as he wants quality thinking going on.
Nor does he want "support for SIAI" - he does not want undiscriminating skeptics or followers like Ayn Rand had who did what she did without thinking for themselves about it.
Considering those two constraints, I figure LW is not largely about getting support for SIAI, but largely about raising the sanity waterline. He believes that if the sanity waterline raises, people will be able to see the value of his ideas. He acknowledges, also, that you can't capture all the value you create, and I believe he said he expected most of the benefits from this site to be applied elsewhere, not necessarily benefiting SIAI.
Replies from: RobertLumley↑ comment by RobertLumley · 2012-12-10T00:37:53.145Z · LW(p) · GW(p)
Official publications from the SI have said that LW was specifically about community building. This is very well established.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-11T02:48:55.657Z · LW(p) · GW(p)
Not sure why you're equating "community building" with "supporting SIAI". I'm sure they would not have started LW if they didn't think it would help with SIAI goals (they're probably too busy / too focused for that) but to me "I would not have started this without it needing to serve x purpose" does not mean "this is going to serve several purposes" has no value. It may be that they would not have started it without it serving y and z purposes as well (where y and z may be raising the sanity waterline, encouraging effective altruism and things like that.)
↑ comment by RobertLumley · 2012-12-09T07:05:10.274Z · LW(p) · GW(p)
The bad news is that LessWrong's IQ average has decreased on each survey.
comment by [deleted] · 2012-12-09T04:43:45.255Z · LW(p) · GW(p)
Option 1: Close the borders. It's unfortunate that the best sort might be kept out, while it's guaranteed the rest will be kept out. The best can found / join other sites, and LW can establish immigration policies after a while.
Option 2: Builds. Freeze LW at a stage of development, then have a new build later. Call this one LW 2012, and nobody can join for six months, and we're intent on topics X, Y, and Z. Then for build 2013 there are some vacancies (based on karma?) for a period of time, and we're intent on topics X, Q, and R.
Option 3: Expiration date. No matter how good or bad it gets, on date N it closes shop. Then it is forked into the This-ists and the Those-ians with a few Whatever-ites that all say they carry the torch and everybody else is more wrong.
Replies from: Nornagest, NancyLebovitz, dbaupp, DanArmak↑ comment by Nornagest · 2012-12-09T05:10:06.467Z · LW(p) · GW(p)
In my experience, which admittedly comes from sites quite different from LW, an Internet project running on volunteer contributions that's decided to keep new members from productive roles has a useful lifetime of no more than one to two years. That's about how long it takes for everybody to voice their pet issues, settle their feuds, and move on with their lives; people may linger for years afterwards, but at that point the vital phase is over. This can be stretched somewhat if there's deep factional divisions within the founding population -- spite is a pretty good motivator -- but only at the cost of making the user experience a lot more political.
It's also worth bearing in mind that demand for membership in a forum like this one is continuous and fairly short-term. Accounts offer few easily quantifiable benefits to begin with, very few if the site's readable to non-members, so we can't rely on lasting ambitions to participate; people sign up because they view this forum as an attractive place to contribute, but the Internet offers no shortage of equivalent niches.
↑ comment by NancyLebovitz · 2012-12-10T14:04:45.144Z · LW(p) · GW(p)
Added for completeness (I'm not sure immigration restrictions are a good idea): Have an invitation system.
↑ comment by dbaupp · 2012-12-10T00:47:13.803Z · LW(p) · GW(p)
Option 1: Close the borders. It's unfortunate that the best sort might be kept out, while it's guaranteed the rest will be kept out. The best can found / join other sites, and LW can establish immigration policies after a while.
This isn't so ridiculous in short bursts. I know that Hacker News disables registration if/when they get large media attention to avoid a swathe of new only-mildly-interested users. A similar thing could happen here. (It might be enough to have an admin switch that just puts a display: none into the CSS rule for the "register" button; trivial inconveniences and all.)
comment by Epiphany · 2012-12-27T01:22:37.544Z · LW(p) · GW(p)
It has occurred to me to wonder whether the poll might be biased. I wanted to add a summary of things that protect LessWrong against endless September when I wrote this post. However, I couldn't think of even one. I figured my thread to debate whether we should have better protection would have turned up any compelling reasons to think LessWrong is protected but it didn't.
I became curious about this just now wondering whether there really isn't a single reason to think that LessWrong is protected, and I re-read all of the comments (though not the replies to the comments) to see if I had forgotten any. This comment by AndrewHickey was the closest thing I found to an argument that there is something protecting LessWrong:
If anything, LW is far more at risk of becoming an echo chamber than of an eternal September. Fora can also die just by becoming a closed group and not being open to new members, and given that there's a fairly steep learning curve before someone is accepted here ("read the Sequences!") it would, if anything, make more sense to be reducing barriers to entry rather than adding more.
The registration numbers showed that LessWrong is gaining members fast, so the echo chamber idea does not appear to be supported.
As for the "steep learning curve" idea, the 2012 Survey Results show that only 1/4 of the survey respondents have read the Sequences, and that 60% of those who have participated either have not read them or have not finished them. Considering that the majority of participants haven't finished the sequences, I think LessWrong's steep learning curve is more likely to add to the risk than to have any protective benefits because if most people are going "Your culture is TLDR, I'm commenting anyway." then they're going to be participating without all the cultural knowledge.
One reflex is to think that the current karma system will protect LessWrong against endless September, but on further thought one realizes that there is a limit to how many new posts older users can read and vote on, so this would not help if there were enough new users, or users closer to the mean, to overwhelm their voting capacity.
As far as I can tell, there's currently nothing that is likely to protect LessWrong from eternal September.
comment by Oscar_Cunningham · 2012-12-09T10:11:42.276Z · LW(p) · GW(p)
I think we just need people to downvote more. Perhaps we could insist that you downvote one thing for every three things that you upvote?
Replies from: Decius↑ comment by Decius · 2012-12-09T21:45:51.500Z · LW(p) · GW(p)
Weight each member's upvotes in a manner determined by the proportion of their votes in each direction and their total karma.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-12-10T13:53:25.089Z · LW(p) · GW(p)
This takes output as input. Would you go with a self-consistent result, make it time-inconsistent, or cut it off at one tier?
A better solution, I think, would be to weight the karma change by the 'information' that this new vote provided if the order of votes was irrelevant - i.e. multiply it by min(1, log2(1/P)), with P being the fraction of the voter's votes that are of that vote type. So if Bob likes Alice's post when Bob likes everyone's posts, Alice doesn't get much from it. If Bob likes Alice's post when Bob likes half or fewer of all posts he votes on, Alice gets 1 full karma from it.
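A minimal sketch of that weighting rule (the function name and the vote-history representation are mine, just to make the arithmetic concrete):

    import math

    def vote_weight(upvotes_cast, total_votes_cast):
        # Weight of one new upvote: min(1, log2(1/P)), where P is the fraction
        # of the voter's past votes that are upvotes.
        if total_votes_cast == 0 or upvotes_cast == 0:
            return 1.0   # no relevant history; treat the vote as fully informative
        p = upvotes_cast / total_votes_cast
        return min(1.0, math.log2(1.0 / p))

    print(vote_weight(100, 100))   # 0.0 -- Bob upvotes everything, so his upvote adds nothing
    print(vote_weight(50, 100))    # 1.0 -- Bob upvotes half of what he votes on, full karma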
Replies from: Decius↑ comment by Decius · 2012-12-10T17:01:11.246Z · LW(p) · GW(p)
Time-inconsistent. Nothing you do after you upvote should change the results. This risks having karma determined almost exclusively by how many posts are upvoted by the top elite.
Perhaps a better solution could be found if we could establish all of the goals of the karma system. Why do we track users' total karma?
comment by ChristianKl · 2012-12-11T13:45:50.165Z · LW(p) · GW(p)
- Eliezer documented the arrival of poseurs (people who superficially copycat cultural behaviors - they are reported to over-run subcultures) which he termed "Undiscriminating Skeptics".
If I understand Eliezer right, when he says "Undiscriminating Skeptics" he means the people who always favor academic status quo ideas and thus reject ideas like cryonics and the many world hypothesis.
When I read LessWrong, I seldom see comments that criticize others for being undiscriminating skeptics. If LessWrong participants want to keep out undiscriminating skeptics, then they should speak up against the practice a lot more than they currently do.
comment by Epiphany · 2012-12-08T23:42:32.007Z · LW(p) · GW(p)
Endless September Poll:
I condensed the feedback I got in the last few threads into a summary of pros and cons of each solution idea, if you would like something for reference.
How concerned should we be about LessWrong's culture being impacted by:
...overwhelming user influx? [pollid:366]
...trending toward the mean? [pollid:367]
...some other cause? [pollid:368]
(Please explain the other causes in the comments.)
Which is the best solution for:
...overwhelming user influx?
(Assuming user is of right type/attitude, too many users for acculturation capacity.)
[pollid:369]
...trending toward the mean?
(Assuming user is of wrong type/attitude, regardless of acculturation capacity.)
[pollid:370]
...other cause of cultural collapse?
[pollid:371]
Note: Ideas that involve splitting the registered users into multiple forums were not included for the reasons explained here.
Note: "The Center for Modern Rationality" was renamed to "The Center for Applied Rationality".
Replies from: faul_sname, gjm, Alicorn, Armok_GoB, beoShaffer↑ comment by faul_sname · 2012-12-09T00:34:10.446Z · LW(p) · GW(p)
You're focusing on negative reinforcement for bad comments. What we need is positive reinforcement for good comments. Because there are so many ways for a comment to be bad, discouraging any given type of bad comment will do effectively nothing to encourage good comments.
"Don't write bad posts/comments" is not what we want. "Write good posts/comments" is what we want, and confusing the two means nothing will get done.
Replies from: Viliam_Bur, Epiphany↑ comment by Viliam_Bur · 2012-12-09T22:56:39.869Z · LW(p) · GW(p)
We need to discourage comments that are not-good, not just the plainly bad ones: comments that don't add value, but still take time to read.
The time lost per comment is trivial, but the time lost reading a thousand comments isn't. How long does it take LW to produce a thousand comments? A few days at most.
This article alone has about 100 comments. Did you get 100 insights from reading them?
↑ comment by gjm · 2012-12-09T00:34:12.615Z · LW(p) · GW(p)
Why do the first three questions have four variations on the theme of "new users are likely to erode the culture" and nothing intermediate between that and "there is definitely no problem at all"?
Why ask for the "best solution" rather than asking "which of these do you think are good ideas"?
Replies from: FiftyTwo, Epiphany↑ comment by FiftyTwo · 2012-12-09T02:40:53.457Z · LW(p) · GW(p)
Also, why is there no option for "new users are a good thing?"
Maybe a diversity of viewpoints might be a good thing? How can you raise the sanity waterline by only talking to yourself?
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T19:35:00.214Z · LW(p) · GW(p)
The question is asking you:
"Assuming user is of right type/attitude, too many users for acculturation capacity."
Imagine this: There are currently 13,000 LessWrong users (well, more, since that figure was from a few months ago and there's been a Summit since then) and about 1,000 are active. Imagine LessWrong gets Slashdotted: some big publication does an article on us, and instead of portraying LessWrong as "Cold and Calculating", or something similar to Wired's wording describing the futurology Reddit where SingInst had posted about AI ("A sub-reddit dedicated to preventing Skynet"), they actually say something good like "LessWrong solves X Problem". Not infeasible, since some of us do a lot of research and test our ideas.
Say so many new users join in the space of a month and there are now twice as many new active users as older active users.
This means 2/3 of LessWrong is clueless, posting annoying threads, and acting like newbies. Suddenly, it's not possible to have intelligent conversation about the topics you enjoy on LessWrong anymore without two people throwing strawman arguments at you and a third saying things that show obvious ignorance of the subject. You're getting downvoted for saying things that make sense, because new users don't get it, and the old users can't compensate for that with upvotes because there aren't enough of them.
THAT is the type of scenario the question is asking about.
I worded it as "too many new users for acculturation capacity" because I don't think new users are a bad thing. What I think is bad is when there are an overwhelming number of them such that the old users become alienated or find it impossible to have normal discussions on the forum.
Please do not confuse "too many new users for acculturation capacity" with "new users are a bad thing".
↑ comment by Epiphany · 2012-12-09T00:37:27.215Z · LW(p) · GW(p)
Why do the first three questions have four variations on the theme of "new users are likely to erode the culture" and nothing intermediate between that and "there is definitely no problem at all"?
Why do you not see the "eroded the culture" options as intermediate options? The way I see it is there are three sections of answers that suggest a different level of concern:
- There's a problem.
- There's some cultural erosion but it's not a problem (Otherwise you'd pick #1.)
- There's not a problem.
What intermediate options would you suggest?
Why ask for the "best solution" rather than asking "which of these do you think are good ideas"?
A. Because the poll code does not make check boxes where you select more than one. It makes radio buttons where you can select only one.
B. I don't have infinite time to code every single idea.
If more solutions are needed, we can do another vote and add the best one from that (assuming I have time). One thing at a time.
Replies from: Nornagest, gjm↑ comment by Nornagest · 2012-12-09T01:24:38.966Z · LW(p) · GW(p)
The option I wanted to see but didn't was something along the lines of "somewhat, but not because of cultural erosion".
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T01:55:34.393Z · LW(p) · GW(p)
Well, I did not imagine all the possibilities for what concerns you guys would have in order to choose verbiage vague enough that those options would work as perfect catch-alls, but I did ask for "other causes" in the comments, and I'm interested to see the concerns that people are adding, like "EY stopped posting" and "We don't have enough good posters", which aren't about cultural erosion but about a lapse in the stream of good content.
If you have concerns about the future of LessWrong not addressed so far in this discussion, please feel free to add them to the comments, however unrelated they are to the words used in my poll.
↑ comment by gjm · 2012-12-09T01:05:45.345Z · LW(p) · GW(p)
What intermediate options would you suggest?
I have no particular opinion on what exactly should be in the poll (and it's probably too late now to change it without making the results less meaningful than they'd be without the change). But the sort of thing that's conspicuously missing might be expressed thus: "It's possible that a huge influx of new users might make things worse in these ways, or that it's already doing so, and I'm certainly not prepared to state flatly that neither is the case, but I also don't see any grounds for calling it likely or for getting very worried about it at this point."
The poll doesn't have any answers that fit into your category 2. There's "very concerned" and "somewhat concerned", both of which I'd put into category 1, and then there's "not at all".
Check boxes: Oh, OK. I'd thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told "Polls must have at least two choices". If anyone with the power to change the code is reading this, I'd like to suggest that removing this check would both simplify the code and make the system more useful. An obvious alternative would be to add checkbox polls, but that seems like it would be more work.
[EDITED to add: Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn't by me.]
[EDITED again to add: I see I got downvoted too. I'd be grateful if someone who thinks this comment is unhelpful could explain why; even after rereading it, it still looks OK to me.]
Replies from: Epiphany, Nornagest↑ comment by Epiphany · 2012-12-09T01:29:24.533Z · LW(p) · GW(p)
it's probably too late now to change it without ...
Yes. I asked because my mind drew a blank on intermediate options between some problem and none. I interpreted some problem as being intermediate between problem and no problem.
"It's possible that a huge influx of new users might make things worse in these ways, or that it's already doing so, and I'm certainly not prepared to state flatly that neither is the case, but I also don't see any grounds for calling it likely or for getting very worried about it at this point."
Ok, so your suggested option would be (to make sure I understand) something like "I'm not convinced either way that there's a problem or that there's no problem."
Maybe what you wanted was more of a "What probability of a problem is there?" not "Is there a problem or not, is it severe or mild?"
Don't know how I would have combined probability, severity and urgency into the same question, but that would have been cool.
I'd thought there was a workaround by making a series of single-option multiple-choice polls
I considered that (before knowing about the two-choice requirement), but in addition to the other two concerns, it would have made the poll really long and repetitive. I was trying to be as concise as possible: my instinct is to be verbose, but I realize I'm doing a meta thread, and verbosity isn't really appreciated on meta threads.
Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn't by me.
Oh, thank you. (:
↑ comment by Nornagest · 2012-12-09T01:27:42.292Z · LW(p) · GW(p)
Oh, OK. I'd thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told "Polls must have at least two choices".
It sounds like you could still work around it by making several yes/no agreement polls, although this would be clunky enough that I'd only recommend it for small question sets.
↑ comment by Alicorn · 2012-12-09T00:18:33.145Z · LW(p) · GW(p)
It's the Center for Applied Rationality, not Modern Rationality.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T00:27:24.672Z · LW(p) · GW(p)
No, actually, there is a "Center for Modern Rationality" which Eliezer started this year:
http://lesswrong.com/lw/bpi/center_for_modern_rationality_currently_hiring/
Here is where they selected the name:
http://lesswrong.com/lw/9lx/help_name_suggestions_needed_for_rationalityinst/5wb8
The reason I selected it for the poll is that they are talking about creating online training materials. It would be more effective to send someone from a website to something online than to send them somewhere IRL, as only half of us are in the same country.
Replies from: Alicorn↑ comment by Alicorn · 2012-12-09T00:32:15.917Z · LW(p) · GW(p)
No. You're wrong. They changed it, which you would know if you clicked my link.
Replies from: Luke_A_Somers, Epiphany↑ comment by Luke_A_Somers · 2012-12-10T13:45:39.924Z · LW(p) · GW(p)
I don't see how clicking the link you posted would actually have demonstrated that she was wrong.
Replies from: Alicorn↑ comment by Epiphany · 2012-12-09T00:56:44.850Z · LW(p) · GW(p)
I thought there were two centers for rationality, one being the "Center for Modern Rationality" and the other being the "Center for Applied Rationality". Adding a link to one of them didn't rule out the possibility of there being a second one.
Replies from: Alicorn↑ comment by Alicorn · 2012-12-09T01:23:22.238Z · LW(p) · GW(p)
So, you assigned a higher probability to there being two organizations from the same people on the same subject at around the same time with extremely similar names and my correction being mistaken in spite of my immersion in the community in real life... than to you having out-of-date information about the organization's name?
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T01:44:24.712Z · LW(p) · GW(p)
The possibility that the organization had changed its name did not occur to me. I wish you had just said "It changed its name."
As for why I did not assume you knew better than I did: the fact that the article was right there talking about the "Center for Modern Rationality" contradicted your information.
I have never met an infallible person, so in the event that I have information that contradicts yours, I will probably think that you're wrong.
It's nice when all the possible reasons why my information contradicts someone else's occur to me, so that I can do something like search for whether the name of an organization was changed, but that doesn't always happen.
If you knew that it used to be called "Center for Modern Rationality" and changed its name to "Center for Applied Rationality", why did you not say "It changed its name"?
I've noticed a pattern with you: Your responses are often missing some contextual information such that I respond in a way that contradicts you. I think you would find me less frustrating if you provided more context.
Replies from: katydee, Alicorn↑ comment by katydee · 2012-12-09T07:34:06.514Z · LW(p) · GW(p)
I think you would find me less frustrating if you provided more context.
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct than your own thoughts and updated accordingly.
Replies from: Vaniver, Epiphany, Nornagest↑ comment by Vaniver · 2012-12-09T20:27:35.012Z · LW(p) · GW(p)
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct than your own thoughts and updated accordingly.
Established users can be wrong about many things, including domain-specific concepts or facts.
A more general heuristic that I do endorse, from Cromwell:
I beseech you, in the bowels of Christ, think it possible you may be mistaken.
↑ comment by Epiphany · 2012-12-09T22:39:03.649Z · LW(p) · GW(p)
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct
Agreed. That's easier. However, sometimes the easier way is not the correct way.
In a world where the authoritative "facts" can be wrong more often than they're right, where scientists often take a roughly superstitious approach to science, and where the educational system isn't even optimized for the purpose of educating, what reason do I have to believe that any authority figure, expert, or established user is more likely to be correct?
I wish I could trust others' information. I have wished that my entire life. It is frequently exhausting and damn hard to question this much of what people say. But I want to be correct, not merely pleasant, and that's life.
Eliezer intended for us to question authority. I'd have done it anyway because I started doing that ages ago. But he said in no uncertain terms that this is what he wants:
In Two More Things to Unlearn from School he warns his readers that "It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism."
In Cached Thoughts he tells you to question what HE says. "Now that you've read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you'll think, "Cached thoughts." My belief is now there in your mind, waiting to complete the pattern. But is it true? Don't let your mind complete the pattern! Think!"
Perhaps there is a way to be more pleasant while still questioning everything. If you can think of something, I will consider it.
Replies from: katydee, saturn↑ comment by katydee · 2012-12-10T03:52:52.405Z · LW(p) · GW(p)
I'm not saying that a hypothetical vague "you" shouldn't question things. I'm saying that you specifically, User: Epiphany, seem to not be very well-calibrated in this respect and should update towards questioning things less until you have a better feel for LessWrong discussion norms and epistemic standards.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-10T20:21:44.479Z · LW(p) · GW(p)
I'm not saying that a hypothetical vague "you" shouldn't question things.
Neither was I:
what reason do I have to believe that any authority figure or expert or established user is more likely to be correct?
I'm saying that you specifically, User: Epiphany, seem to not be very well-calibrated in this respect and should update towards questioning things less until you have a better feel for LessWrong discussion norms and epistemic standards.
So, trust you guys more while I'm still trying to figure out how much to trust you? Not going to happen, sorry.
Replies from: katydee↑ comment by katydee · 2012-12-10T21:14:15.027Z · LW(p) · GW(p)
So, trust you guys more while I'm still trying to figure out how much to trust you? Not going to happen, sorry.
So you're trying to figure out how much to trust "us," but you're only willing to update in the negative direction?
Replies from: Epiphany↑ comment by Epiphany · 2012-12-11T03:03:44.937Z · LW(p) · GW(p)
Perhaps the perception you're having is caused by the fact that you did not know how cynical I was when I started. My trust has increased quite a bit. If I appear not to trust Alicorn very much, this is because I've seen what appears to be an unusually high number of mistakes. I realize that this may be due to a biased sample (I haven't read thousands of Alicorn's posts, maybe a dozen or so). But I'm not going to update on information I don't have, and I don't see it as a good use of time to go reading lots and lots of posts by Alicorn and whoever else trying to figure out how much to trust them. I will have a realistic idea of her eventually.
↑ comment by saturn · 2012-12-10T06:43:24.794Z · LW(p) · GW(p)
I wish I could trust others' information.
You might think about the reasons people have for saying the things they say. Why do people make false statements? The most common reasons probably fall under intentional deception ("lying"), indifference toward telling the truth ("bullshitting"), having been deceived by another, motivated cognition, confabulation, or mistake. As you've noticed, scientists and educators can face situations where complete integrity and honesty comes into conflict with their own career objectives, but there's no apparent incentive for anyone to distort the truth about the name of the Center for Applied Rationality. There's also no apparent motivation for Alicorn to bullshit or confabulate; if she isn't quite sure she remembers the name, she doesn't have anything to lose by simply moving on without commenting, nor does she have much to gain by getting away with posting the wrong name. That leaves the possibility that she has the wrong name by an unintended mistake. But different people's chances of making a mistake are not necessarily equal. By being more directly involved with the organization, Alicorn has had many more opportunities to be corrected about the name than you have. That makes it much more likely that you are the one making the mistake, as turned out to be the case.
Perhaps there is a way to be more pleasant while still questioning everything. If you can think of something, I will consider it.
You could phrase your questions as questions rather than statements. You could also take extra care to confirm your facts before you preface a statement with "no, actually".
Replies from: Epiphany↑ comment by Epiphany · 2012-12-11T03:34:15.379Z · LW(p) · GW(p)
there's no apparent incentive for anyone to distort the truth about the name of the Center for Applied Rationality. There's also no apparent motivation for Alicorn to bullshit or confabulate
I know. But it's possible for her to be unaware of the existence of CFMR, had there been two orgs. If you read the entire disagreement, you'll notice that what it came down to is that it did not occur to me that CFMR might have changed its name. Therefore, denial that it existed appeared to be in direct conflict with the evidence, the evidence being two articles where people were creating CFMR.
Alicorn has had many more opportunities to be corrected about the name than you have.
I was surprised she didn't seem to know about it, but then again, if she doesn't read every single post on here, it's possible she didn't know. I don't know how much she knows, or who she specifically talks to, or how often she talks to them, or whether she might have been out sick for a month or what might have happened. For something that small, I am not going to go to great lengths to analyze her every potential motive for being correct or incorrect. My assessment was simple for that reason.
As for wanting to trust people more, I've been thinking about ways to go about that, but I doubt I will do it by trying to rule out every possible reason for them to have been wrong. That's a long list, and it depends on my imperfect ability to think of all the reasons that a person might be wrong. I'm more likely to go about it from a totally different angle: How many scientists are there? What things do most of them agree on? How many of those have been proven false? Okay, that's an estimated X percent chance that what most scientists believe is actually true, based on a sample set of (whatever) size.
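For concreteness, a minimal sketch of that kind of base-rate estimate might look like this; the numbers and names below are made-up placeholders, purely for illustration, not real data:

```python
# Hypothetical sketch of the base-rate estimate described above.
# Every number here is a made-up placeholder, not real data.

consensus_claims_sampled = 50   # assumed: claims most scientists agreed on
later_shown_false = 10          # assumed: of those, how many were later overturned

p_consensus_correct = 1 - later_shown_false / consensus_claims_sampled
print("Estimated chance a consensus claim is true: {:.0%}".format(p_consensus_correct))
# With these placeholder numbers, this prints: Estimated chance a consensus claim is true: 80%
```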
You could phrase your questions as questions rather than statements.
This is a good suggestion, and I normally do.
You could also take extra care to confirm your facts before you preface a statement with "no, actually".
I did confirm my fact with two articles. That is why it became a "no actually" instead of a question.
Replies from: Alicorn↑ comment by Nornagest · 2012-12-09T08:52:11.085Z · LW(p) · GW(p)
This seems like a risky heuristic to apply generally, given the volume of domain-specific contrarianism floating around here. My own version is more along the lines of "trust, but verify".
Replies from: drethelin↑ comment by drethelin · 2012-12-09T09:26:29.737Z · LW(p) · GW(p)
It's a specific problem Epiphany has that she assumes her own internal monologue of what's true is far more reliable than any evidence or statements to the contrary.
Replies from: Decius, Epiphany↑ comment by Decius · 2012-12-09T21:50:27.878Z · LW(p) · GW(p)
That's not a problem unless it's false. Almost all evidence and statements to the contrary are less reliable than my belief regarding what's true.
That's a very expensive state to maintain, since I got that way by altering my internal description of what's true to match the most reliable evidence that I can find...
Replies from: Epiphany↑ comment by Epiphany · 2012-12-09T22:59:23.548Z · LW(p) · GW(p)
I don't think I am right about everything, but I relate to this. I am not perfectly rational. But I decided to tear apart and challenge all my cached thoughts around half my life ago (well over a decade before Eliezer wrote about cached thoughts of course, but it's a convenient term for me now) and ever since then, I have not been able to see authorities the same way...
I think it would be ideal if we all strove to do enough hard work, altering our internal descriptions of what's true to match the most reliable evidence on so many different topics, that we could see fatal flaws in the authoritative views more often than not.
Considering the implications of the first three links in this post, that accomplishment may not be an unrealistic one. Sadly, I don't say this because I think we're all so incredibly smart, but because the world is so incredibly broken.
Did you start questioning early as well?
Replies from: Decius↑ comment by Decius · 2012-12-10T00:24:19.648Z · LW(p) · GW(p)
I've never accepted that belief in authority on any subject could pay rent. The biggest advantage experts have, for me, is when they can quickly point me to the evidence I can evaluate fastest to arrive at the correct conclusion; rather than trust Aristotle that heavier items fall faster, I can duplicate any number of experiments showing that any two objects with equal specific air resistance fall at exactly the same speed.
Downside: It is more expensive to evaluate the merits of the evidence than the credentials of the expert.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-10T00:35:03.631Z · LW(p) · GW(p)
The biggest advantage experts have to me is when they can quickly point me to the evidence that I can evaluate fastest to arrive at the correct conclusion
I relate to this.
Downside: It is more expensive to evaluate the merits of the evidence than the credentials of the expert.
There simply isn't enough time to evaluate everything. When it's really important, I'll go to a significant amount of trouble. If not, I use heuristics like "how likely is it that something as easy to test as this made its way into the school curriculum and is also wrong?" If I have too little time or the subject is of little importance, I may decide the authoritative opinion is more likely to be right than my not-at-all-thought-out opinion, but that's not the same as trusting authority. That's more like slapping duct tape on, to me.
Replies from: Decius↑ comment by Decius · 2012-12-10T02:12:23.442Z · LW(p) · GW(p)
Slightly wrong heuristic. Go with "What proportion of things in the curriculum that are this easy to test have been wrong when tested?" The answer is disturbing. Things like 'Glass is a slow-flowing liquid'.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-10T04:11:48.213Z · LW(p) · GW(p)
Actually 'Glass is a slow-flowing liquid' would take decades to test, wouldn't it? I think you took a different meaning of "easy to test". I meant something along the lines of "A thing that just about anyone can do in a matter of minutes without spending much money."
Unless you can think of a fast way to test the glass is a liquid theory?
Replies from: None↑ comment by [deleted] · 2012-12-10T06:16:32.365Z · LW(p) · GW(p)
Unless you can think of a fast way to test the glass is a liquid theory?
Look at old windows that have been in for decades. Do they pile up on the bottom like caramel? No. Myth busted.
More interesting than simple refutation, though, is "taboo liquid". Go look at non-Newtonian fluids and see all the cool things that matter can do. For example, ice and rock flow like a liquid on a large enough scale (glaciers, planetary mantle convection).
Replies from: Nornagest, Epiphany, Decius↑ comment by Nornagest · 2012-12-10T10:24:30.001Z · LW(p) · GW(p)
Look at old windows that have been in for decades. Do they pile up on the bottom like caramel? No. Myth busted.
I actually believed that myth for ages because the panes in my childhood house were thicker on the bottom than on the top, causing visible distortion. Turns out that making perfectly flat sheets of glass was difficult at the time it was built, and that for whatever reason they'd been put in thick side down.
↑ comment by Epiphany · 2012-12-10T20:25:24.630Z · LW(p) · GW(p)
Oh. Yeah. Good point. Obviously I wasn't thinking too hard about this. Thank you.
Wait, so they put the glass-is-a-liquid theory into the school curriculum and it was this easy to test?
I don't recall that in my own school curriculum. I'll be thinking about whether to reduce my trust in my own schooling experience. It can't go much further down after reading John Taylor Gatto, but if the remaining trust that is there is unfounded, I might as well kill it, too.
↑ comment by Epiphany · 2012-12-09T22:26:02.020Z · LW(p) · GW(p)
Give three examples.
Replies from: drethelin↑ comment by drethelin · 2012-12-10T08:14:54.708Z · LW(p) · GW(p)
http://lesswrong.com/lw/efv/elitism_isnt_necessary_for_refining_rationality/
This is the first one that comes to mind. I might post others as I find them, but to be honest I'm too lazy to go through your logs or my IRC logs to find the examples.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-16T03:31:18.745Z · LW(p) · GW(p)
That is an example of me not being aware of how others use a word, not an example of me believing I am correct when others disagree with me and then being wrong. In fact, I think that LessWrong and I agree for the most part on that subject. We're just using the word elitism differently.
Do you have even a single example of me continuing to think I am correct about something where a matter of truth (not wording) is concerned even after compelling evidence to the contrary is presented?
↑ comment by Armok_GoB · 2012-12-13T00:39:02.474Z · LW(p) · GW(p)
Proposed solution: add lots of subdivisions with different requirements.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-13T22:25:53.480Z · LW(p) · GW(p)
I had a couple of ideas like this myself and I chose to cull them before doing this poll for these reasons:
The problem with splitting the discussions is that we'd end up with people having the same discussions in multiple different places. No single post would have all the information, so you'd have to read several times as much if you wanted to get it all. That would reduce the efficiency of the LessWrong discussions to a point where most would probably find it maddening and unacceptable.
We could demand that users stick to a limited number of subjects within their subdivision, but then discussion would be so limited that user experience would not resemble participation in a subculture. Or, more likely, it just wouldn't be enforced thoroughly enough to stop people from talking about what they want, and the dreaded plethora of duplicated discussions would still result.
The best alternative to this as far as I'm aware is to send the users who are disruptively bad at rational thinking skills to CFAR training.
Replies from: wedrifid, Armok_GoB↑ comment by wedrifid · 2012-12-13T22:49:09.902Z · LW(p) · GW(p)
The best alternative to this as far as I'm aware is to send the users who are disruptively bad at rational thinking skills to CFAR training.
That seems like an inefficient use of CFAR training (and so an inefficient use of whatever resources would have to be spent to pay CFAR for such training). I'd prefer to just cull those disruptively bad at rational thinking entirely. Some people just cannot be saved (in a way that gives an acceptable cost/benefit ratio). I'd prefer to save whatever attention or resources I was willing to allocate to people-improvement for those who already show clear signs of having thinking potential.
Replies from: Armok_GoB, Epiphany↑ comment by Armok_GoB · 2012-12-14T02:13:09.952Z · LW(p) · GW(p)
I am among those absolutely hardest to save, having an actual mental illness. Yet this place is the only thing saving me from utter oblivion and madness. Here is where I have met my only real friends ever. Here is the only thing that gives me any sense of meaning, reason to survive, or glimmer of hope. I care fanatically about it.
If many of the rules that have been proposed, or for that matter even the amount of degradation that has ALREADY occurred, had been the case a few years ago, I wouldn't exist; this body would either be rotting in the ground or literally occupied by an inhuman monster bent on the destruction of all living things.
Replies from: Epiphany↑ comment by Epiphany · 2012-12-14T06:59:04.142Z · LW(p) · GW(p)
I'm fascinated. (I'm a psychology enthusiast who refuses to get a psychology degree because I find many of the flaws of the psychology industry unacceptable.) I am very interested in knowing how LessWrong has been saving you from utter oblivion and madness. Would you mind explaining it? Would it be alright with you if I asked which mental illness?
Would you please also describe the degradation that has occurred at LW?
Replies from: Armok_GoB↑ comment by Armok_GoB · 2012-12-14T22:39:03.558Z · LW(p) · GW(p)
I'd rather not talk about it in detail, but it boils down to LW in general promoting sanity and connecting smart people. That extra sanity can be used to cancel out insanity, not just to create super-sanes.
Degradation: Lowered frequency of insightful and useful content, increased frequency of low quality content.
↑ comment by Epiphany · 2012-12-14T06:55:29.673Z · LW(p) · GW(p)
I have to admit I am not sure whether to be more persuaded by you or Armok. I suppose what it would come down to is a cost/benefit calculation that takes into account the destruction avoided by culling the worst as well as the benefit produced by the best. Brilliant people can have quite an impact indeed, but they are rare, and it is easier to destroy than to create, so it is not readily apparent to me which group it would be more beneficial to focus on, or, if both, in what proportion.
Practically speaking, though, CFAR has stated that they have plans to make web apps to help with rationality training and training materials for high schoolers. It seems to me that they have an interest in targeting the mainstream, not just the best thinkers.
I'm glad that someone is doing this, but I also have to wonder if that will mean more forum referrals to LW from the mainstream...
↑ comment by Armok_GoB · 2012-12-14T02:06:31.833Z · LW(p) · GW(p)
Ctrl+C, Ctrl+V, problem solved.
Replies from: Epiphany, Eugine_Nier↑ comment by Epiphany · 2012-12-14T07:24:29.190Z · LW(p) · GW(p)
If you're suggesting that duplicated discussions can be solved with paste, then you are also suggesting that we not make separate areas.
Think about it.
I suppose you might be suggesting that we copy the OP and not the comments. Often the comments have more content than the OP, and often that content is useful, informative and relevant. So, in the comments we'd then have duplicated information that varied between the two OP copies.
So, we could copy the comments over to the other area... but then they're not separate...
Not seeing how this is a solution. If you have some different clever way to apply Ctrl+C, Ctrl+V then please let me know.
Replies from: Armok_GoB↑ comment by Eugine_Nier · 2012-12-15T21:03:33.647Z · LW(p) · GW(p)
This creates a trivial inconvenience.
Replies from: Armok_GoB↑ comment by beoShaffer · 2012-12-09T01:47:33.329Z · LW(p) · GW(p)
...some other cause?
I assign non-negligible probability to some cause that I am not specifically aware of (sorta, but not exactly, an outside context problem) having a negative impact on LW's culture.
comment by Oligopsony · 2012-12-09T00:28:36.602Z · LW(p) · GW(p)
lol