Should you try to do good work on LW?

post by cousin_it · 2012-07-05T12:36:41.277Z · LW · GW · Legacy · 32 comments

I used to advocate trying to do good work on LW. Now I'm not sure; let me explain why.

It's certainly true that good work stays valuable no matter where you're doing it. Unfortunately, the standards of "good work" are largely defined by where you're doing it. If you're in academia, your work is good or bad by scientific standards. If you're on LW, your work is good or bad compared to other LW posts. Internalizing that standard may harm you if you're capable of more.

When you come to a place like Project Euler and solve some problems, or come to OpenStreetMap and upload some GPS tracks, or come to academia and publish a paper, that makes you a participant and you know exactly where you stand, relative to others. But LW is not a task-focused community and is unlikely to ever become one. LW evolved from the basic activity "let's comment on something Eliezer wrote". We inherited our standard of quality from that. As a result, when someone posts their work here, that doesn't necessarily help them improve.

For example, Yvain is a great contributor to LW and has the potential to be a star writer, but it seems to me that writing on LW doesn't test his limits, compared to trying new audiences. Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that). Of course there have been many examples of seemingly good work posted to LW. Homestuck fandom also has a lot of nice-looking art, but it doesn't get fandoms of its own.

In conclusion, if you want to do important work, cross-post it if you must, but don't do it for LW exclusively. Big fish in a small pond always looks kinda sad.

32 comments

Comments sorted by top scores.

comment by RobertLumley · 2012-07-05T14:07:16.963Z · LW(p) · GW(p)

It seems important to note, however, that doing good work on LW is superior to not doing good work at all.

Edit:

In conclusion, if you want to do important work, cross-post it if you must, but don't do it for LW exclusively. Big fish in a small pond always looks kinda sad.

Most places one would cross-post to besides LW would be even smaller ponds. So I'm not sure this supports your point here.

comment by gwern · 2012-07-05T16:06:35.623Z · LW(p) · GW(p)

Yvain is a great contributor to LW and has the potential to be a star writer, but it seems to me that writing on LW doesn't test his limits, compared to trying new audiences. Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that).

But would either of you have worked on them at all outside LW's confines? Ceteris paribus, it'd be better to work in as high-status and rigorous a community as possible, but ceteris is never paribus.

Replies from: cousin_it
comment by cousin_it · 2012-07-05T16:56:50.667Z · LW(p) · GW(p)

Good point, I agree with RobertLumley and you that doing good work on LW is better than doing nothing. But if you're already doing good work by LW standards (and you are!), it probably makes sense to move beyond that.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-07-07T17:28:38.542Z · LW(p) · GW(p)

Thank you for the compliments. I don't know much about mathematics, but if you've really proven a new insight into Gödel's theorem, that sounds very impressive and you should certainly try to get it published formally. But I'm not sure what you're suggesting I should do. I mean, I've been talking to Luke about maybe helping with some SIAI literature, but that's probably in the same category as here.

My career plan is medicine. I blog as a hobby. One day I would like to write about medicine, but I need a job and some experience before I can do that credibly. If you or anyone else has anything specific and interesting/important you think I should be (or even could be) doing, please let me know. Bonus points if it involves getting paid.

comment by Viliam_Bur · 2012-07-07T15:09:25.008Z · LW(p) · GW(p)

This is too abstract for me. What exactly does "good work" mean? Let's say that I wrote some article, or just plan to write one, and I ask myself: is LW the right place to publish it?

Well, it depends on the topic, on the style of writing, and on what kind of audience/reaction I want. There are more things to consider than just "good" or "bad". Is it Nobel Prize-winning material? Then I send it to a prestigious scientific journal. Is it just a new idea I want feedback on? Then I guess whether I'd get better feedback on LW or somewhere else; again, the choices depend on the topic. Something personal? I put it on my blog. Etc. Use your judgement.

It is also possible to publish in a scientific journal and post a hyperlink on LW, or to post a draft on LW, collect feedback, improve the article, and send it somewhere else.

Yes, if the work is optimized for LW, it is not optimized for somewhere else. Different audiences require different styles. Maybe LW could have a subsection where articles must be written in scientific lingo (preferably in a two-column, single-spaced format), or maybe we are OK with drafts. I prefer legibility, journals prefer brevity, other people may prefer something else.

comment by David_Gerard · 2012-07-05T20:33:21.072Z · LW(p) · GW(p)

Yvain's stuff is highly linkable elsewhere. His article is the go-to link for typical mind fallacy, for example.

Replies from: private_messaging, private_messaging
comment by private_messaging · 2012-07-06T15:20:53.837Z · LW(p) · GW(p)

It starts awesome, with the imagination stuff, but then goes downhill addressing the local PUA crap. The comments: some are very insightful on the imagination and such, but the top ones are about the PUA crap. I actually recall I wanted to link it a few years back, before I started posting much, but searched for some other link because I did not want the PUA crap.

Honestly, it would have been a lot better if Yvain had started his own blog and built up a reader base over time. But few people have, I dunno, the arrogance to do that (I recall he wrote that he underestimates himself; that may be why), and so we are stuck primarily with the people who overestimate themselves blogging, starting communities, etc.

Replies from: David_Gerard
comment by David_Gerard · 2012-07-06T16:17:15.784Z · LW(p) · GW(p)

He has one! http://squid314.livejournal.com/

Replies from: private_messaging
comment by private_messaging · 2012-07-06T17:05:11.859Z · LW(p) · GW(p)

Well then he should write for his blog, and sometimes have it cross-posted here, rather than write for the LW audience. Do you seriously need to add extra misogynistic nonsense, and discuss as evidence what the borderline-sociopathic PUA community thinks about women, in an otherwise good post referencing a highly interesting study by Galton? Do you really need to go this far to please online white nerdy sexually frustrated male trash?

Replies from: Yvain, rhollerith_dot_com
comment by Scott Alexander (Yvain) · 2012-07-07T17:20:00.978Z · LW(p) · GW(p)

I've been avoiding this thread so far because I'm kind of uncomfortable with compliments, but luckily it's descended into insults and I'm pretty okay with those.

Yes, I have a blog. I write blog posts much more often than I write Less Wrong posts (although they're much lower quality and scatterbrained in all senses of the word). Sometimes I want to say something about rationality, and since I happen to know of this site that's totally all about rationality with a readership hundreds of times greater than my blog, I post it here instead of (or in addition to) my blog. I promise you I didn't just add the PUA reference for "a Less Wrong audience"; in fact, knowing what I know all these months later, I would have specifically avoided even mentioning it for exactly the reason that's happening right now.

I have written about 150 posts for Less Wrong, and about 1200 in my blog. Of those, I can think of three that tangentially reference pick-up artistry as an example of something, and zero that are entirely about PUA or which express explicit support for it. According to my blog tagging system, three posts is slightly more than "cartography" or "fishmen" (two posts each), but still well below "demons" (fourteen posts). I don't think it's unreasonable to mention a movement with some really interesting psychology behind it about 50% more than I mention hypothetical half-man half-fish entities, or a quarter as often as I mention malevolent disembodied spirits.

More importantly, now that I'm talking to you...why is your username "private_messaging"?

Replies from: private_messaging, private_messaging
comment by private_messaging · 2012-07-07T18:36:06.371Z · LW(p) · GW(p)

Originally made this account to message some people privately.

Can you explain why the first thing to update, after Galton's amazing study into imagination, was your opinion on women in general, as determined by PUAs' opinion of women vs. women's opinion of women (the balance of conflicting opinions)? Also, btw, it is in itself a great example of biased cognition: you run across some fact and you update selectively; the fact should lower your weight on anyone's evaluation of anyone, but instead it just lowers the weight on women's evaluation of women.

Also, while I am sure that you did not consciously add it just for the LW audience, if you were writing for a more general audience it seems reasonable to assume - given that you are generally a good writer - that you would not have included this sort of 'example' of applying Galton's findings.

Replies from: Yvain, lavalamp
comment by Scott Alexander (Yvain) · 2012-07-08T16:21:12.469Z · LW(p) · GW(p)

Replied in accordance with your username to prevent this from becoming an Endless Back-and-Forth Internet Argument Thread.

comment by lavalamp · 2012-07-07T19:14:17.094Z · LW(p) · GW(p)

Quote from the article in question:

And lest I sound chauvinistic, the same is certainly true of men. I hear a lot of bad things said about men (especially with reference to what they want romantically) that I wouldn't dream of applying to myself, my close friends, or to any man I know. But they're so common and so well-supported that I have excellent reason to believe they're true.

Does that really sound like someone who is doing a biased, partial update?

Replies from: private_messaging
comment by private_messaging · 2012-07-08T01:43:37.724Z · LW(p) · GW(p)

The PUAs' opinion of women was, nonetheless, not discounted for the typical mind fallacy. (Maybe the idea is that the typical mind fallacy doesn't work across genders or something, which would be a rather interesting hypothesis, but, alas, an unsupported one.)

comment by private_messaging · 2012-07-08T01:54:03.656Z · LW(p) · GW(p)

Yes, I have a blog. I write blog posts much more often than I write Less Wrong posts (although they're much lower quality and scatterbrained in all senses of the word).

Write higher-quality posts, or make two sections: one good, one random. Write for a general audience, i.e. no awful LW jargon and no LW terminology misuse ('rational' actually means something, and so does 'Bayesian'). Cross-post here. Come on, you said before, in "calibrate your self-assessments", that you have a relatively low opinion of yourself.

Sometimes I want to say something about rationality, and since I happen to know of this site that's totally all about rationality with a readership hundreds of times greater than my blog,

It's Eliezer's former blog, hurr, durr; it's people who didn't cringe too hard at stuff like http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ . How did he get those readers? He split off from Hanson. I do have a high opinion of you overall. Much higher than I have of EY.

comment by RHollerith (rhollerith_dot_com) · 2012-07-06T18:57:32.635Z · LW(p) · GW(p)

Downvoted.

I hope I would downvote any comment containing the judgment-words "nonsense", "sociopathic" and "trash" (referring to a subset of the LW readership) regardless of the position being advocated. The book Nonviolent Communication advises making observations and expressing feelings, but avoiding rendering judgments. A "judgment" can be defined as a phrase or statement that can be expected to diminish the status or moral standing of a person or group.

Parenthetically, it has been proposed that one of the ways online forums unravel over time is that a few people who like making strong judgments show up and get into long conversations with each other, which tends to discourage participants for whom the strong judgments distract from their reasons for participating.

Replies from: private_messaging
comment by private_messaging · 2012-07-06T19:38:44.383Z · LW(p) · GW(p)

The judgments are very often instrumentally useful. Also, I do happen to see this subset of internet users every bit as negatively as I said, and so do many other people who usually don't say anything about it; and insofar as I do not think that seeing them more positively would be instrumentally useful, it would not be quite honest to see them this way and never say so.

edit: also, a backgrounder. I am a game developer. We see various misogynistic crap every damn day (if you look into any online communication system). Also, as for the PUAs: they see people (women) as objects, seek sexual promiscuity and shallow relationships, are deliberately manipulative, etc., scoring more than enough points on a sociopathy-traits checklist for a diagnosis (really only missing the positive points like charisma). Taking what they think about women as likely true is clearly a very poor example of evidence for a post about the typical mind fallacy.

comment by private_messaging · 2012-07-07T06:17:51.754Z · LW(p) · GW(p)

In the interest of the discussion, here is the article in question.

It actually is a perfect example of how LW is interested in science:

There is the fact that some people have no mental imagery, but live totally normal lives. That's amazing! They're more different from the rest of us than you usually imagine sci-fi aliens to be! And yet there is no obvious outward difference. It is awesome. How does that even work? Do they have mental imagery somewhere inside but no reflection on it? Etc., etc.

And the first thing that was done with this awesome fact here was to 'update' in the direction of trusting the PUA community's opinion on women more than women themselves, and that was done by the author. That's not even a sufficiently complete update, because the PUA community - especially the manipulative misogynists with zero morals and the ideal of becoming a clinical sociopath as per the checklist, along with their bragging that has selection bias and an unscientific approach to data collection written all over it - is itself prone to the typical mind fallacy (as well as a bunch of other fallacies) when it sees women as beings every bit as morally reprehensible as its members are.

This, cousin_it, is a case example of why you shouldn't be writing good work for LW. Some time back you were on the verge of something cool - perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour, and you explore for potential confirmations of it.

Replies from: komponisto, paulfchristiano, fubarobfusco
comment by komponisto · 2012-07-07T18:52:29.227Z · LW(p) · GW(p)

the manipulative misogynists with zero morals and the ideal of becoming a clinical sociopath as per the checklist, along with... an unscientific approach to data collection

A classic Arson, Murder, and Jaywalking right there.

Replies from: Rhwawn
comment by Rhwawn · 2012-07-07T23:59:20.508Z · LW(p) · GW(p)

I don't know, given the harm bad data collection can do, I'm not sure being a clinical sociopath is much worse.

Replies from: private_messaging
comment by private_messaging · 2012-07-08T01:32:11.073Z · LW(p) · GW(p)

Whatever data on physiology the Nazis collected correctly, we are relying on today. Even when very bad guys collect data properly, the data is usable. When it's online bragging by people fascinated with 'negs'... not so much. The required condition is bad data collection; the guys trying to be sociopaths does not, by itself, suffice.

comment by paulfchristiano · 2012-07-07T19:30:12.608Z · LW(p) · GW(p)

Some time back you were on the verge of something cool - perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour, and you explore for potential confirmations of it.

You seem to be saying: "you were close to realizing this problem was unsolvable, but instead you decided to spend your time exploring possible solutions."

Generally, you seem to be continually frustrated about something to do with wireheading, but you've never really made your position clear, and I can't tell where it is coming from. Yes, it is easy to build systems which tear themselves to pieces, literally or internally. Do you have any more substantive observation? We see a path to building systems which have values over the real world. It is full of difficulties, but the wireheading problems seem understood and approachable / resolved. Can you clarify what you are talking about, in the context of UDT?

Replies from: private_messaging
comment by private_messaging · 2012-07-08T02:08:10.351Z · LW(p) · GW(p)

We see a path to building systems which have values over the real world.

The path he sees has values over an internal model, but the internal model is perfect AND it is faster than the real world, which stretches it a fair bit if you ask me. It's not really a path; he's simply using "a sufficiently advanced model is indistinguishable from the real thing". And we still can't define what paperclips are if we don't know the exact model that will be used, as the definition is only meaningful in the context of a model.

The objection I have is that (a) it is unnecessary to define values over the real world (the alternatives work fine for, e.g., finding imaginary cures for imaginary diseases which we make match real diseases), (b) it is very difficult or impossible to define values over the real world, and (c) values over the real world are necessary for the doomsday scenario. If this can be narrowed down, then there's precisely the bit of AI architecture that has to be avoided.

We humans are messy creatures. It is very plausible (in light of the potential irreducibility of 'values over the real world') that we value internal states of the model, and we also receive negative reinforcement for model-world inconsistencies (when the model's prediction of the senses does not match the senses), resulting in a learned preference not to lose correspondence between model and world, in place of a straightforward "I value real paperclips, therefore I value having a good model of the world", which looks suspiciously simple and poorly matches observation (no matter how much you tell yourself you value real paperclips, you may procrastinate).

edit: and if I don't make my position clear, it is because I am opposed to fuzzy, ill-defined woo where the distinction between models and worlds is poorly drawn and the intelligence is a monolithic blob. It's hard to state an objection to an ill-defined idea which always shoots off into some anthropomorphic idea (e.g. wireheading gets replaced with the real-world goal of having a physical wire in a physical head that is to be kept alive with the wire).

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-07-09T09:37:54.061Z · LW(p) · GW(p)

It is very plausible [...] that we value internal states of the model, and we also receive negative reinforcement for model-world inconsistencies [...], resulting in a learned preference not to lose correspondence between model and world

Generally correct; we learn to value good models, because they are more useful than bad models. We want rewards, therefore we want to have good models, therefore we are interested in the world out there. (For a reductionist, there must be a mechanism explaining why and how we care about the world.)

Technically, sometimes the most correct model is not the most rewarded model. For example, it may be better to believe a lie and be socially rewarded by members of my tribe who share the belief than to have a true belief that gets me killed by them. There may be other situations, not necessarily social, where perfect knowledge is out of reach and a better approximation may lie in the "valley of bad rationality".

it is unnecessary to define values over the real world (the alternatives work fine for, e.g., finding imaginary cures for imaginary diseases which we make match real diseases) [...] there's precisely the bit of AI architecture that has to be avoided.

In other words, make an AI that only cares about what is inside the box, and it will not try to get out of the box.

That assumes that you will feed the AI all the necessary data and verify that the data is correct and complete, because the AI will be just as happy with any kind of data. If you give incorrect information to the AI, the AI will not care, because it has no definition of "incorrect", even in situations where the AI is smarter than you and could have noticed an error that you didn't. In other words, you are responsible for giving the AI the correct model, and the AI will not help you with this, because the AI does not care about the correctness of the model.

Replies from: private_messaging
comment by private_messaging · 2012-07-09T11:20:32.299Z · LW(p) · GW(p)

You put it backwards... making an AI that cares about truly real stuff as its prime drive is likely impossible, and certainly we don't know how to do that, nor do we need to. edit: i.e. you don't have to sit and work and work and work to find out how to make some positronic mind not care about the real world. You get this by simply omitting some mission-impossible work. Specifying what you want, in some form, is unavoidable.

Regarding verification, you can have the AI search for the code that best predicts the input data, and then, if you are falsifying the data, the best code will include a model of your falsifications.
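
A minimal sketch of that point, under toy assumptions (the candidate "programs" here are just a few hand-written Python functions, and the +5 offset is invented for illustration; a real system would search over actual code):

```python
# Toy illustration: the best-fitting "code" ends up modelling the falsification.
# The true process, the candidate hypotheses, and the +5 offset are all made up.

true_process = lambda x: 2 * x                      # the real world
falsify = lambda y: y + 5                           # the overseer tampers with the data

xs = list(range(10))
observed = [falsify(true_process(x)) for x in xs]   # what the AI actually sees

candidates = {
    "y = 2x":     lambda x: 2 * x,
    "y = 2x + 5": lambda x: 2 * x + 5,
    "y = 3x":     lambda x: 3 * x,
}

def error(f):
    # squared prediction error against the (falsified) observations
    return sum((f(x) - y) ** 2 for x, y in zip(xs, observed))

best = min(candidates, key=lambda name: error(candidates[name]))
print(best)   # "y = 2x + 5" -- the winning hypothesis has absorbed the tampering
```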

comment by fubarobfusco · 2012-07-07T08:05:29.643Z · LW(p) · GW(p)

And the first thing that was done with this awesome fact here was to 'update' in the direction of trusting the PUA community's opinion on women more than women themselves, and that was done by the author. That's not even a sufficiently complete update, because the PUA community - especially the manipulative misogynists with zero morals and the ideal of becoming a clinical sociopath as per the checklist, along with their bragging that has selection bias and an unscientific approach to data collection written all over it - is itself prone to the typical mind fallacy (as well as a bunch of other fallacies) when it sees women as beings every bit as morally reprehensible as its members are.

This is a really good point ...

This, cousin_it, is a case example of why you shouldn't be writing good work for LW.

... which utterly fails to establish the claim that you attempt to use it for.

Replies from: private_messaging
comment by private_messaging · 2012-07-07T09:55:48.250Z · LW(p) · GW(p)

... which utterly fails to establish the claim that you attempt to use it for.

Context, man, context. cousin_it's misgivings are about the low local standards. This article is precisely a good example of such low local standards - and note that I was not picking a strawman here; it was chosen as an example of the best. The article would have been torn to shreds in most other intelligent places (consider the Ars Technica Observatory forum) for the bit I am talking about.

edit: also, on the 'good point': this is how a lot of the rationality here goes: handling partial updates incorrectly. You have a fact that affects literally every opinion one person has of another, and you proceed to update in the direction of confirming your existing opinions and your existing choices of what to trust. LW has an awfully low standard for anything that agrees with local opinions. This also pops up in utility discussions. E.g., certain things (the possibility of a huge world) scale down all utilities in the system, leaving all actions unchanged. But the actual update that happens in agents that do not handle meta-reasoning correctly for a real-time system updates some A before some B, and then suddenly there are enormous differences between utilities. It's just a broken model. Theoretically speaking, A being updated and B being not updated is in some sense more accurate than neither being updated, but everything that depends on the relation of A to B is messed up by the partial update. The algorithms for real-time belief updating are incredibly non-trivial (as are the algorithms for Bayesian probability calculation on graphs in general, given cycles and loops). The theoretical understanding behind the rationalism here is just really, really, really poor.
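
A minimal numerical sketch of the partial-update point, with made-up utilities and a made-up scale factor: rescaling every utility together leaves the chosen action unchanged, while rescaling A before B has been reached flips the preference for no substantive reason.

```python
# Toy illustration of the partial-update problem; all numbers are made up.

utilities = {"A": 10.0, "B": 8.0}    # the agent currently prefers A
scale = 1e-6                         # e.g. a "the world is huge" discount on everything

def best(u):
    return max(u, key=u.get)

# Atomic update: every utility is rescaled together -- the decision is unchanged.
full_update = {k: v * scale for k, v in utilities.items()}
print(best(utilities), best(full_update))    # A A

# Partial update: A has been rescaled but B hasn't been reached yet --
# the preference flips even though nothing substantive distinguishes A from B.
partial = dict(utilities)
partial["A"] *= scale
print(best(partial))                         # B
```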

comment by John_Maxwell (John_Maxwell_IV) · 2012-07-05T18:36:38.688Z · LW(p) · GW(p)

I'm not sure I know of a better place to do philosophy (as Paul Graham defines it) than LW:

let's try to answer the question

Of all the useful things we can say, which are the most general?

...

One drawback of this approach is that it won't produce the sort of writing that gets you tenure.

(PG's example of a useful general idea is that of a controlled experiment.)

What specific new audiences do you think Yvain should try out?

comment by private_messaging · 2012-07-05T16:07:45.805Z · LW(p) · GW(p)

Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that).

Furthermore, non-mathematicians tend to hold work to different standards - focusing, for instance, on how well your verbal description of what is going on (the filler between equations) matches the verbal descriptions in the cited papers, which tends to get irritating in advanced and/or confusing subjects. Additionally, mathematicians would easily find something tailored to non-mathematicians overly verbose yet insufficiently formal, which lowers their expected utility from reading the text. (Instead of a description of what you are actually doing with the equations, designed to help the reader understand your intent, you end up having, in parallel, a handwaved argument with the same conclusion.)

comment by Bruno_Coelho · 2012-07-06T03:06:41.902Z · LW(p) · GW(p)

Some veterans who do good work here could do it elsewhere, if it's valuable enough. In that sense, the initial project was to create people with the strength to intervene effectively in the world. This community already has people at that level.

comment by Manfred · 2012-07-05T23:43:23.885Z · LW(p) · GW(p)

Big fish in a small pond always looks kinda sad.

Well, it's often helpful for the smaller fish - perhaps the fish and pond metaphor is a bit misleading in this way. And, since we're here for many reasons, not just one ("size"), people may be big fish along some dimensions and smaller fish along others.

comment by Vaniver · 2012-09-14T13:23:03.508Z · LW(p) · GW(p)

Homestuck fandom also has a lot of nice-looking art, but it doesn't get fandoms of its own

I've been thinking about this for a while, and while I originally agreed with it, I don't think I do anymore. Prequel has its own fandom now, and I think it has significantly pushed the boundaries of the medium with its recent updates. Similarly, Fallout: Equestria has a fandom of its own; HPMOR has its own fandom; and, though I hate to mention it in the same breath as those, so does 50 Shades of Grey.