On the importance of Less Wrong, or another single conversational locus
post by AnnaSalamon · 2016-11-27T17:13:08.956Z · LW · GW · Legacy · 365 comments
The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.
Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.[1]
To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better. [2]
One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied to.
One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.
We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)
365 comments
Comments sorted by top scores.
comment by Alexandros · 2016-11-27T10:40:52.900Z · LW(p) · GW(p)
Hi Anna,
Please consider a few gremlins that are weighing down LW currently:
Eliezer's ghost -- He set the culture of the place, his posts are central material, he has punctuated its existence with his explosions (and refusal to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) has failed. As far as I know he is still the "owner" of this website and retains ultimate veto on a bunch of stuff. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, or who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.
the no politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW. Oddly enough I recently saw it linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed SSC, it may actually be drawing in people who care about genuine insight into an extremely complex, high-interest space.
the "original content"/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a "community blog". In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.
The codebase: this website carries tons of complexity inherited from the Reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to. (Telescope seems decent these days.)
Brand rust. Lesswrong is now kinda like myspace or yahoo. It used to be cool, but once a brand takes a turn for the worse, it's really hard to turn around. People have painful associations with it (basilisk!). It needs burning of ships, clear focus on the future, and as much support as possible from as many interested parties, but only to the extent that they don't dilute the focus.
In the spirit of the above, I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future (that still suffers from problem #1 AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.
Any serious attempt at revitalising lesswrong.com should focus on defining ownership and a clear plan. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally-accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps the ability to host content if people wish to, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.
I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and support as broad as possible for a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.
↑ comment by AnnaSalamon · 2016-11-27T22:29:20.096Z · LW(p) · GW(p)
Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)
Anyone want to join me in this, or else make a counterproposal?
↑ comment by SatvikBeri · 2016-11-27T22:42:41.246Z · LW(p) · GW(p)
Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he's up for it. His posts here also show a strong understanding of the problems affecting less wrong on multiple fronts.
↑ comment by alyssavance · 2016-11-30T01:12:31.584Z · LW(p) · GW(p)
Seconding Anna and Satvik
↑ comment by sarahconstantin · 2016-11-27T22:50:11.658Z · LW(p) · GW(p)
I also vote for Vaniver as BDFL.
↑ comment by Alexandros · 2016-11-29T10:55:56.959Z · LW(p) · GW(p)
Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of "who", but I wonder how much weight there will be behind this person. Where would the BDFL's authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.
↑ comment by AnnaSalamon · 2016-11-29T18:16:33.261Z · LW(p) · GW(p)
I'm empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift. (E.g. to talk to Nate/Eliezer/MIRI, and Matt Fallshaw, who runs Trike Apps.)
I like the idea of granting domain ownership if we in fact go down the BDFL route.
↑ comment by Alexandros · 2016-11-30T04:22:57.964Z · LW(p) · GW(p)
That's awesome. I'm starting to hope something may come of this effort.
↑ comment by Lumifer · 2016-11-29T18:00:50.957Z · LW(p) · GW(p)
An additional point is that you can only grant the DFL part. The B part cannot be granted, but can only be hoped for.
↑ comment by Alexandros · 2016-12-02T08:54:23.063Z · LW(p) · GW(p)
An additional additional point is that the dictator can indeed quit and is not forced to kill themselves to get out of it. So it's actually not FL. And in fact, it's arguably not even a dictatorship, as it depends on the consent of the governed. Yes, BDFL is intentionally outrageous to make a point. What's yours?
↑ comment by ChristianKl · 2016-12-02T12:39:51.280Z · LW(p) · GW(p)
And in fact, it's arguably not even a dictatorship, as it depends on the consent of the governed.
The person who owns the website doesn't need consent of the people who visit the website to make changes to the website.
↑ comment by Lumifer · 2016-12-02T15:59:02.599Z · LW(p) · GW(p)
intentionally outrageous to make a point
Funny how I didn't notice anyone become outraged.
And, of course, BDFL's powers do NOT depend on the consent of the governed -- it's just that the governed have the ability to exit.
As to the point, it's merely a reminder of the standard trade-off with dictator-like rulers. They are like a little girl:
When she was good
She was very, very good
And when she was bad she was horrid.
↑ comment by namespace (ingres) · 2016-11-28T21:10:59.770Z · LW(p) · GW(p)
I'm concerned that we're only voting for Vaniver because he's well known, but I'll throw in a tentative vote for him.
Who are our other options?
↑ comment by btrettel · 2016-11-30T16:19:20.099Z · LW(p) · GW(p)
I'll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the "villains" of decision making in Decisive, a book on decision making he suggested to me.) Plus, I scanned this thread and I haven't seen Vaniver say he is okay with such a role.
↑ comment by Vaniver · 2016-11-30T17:45:08.772Z · LW(p) · GW(p)
I think Vaniver would agree that considering other candidates too would be a wise choice.
I do agree; one of the reasons why I haven't accepted yet is to give other people time to see this, think about it, and come up with other options.
(I considered setting up a way for people to anonymously suggest others, but ended up thinking that it would be difficult to find a way to make it credibly anonymous if I were the person that set it up, and username2 already exists.)
↑ comment by Viliam · 2016-11-28T21:49:16.269Z · LW(p) · GW(p)
I'm concerned that we're only voting for Vaniver because he's well known
Also because he already is a moderator (one of a few moderators), so he was already trusted with some power, and here we are just saying that it seems okay to give him more powers. And because he has already done some useful things while moderating.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T05:28:55.254Z · LW(p) · GW(p)
Do we know anyone who actually has experience doing product management? (Or has the sort of resume that the best companies like to see when they hire for product management roles. Which is not necessarily what you might expect.)
↑ comment by SatvikBeri · 2016-11-28T05:39:46.071Z · LW(p) · GW(p)
I do. I was a product manager for about a year, then founder for a while, and am now manager for a data science team, where part of my responsibilities are basically product management for the things related to the team.
That said, I don't think I was great at it, and suspect most of the lessons I learned are easily transferred.
Edit: I actually suspect that I've learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T05:51:15.631Z · LW(p) · GW(p)
OK, I vote for Satvik as the person to choose who the BDFL is :D
↑ comment by Gurkenglas · 2016-11-29T20:13:10.527Z · LW(p) · GW(p)
↑ comment by Alexandros · 2016-11-30T04:24:31.717Z · LW(p) · GW(p)
I've done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io) and can offer some help in re-imagining the vision behind lw.
↑ comment by moridinamael · 2016-11-30T15:02:11.728Z · LW(p) · GW(p)
I concur with placing Vaniver in charge. Mainly, we need a leader and a decision maker empowered to execute on suggestions.
↑ comment by ChristianKl · 2016-11-29T18:37:49.704Z · LW(p) · GW(p)
Having a BDFL would be great. Vaniver seems to be a good candidate.
↑ comment by hairyfigment · 2016-12-01T23:47:41.599Z · LW(p) · GW(p)
I have reservations about this, especially the weird 'for life' part.
↑ comment by SatvikBeri · 2016-11-27T17:18:43.105Z · LW(p) · GW(p)
On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:
- In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
- Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
- A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
- Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
- Votes from highly-voted users count for more.
- Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
- A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a problem I or other people were struggling with"
- No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
- Better notifications around new posts, or new comments in a thread. E.g. I usually want to see all replies to a comment I've made, not just the top level
- Built-in argument mapping tools for comments
- Shadowbanning, a la Hacker News
- Initially restricted growth, e.g. by invitation only
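The "votes from highly-voted users count for more" idea above could be sketched roughly as follows. This is a minimal illustration, not a proposal from the thread: the log scaling and the function names are my own assumptions.

```python
import math

def vote_weight(voter_karma: int) -> float:
    """Weight a vote by the voter's own karma (hypothetical log scaling).

    A brand-new account contributes weight 1.0; each 10x increase in
    karma adds one more unit of weight.
    """
    return 1.0 + math.log10(max(voter_karma, 1))

def post_score(votes) -> float:
    """Score a post from a list of (direction, voter_karma) pairs,
    where direction is +1 for an upvote and -1 for a downvote."""
    return sum(direction * vote_weight(karma) for direction, karma in votes)

# A high-karma user's upvote outweighs a new account's downvote:
score = post_score([(+1, 1000), (-1, 1)])  # 4.0 - 1.0 = 3.0
```

The design choice to hedge on here is the scaling function: linear weighting lets established users dominate entirely, while logarithmic weighting (as above) bounds their influence.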
↑ comment by casebash · 2016-11-28T00:12:50.679Z · LW(p) · GW(p)
"Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone"." - this seems complex and better done via a comment
↑ comment by berekuk · 2016-12-01T01:51:18.238Z · LW(p) · GW(p)
For the Russian LessWrong slack chat we agreed on the following emoji semantics:
- :+1: means "I want to see more messages like this"
- :-1: means "I want to see fewer messages like this"
- :plus: means "I agree with a position expressed here"
- :minus: means "I disagree"
- :same: means "it's the same for me" and is used for impressions, subjective experiences and preferences, but without approval connotations
- :delta: means "I have changed my mind/updated"
We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.
It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
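The key property of that vocabulary is that it separates visibility (more/fewer messages like this) from agreement (plus/minus). A tiny sketch of how a tally along those two axes might look; the dictionary and function names here are illustrative assumptions, not the actual Slack setup:

```python
# Map each reaction to a semantic axis and a direction (illustrative).
REACTION_AXES = {
    ":+1:":    ("visibility", +1),
    ":-1:":    ("visibility", -1),
    ":plus:":  ("agreement", +1),
    ":minus:": ("agreement", -1),
}

def tally(reactions):
    """Aggregate a message's reactions into per-axis totals.
    Reactions without a defined axis (e.g. :same:, :delta:) are
    counted as neutral and ignored here."""
    totals = {"visibility": 0, "agreement": 0}
    for r in reactions:
        axis, delta = REACTION_AXES.get(r, (None, 0))
        if axis is not None:
            totals[axis] += delta
    return totals

# "More content like this, but I disagree with it" is now expressible:
t = tally([":+1:", ":+1:", ":minus:"])
```

With a single upvote/downvote, that last combination collapses into one ambiguous number, which is exactly the regression berekuk describes.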
↑ comment by Mati_Roy (MathieuRoy) · 2020-10-04T18:33:10.551Z · LW(p) · GW(p)
Shared here: What reacts do you want to be able to give to posts? (emoticons, cognicons, and more) [LW(p) · GW(p)]
↑ comment by oooo · 2016-12-06T02:24:47.362Z · LW(p) · GW(p)
It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
This Slack-specific emoji capability is akin to Facebook Reactions; namely a wider array of aggregated post/comment actions.
↑ comment by btrettel · 2016-11-30T16:31:24.305Z · LW(p) · GW(p)
Some sort of emoticon could work, like what Facebook does.
Personally, I find the lack of feedback from an upvote or downvote to be discouraging. I understand that many people don't want to take the time to provide a quick comment, but personally I think that's silly, as a 10-second comment could help a lot in many cases. If there is a possibility for a 1-second feedback method that allows a little more information than up or down, I think it's worth trying.
↑ comment by btrettel · 2016-11-30T16:25:48.215Z · LW(p) · GW(p)
Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
This would be a top recommendation of mine as well. There are quite a few prediction tracking websites now: PredictionBook, Metaculus, and Good Judgement Open come to mind immediately, and that's not considering the various prediction markets too.
I've started writing a command line prediction tracker which will integrate with these sites and some others (eventually, at least). PredictionBook and Metaculus both seem to have APIs which would make the integration rather easy, so integration with LessWrong should not be particularly difficult. (The API for Metaculus is not documented, as best I can tell, but by snooping around the code you can figure things out...)
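A minimal sketch of what such an integration might compute once predictions are fetched. The fetch helper and the JSON field names (`confidence`, `outcome`) are hypothetical placeholders, not the actual PredictionBook or Metaculus API shapes; the calibration metric (a Brier score) is the standard way to turn a prediction history into the "track record" SatvikBeri's proposal mentions:

```python
import json
from urllib.request import urlopen

def fetch_predictions(url):
    """Fetch a JSON list of predictions; the URL shape is site-specific
    and not shown here (each tracker has its own endpoints)."""
    with urlopen(url) as resp:
        return json.load(resp)

def brier_score(predictions):
    """Mean squared error of stated confidences against outcomes.

    predictions: list of {"confidence": float in [0, 1],
                          "outcome": 0 or 1} dicts (assumed schema).
    Returns None for an empty history. Lower is better-calibrated;
    always guessing 0.5 scores 0.25.
    """
    if not predictions:
        return None
    return sum((p["confidence"] - p["outcome"]) ** 2
               for p in predictions) / len(predictions)

sample = [{"confidence": 0.8, "outcome": 1},
          {"confidence": 0.3, "outcome": 0}]
score = brier_score(sample)
```

Displaying such a score next to a username would give readers calibration information that raw karma cannot.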
↑ comment by gucciCharles · 2016-12-13T08:57:55.308Z · LW(p) · GW(p)
On that topic, how do you upvote? I've never been able to figure it out. I can't find any upvote button. Does anyone know where the button is?
↑ comment by arundelo · 2016-12-13T18:30:17.050Z · LW(p) · GW(p)
It's a thumbs-up that is in the lower left corner of a comment or post (next to a thumbs-down). It looks like the top of these two thumbs-ups (or the bottom one after you've clicked it):
If you don't see it, it may be that they've turned off voting for new or low-karma accounts.
↑ comment by gucciCharles · 2016-12-17T06:30:28.365Z · LW(p) · GW(p)
Ya, that must be it. I've been on here for like 3 years (not with this account though) but only after the diaspora. Really excited that things are getting posted again. One major issue with such a system is that I now feel pressure to post popular content. A major feature of this community is that nothing is dismissed out of hand. You can propose anything you want so long as it's supported by a sophisticated argument. The problem with only giving voting privileges to >x karma accounts is that people, like myself, will feel a pressure to post things that are generally accepted.
Now to be clear, I'm not opposed to such a filter. I've personally noticed that, for example, slatestarcodex doesn't have the same consistently high quality comments as lesswrong. For example people will have comments like "what's falsification?" etc. So I acknowledge that such a filter might be useful. At the same time, however, I'm pointing out one potential flaw with such a filter: that it lends itself to creating an echo chamber.
↑ comment by ESRogs · 2016-11-28T22:19:44.809Z · LW(p) · GW(p)
Built-in argument mapping tools for comments
Could you say more about what you have in mind here?
↑ comment by Venryx · 2017-08-13T02:08:46.341Z · LW(p) · GW(p)
Maybe something like this? https://debatemap.live (note: I'm the developer of it)
↑ comment by nshepperd · 2016-11-27T19:06:01.168Z · LW(p) · GW(p)
I think you're right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.
I do think it's important to have someone clearly "running the place". A BDFL, if you like.
Please no. The comments on SSC are for me a case study in exactly why we don't want to discuss politics.
Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. "Auto-aggregation" would be bad however.
Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri's suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.
I don't believe that the basilisk is the primary reason for LW's brand rust. As I see it, we squandered our "capital outlay" of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November). I, personally, have almost completely stopped commenting for quite a while now, because doing so is no longer rewarding.
↑ comment by Sniffnoy · 2016-11-30T08:39:31.479Z · LW(p) · GW(p)
doing essentially nothing about a large influx of new users interested only in "debating philosophy" who do not even read the sequences (Eternal November).
This is important. One of the great things about LW is/was the "LW consensus", so that we don't constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the "LW consensus", but then, I don't think anyone entirely did except Eliezer himself. When I say "the basics", I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying "But what if nothing is real?", we don't have to debate them. That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say "read the sequences", but seriously, it saved a lot of trouble.
There were occasional interesting and original objections to the basics. I can't find it now but there was an interesting series of posts responding to this post of mine on Savage's theorem; this response argued for the proposition that no, we shouldn't use probability (something that others had often asserted, but with much less reason). It is indeed possible to come up with intelligent objections to what we consider the basics here. But most of the objections that came up were just unoriginal and uninformed, and could, in fact, correctly be answered with "read the sequences".
↑ comment by TheAncientGeek · 2016-12-04T13:12:34.150Z · LW(p) · GW(p)
That's the sort of thing it's useful to just downvote (or otherwise discourage, if we're making a new system), no matter how nicely it may be said, because no productive discussion can come of it.
When it's useful it's useful; when it's damaging it's damaging. It's damaging when the sequences don't actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It's just too easy to silently downvote, or write "read the sequences". In an alternative universe there is a LW where people don't say RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern-matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult... that's where the damage is coming from.
Unfortunately, although all of that is fixable, it cannot be fixed without "debating philosophy".
ETA
Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That's a big project which will provide focus, and send a costly signal of anti-cultishness, because cults don't revise doctrine.
↑ comment by Alexei · 2016-12-05T23:19:19.454Z · LW(p) · GW(p)
Good point. I actually think this can be fixed with software. StackExchange features are part of the answer.
↑ comment by TheAncientGeek · 2016-12-06T08:54:26.789Z · LW(p) · GW(p)
I'm not sure what you mean. Developing Sequences 0.1 can be done with the help of technology, but it can't be done without community effort, and without a rethink of the status of the sequences.
↑ comment by gwillen · 2016-11-27T22:59:00.713Z · LW(p) · GW(p)
I think the basilisk is at least a very significant contributor to LW's brand rust. In fact, guilt by association with the basilisk via LW is the reason I don't like to tell people I went to a CFAR workshop (because rationality -> "those basilisk people, right?")
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T03:26:11.134Z · LW(p) · GW(p)
Reputations seem to be very fragile on the Internet. I wonder if there's anything we could do about that? The one crazy idea I had was (rot13'd so you'll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.
↑ comment by namespace (ingres) · 2016-11-28T21:22:05.275Z · LW(p) · GW(p)
Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh'yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.
↑ comment by TheAncientGeek · 2016-11-28T14:05:23.674Z · LW(p) · GW(p)
"debating philosophy"
As opposed to what? Memorising the One true Philosophy?
↑ comment by Vaniver · 2016-11-28T17:07:44.760Z · LW(p) · GW(p)
As opposed to what? Memorising the One true Philosophy?
The quotes signify that they're using that specifically as a label; in context, it looks like they're pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There's a sort of philosophy that wants to endlessly hash out the big questions, and there's another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.
↑ comment by TheAncientGeek · 2016-11-28T18:16:14.768Z · LW(p) · GW(p)
How many problems has the second sort solved?
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
↑ comment by Vaniver · 2016-11-28T20:04:10.981Z · LW(p) · GW(p)
How many problems has the second sort solved?
Too many for me to quickly count?
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
Yes. It seems to me that both of those factors drive discussions, and most conversations about philosophical problems can be easily classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural or manufactured.
I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn't always useful to manufacture difficulty as an opportunity to display intelligence.
↑ comment by Kaj_Sotala · 2016-11-29T10:44:47.863Z · LW(p) · GW(p)
Too many for me to quickly count?
Name three, then. :)
↑ comment by Vaniver · 2016-11-29T16:18:16.111Z · LW(p) · GW(p)
What I have in mind there is basically 'approaching philosophy like a scientist', and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more 'philosophical' than not:
How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.
↑ comment by TheAncientGeek · 2016-12-04T13:01:10.423Z · LW(p) · GW(p)
Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question:
How to determine causality from observational data;
What causality is is the correct question.
where the perception that humans have free will comes from;
Whether humans have the power of free will is the correct question.
where human moral intuitions come from.
Whether human moral intuitions are correct is the correct question.
↑ comment by Vaniver · 2016-12-04T21:46:56.319Z · LW(p) · GW(p)
What causality is is the correct question.
Oh, if you count that one as a question, then let's call that one solved too.
Whether humans have the power of free will is the correct question.
Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.
Whether human moral intuitions are correct is the correct question.
Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.
↑ comment by TheAncientGeek · 2016-12-05T19:34:00.162Z · LW(p) · GW(p)
What causality is is the correct question.
Oh, if you count that one as a question, then let's call that one solved too.
Solved where?
Whether humans have the power of free will is the correct question.
Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.
How can philosophers be systematically wrong about the nature of their questions? And what makes you right?
Of course, inasmuch as you agree with Y., you are going to agree that the only question to be answered is where the perception comes from, but this is about truth, not opinion: the important point is that he never demonstrated that.
Whether human moral intuitions are correct is the correct question.
Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.
If moral intuitions come from God, that might underpin correctness, but things are much less straightforward in naturalistic explanations.
Replies from: Vaniver↑ comment by Vaniver · 2016-12-15T01:29:01.181Z · LW(p) · GW(p)
Solved where?
On one level, by the study of dynamical systems and the invention of differential equations.
On a level closer to what you meant when you asked the question, most of the confusing things about 'causality' are actually confusing things about the way our high-level models of the world interact with the world itself.
The problem of free will is a useful example of this. People draw this picture that looks like [universe] -> [me] -> [my future actions], and get confused, because it looks like either determinism (the idea that [universe] -> [my future actions] ) isn't correct or the intuitive sense that I can meaningfully choose my future actions (the idea that [me] -> [my future actions] ) isn't correct.
But the actual picture is something like [universe: [me] -> [my future actions] ]. That is, I am a higher-level concept in the universe, and my future actions are a higher-level concept in the universe, and the relationship between the two of them is also a higher-level concept in the universe. Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn't a real conflict between them. (The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism; if I had perfect self-knowledge, I wouldn't have any uncertainty about my future actions, but I don't have perfect self-knowledge. It also comes from the relative importance of decision-making as a 'natural concept' in the whole 'being a human' business.)
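The "lossy compression" point above can be made concrete with a toy model (my own construction, not anything from the thread; the function names and constants are arbitrary): the low level is fully deterministic, yet an agent whose self-model compresses that level away is genuinely uncertain about its own future actions.

```python
def next_action(full_state: int) -> str:
    # Deterministic low-level rule: the action is fully fixed by the micro-state.
    return "eat cake" if (full_state * 2654435761) % 7 < 3 else "save cake"

def self_model(full_state: int) -> int:
    # Lossy compression: the agent only sees a coarse summary of itself.
    return full_state % 10

# Micro-states 13 and 23 are indistinguishable to the self-model...
same_summary = self_model(13) == self_model(23)         # True
# ...yet lead to different actions, so uncertainty at the high level
# coexists with determinism at the low level.
different_actions = next_action(13) != next_action(23)  # True
```

Nothing indeterministic happens anywhere in this program; the agent's uncertainty lives entirely in the gap between the full state and its compressed self-model.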
And so when philosophers ask questions like "When the cue ball knocks the nine ball into the corner pocket, what are the terms of this causal relation?" (from SEP), it seems to me like what they're mostly doing is getting confused about the various levels of their models, and mistaking properties of their models for properties of the territory.
That is, in the territory, the wavefunction of the universe updates according to dynamical equations, and that's that. It's only by going to higher level models that things like 'cause' and 'effect' start to become meaningful, and different modeling choices lead to different forms of cause and effect.
Now, there's an underlying question of how my map came to believe the statement about the territory that begins the previous paragraph, and that is indeed an interesting question with a long answer. There are also lots of subtle points, such as the fact that we don't really need an idea of counterfactuals to describe the universe and its dynamical equations, but we do need one to describe higher-level models of the universe that involve causality. But as far as I can tell, you don't get the main point right by talking about causal relata, and you don't get much out of talking about the subtle points until you get the main point right.
To elaborate a bit on that, hopefully in a way that makes it somewhat clearer why I find it aggravating or difficult to talk about why my approach to philosophy is better: typically I see a crisp and correct model that, if accepted, obsoletes other claims almost accidentally. If you accept the [universe: [me] -> [my future actions] ] model of free will, for example, then nearly everything written about why determinism is correct / incorrect or free will exists / doesn't exist is just missing the point and is implicitly addressed by getting the point right, and explicitly addressing it looks like repeating the point over and over again.
This is also where the sense that they're wrong about questions is coming from; compare to Babbage being surprised when an MP asked if his calculator would give the right output if given the wrong inputs. If they're asking X, then something else is going wrong upstream, and fixing that seems better than answering that question.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-15T13:39:02.692Z · LW(p) · GW(p)
What causality is is the correct question.
Oh, if you count that one as a question, then let's call that one solved too.
Solved where?
On one level, by the study of dynamical systems and the invention of differential equations.
Nope. On most of the detailed questions a philosopher might want to ask about causality, physics comes down firmly on both sides. Physics is not monolithic.
Does causality imply determinism? (In)determinism is an open question in physics. Note that "differential equations" are used in both classical (deterministic by most accounts) and quantum (indeterministic by most accounts) physics.
Must causes precede effects? Perhaps not, if timeless physics, or the theory of closed timelike curves, is correct.
Is causality fundamental? It is in causal dynamical triangulation, and a few other things; otherwise not.
Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn't a real conflict between them.
Which may be true or false depending on what "meaningful" means. If "meaningful" means choosing between more than one possible future, as required by libertarian free will, then determinism definitely excludes meaningful choice, since it excludes the existence of more than one possible future.
The main problem here is vagueness: you didn't define "free will" or "meaningful". Philosophers have known for a long time that people who think free will is compatible with determinism are defining it one way, and people who think it is not are defining it another way. If you had shown that the libertarian version of free will is compatible with determinism, you would have shown something momentous, but you actually haven't shown anything, because you haven't defined "free will" or "meaningful".
Incidentally, you have also smuggled in the idea that the universe actually is, categorically, deterministic. (Compatibilism is usually phrased hypothetically). As noted, that is actually an open question.
The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism;
Explaining the feeling of having free will is a third definition, something different yet again. You don't see much about it in mainstream philosophical literature because the compatibility between a false impression of X and the non-existence of X is too obvious to be worth pointing out -- not because it is some great insight that philosophers have never had because they are too dumb.
Having a false impression of X is the least meaningful version of X, surely!
That is, in the territory, the wavefunction of the universe updates according to dynamical equations, and that's that. It's only by going to higher level models that things like 'cause' and 'effect' start to become meaningful, and different modeling choices lead to different forms of cause and effect.
So is causality entirely high level or does it have a fundamental basis?
To elaborate a bit on that, hopefully in a way that makes it somewhat clearer why I find it aggravating or difficult to talk about why my approach on philosophy
I find it aggravating to keep pointing out to people that they haven't in any way noticed the real problem. It seems to you that you have solved the problem of free will just because you are using concepts in such a vague way that you can't get a handle on the real problem.
Replies from: Viliam, Vaniver, entirelyuseless↑ comment by Viliam · 2016-12-16T09:31:29.132Z · LW(p) · GW(p)
(In)determinism is an open question in physics. Note that "differential equations" are used in both classical (deterministic by most accounts) and quantum (indeterministic by most accounts) physics.
For the human level, it is irrelevant whether quantum physics is lawfully deterministic or lawfully following a quantum random number generator. It is still atoms bouncing according to equations, except that in one case those equations include a computation of a random number. If every atom is secretly holding a coin that it flips whenever it bounces off another atom, from the human level it makes no difference.
People are often mesmerized by the word "indeterministic", because they interpret it as "that means magic is possible, and my thoughts actually could be changing the physical events directly". But that absolutely doesn't follow. If the atoms flip a coin whenever they bounce off another atom, that is still completely unrelated to the content of my thoughts.
Quantum experiments that show particles following statistical patterns when moving through two slits still don't show any connection between the movement of the particle and human thought. So this is all a huge red herring.
If you don't understand why it is completely irrelevant to debating human "free will" whether the atom flips a truly random coin when bouncing off another atom, or merely follows a computation that doesn't include a random coin, then you are simply confused about the topic.
Maybe this will help:
Imagine that a master has two slaves. The first slave receives a command "today, you will pick cotton the whole day". The second slave receives a command "today in the morning, your foreman will flip a coin -- if it lands head, you will pick cotton the whole day; if it lands tails, you will clean the stables the whole day". Is the second slave any more "free" than the first one? (Just because until the foreman flips the coin he is unable to predict what he will be doing today? How is that relevant to freedom? If the foreman instead of a coin uses a quantum device and sends an electron through two slits, does that make the difference?)
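Viliam's thought experiment can be rendered as a toy program (my own sketch, not anything from the thread; the function names are invented for illustration). What it makes mechanical is that the coin adds unpredictability upstream of the command, but the slave's own preferences never enter the computation either way:

```python
import random

def slave_one_task() -> str:
    # Fixed command: fully predictable.
    return "pick cotton"

def slave_two_task(foreman_coin: random.Random) -> str:
    # The command now depends on a coin flip (fair coin or quantum device,
    # it makes no difference to the structure of the computation).
    return "pick cotton" if foreman_coin.random() < 0.5 else "clean stables"

# Neither function takes the slave's own preferences as an input,
# which is the sense in which the coin adds randomness but not freedom.
```

Swapping the foreman's coin for a quantum device changes only the source of the bit, not where the bit sits in the computation.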
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-16T12:52:13.999Z · LW(p) · GW(p)
People are often mesmerized by the word "indeterministic", because they interpret it as "that means magic is possible, and my thoughts actually could be changing the physical events directly".
Perhaps laypeople are that confused, but what we are talking about is Yudkowsky versus professional philosophy.
Philosophers have come up with a class of theory called "naturalistic libertarian free will", which is based on appealing to physical indeterminism to provide a basis for free will, without appeals to magic. (eg Robert Kane's).
But that absolutely doesn't follow. If the atoms flip a coin whenever they bounce off another atom, that is still completely unrelated to the content of my thoughts.
You speak as though your thoughts are distinct from the physical behaviour of your brain...but you don't actually believe that. Plugging in your actual belief that thoughts are just a high-level description of fine-grained neural processing, the question of free will becomes the following:
"How can a physical information-processing system behave in a way that is, seen from the outside, indeterministic (unpredictable in principle) and also, within reasonable limits, rational, intelligent and agentive?"
(i.e. from the outside we might want to preserve the validity of "X did Y because they thought it was a good idea", but only as a high-level description, and without thoughts appearing in the fundamental ontology.)
That is the problem that naturalistic free will addresses.
If you don't understand why it is completely irrelevant to debating human "free will" whether the atom flips a truly random coin when bouncing off another atom, or merely follows a computation that doesn't include a random coin, then you are simply confused about the topic.
Do the reading I've done before calling me confused. You guys would sound a lot more rational if you could get into the habit of saying "I know of no good argument for Y" instead of "Y is wrong and anyone who believes it is an idiot".
Imagine that a master has two slaves. The first slave receives a command "today, you will pick cotton the whole day". The second slave receives a command "today in the morning, your foreman will flip a coin -- if it lands head, you will pick cotton the whole day; if it lands tails, you will clean the stables the whole day". Is the second slave any more "free" than the first one? (Just because until the foreman flips the coin he is unable to predict what he will be doing today? How is that relevant to freedom? If the foreman instead of a coin uses a quantum device and sends an electron through two slits, does that make the difference?)
The usual fallacy: you are assuming that the coin flip is in the driving seat, but actually no part of the brain has to act on any particular indeterministic impulse. If an algorithm contains indeterministic function calls embedded in deterministic code, you can't strip out the deterministic code and still be able to predict what it does.
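That last sentence can be illustrated with a small sketch (my own construction; the names and thresholds are arbitrary): an indeterministic call embedded in deterministic code, where knowing the coin alone tells you almost nothing about the output.

```python
import random

def decide(beliefs: dict, coin: random.Random) -> str:
    # Deterministic part: filter out weak options and rank the rest.
    options = [o for o in beliefs if beliefs[o] > 0.2]
    options.sort(key=beliefs.get, reverse=True)
    # Indeterministic part: only a near-tie is settled by the coin.
    if len(options) > 1 and abs(beliefs[options[0]] - beliefs[options[1]]) < 0.01:
        return options[coin.randrange(2)]
    return options[0]

# With a clear favourite, the coin never fires; the deterministic code
# alone fixes the answer.
clear_case = decide({"tea": 0.9, "coffee": 0.3}, random.Random(1))  # "tea"
```

Stripping out the deterministic filtering and ranking leaves only a bare coin flip, from which the actual output cannot be predicted at all.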
Replies from: Viliam, entirelyuseless↑ comment by Viliam · 2016-12-16T14:24:10.569Z · LW(p) · GW(p)
You speak as though your thoughts are distinct from the physical behaviour of your brain...but you don't actually believe that.
More like: my thoughts are implemented by the interaction of the atoms in my brain, but there is no meaningful relation between the content of my thoughts, and how the atoms in my brain flipped their coins.
Somewhat related to this part in "The Generalized Anti-Zombie Principle":
[...] is it a reasonable stipulation to say that flipping the switch does not affect you in any [in-principle experimentally detectable] way? All the particles in the switch are interacting with the particles composing your body and brain. There are gravitational effects—tiny, but real and [in-principle experimentally detectable]. The gravitational pull from a one-gram switch ten meters away is around 6 * 10^-16 m/s^2. That's around half a neutron diameter per second per second, far below thermal noise, but way above the Planck level.
My point is that technically there is an interaction between the content of my thoughts and how the individual atoms in my brain flip their coins (because the "content of my thoughts" is implemented by positions and movements of various atoms in my brain), but there is still no meaningful correlation. It's not like thinking "I want to eat the chocolate cake now" systematically shifts the related atoms in my brain to the left side, and thinking "I want to keep the chocolate cake for tomorrow" systematically shifts the related atoms in my brain to the right side.
If the atoms in my brain received different results from flipping their coins, could it change the content of my thoughts? Sure. Some thought impulses carried by those atoms could arrive a few nanoseconds sooner, some of them a few nanoseconds later, some of them could be microscopically stronger or microscopically weaker. According to chaos theory, at some later moment an imaginary butterfly in my mind could flap its wings differently, and that could make the difference between whether my desire to eat the cake wins over the plan to put it in the fridge, if the desires are sufficiently balanced. On the other hand, the greater the imbalance between these two desires (and the shorter the time interval for changes to propagate chaotically through the system), the smaller the chance that the imaginary butterfly changes the outcome.
But my point is, again, that there is no meaningful correlation between the coin flips and the resulting thoughts and actions. Suppose you have two magical buttons: if you press one of them, you can make all my cake-decision-related atoms receive a head on their coins, if you press the other, you can make them all receive tails. You wouldn't even know which one to press. Maybe neither would produce the desired butterfly.
The conclusion is that while technically how the atoms flip their coins has some relation with the content of my thoughts, the relation is meaningless. Expecting it to somehow explain the "free will" means searching for the answer in the wrong place, simply because that's where the magical quantum streetlight is.
"How can a physical information-processing system behave in a way that is, seen from the outside indeterminstic (unpredictable in principle) and also, within reasonable limits, rational, intelligent and agentive.
The aspects that are "unpredictable in principle" are irrelevant to whether it seems rational and agentive.
A stone rolling down the hill is technically speaking "unpredictable in principle", because there is Heisenberg uncertainty about the exact position and momentum of its particles, and yet it seems neither rational nor agentive. If this argument does not give "free will" to stones, it shouldn't be used as an explanation of "free will" in humans, because it is not valid in general.
Replies from: TheAncientGeek, entirelyuseless↑ comment by TheAncientGeek · 2016-12-16T14:42:17.717Z · LW(p) · GW(p)
More like: my thoughts are implemented by the interaction of the atoms in my brain, but there is no meaningful relation between the content of my thoughts, and how the atoms in my brain flipped their coins.
There is a relationship between your brain state and your thoughts, which is that your thoughts are entirely constituted by, and predictable from, your brain state. Moreover, the temporal sequence of your thoughts is constituted by and predictable from the evolution of your brain state, whether it is deterministic or indeterministic.
I see no grounds for saying that your thoughts lack a "meaningful" connection to your brain states in the indeterministic case only... but then I don't know what you mean by "meaningful". Care to taboo it for me?
My point is that technically there is an interaction between the content of my thoughts and how the individual atoms in my brain flip their coins (because the "content of my thoughts" is implemented by positions and movements of various atoms in my brain), but there is still no meaningful correlation. It's not like thinking "I want to eat the chocolate cake now" systematically shifts the related atoms in my brain to the left side, and thinking "I want to keep the chocolate cake for tomorrow" systematically shifts the related atoms in my brain to the right side.
No. It's more like identity. You seem to be saying that your thoughts aren't non-physical things causing physical brain states. That's something. Specifically, it is a refutation of interactionist dualism... but, as such, it doesn't have that much to do with free will, as usually defined. If all libertarian theories were a subset of interactionist theories, you would be on to something, but they are not.
The conclusion is that while technically how the atoms flip their coins has some relation with the content of my thoughts, the relation is meaningless.
Taboo meaningless, please.
Expecting it to somehow explain the "free will" means searching for the answer in the wrong place, simply because that's where the magical quantum streetlight is.
Saying it is the wrong answer because it is the wrong answer is pointless. You need to find out what naturalistic libertarianism actually says, and then refute it.
The aspects that are "unpredictable in principle" are irrelevant to whether it seems rational and agentive.
So much the better for naturalistic libertarianism, then. One of the standard counterarguments to it is that the more free you are, the less rational you would be.
A stone rolling down the hill is technically speaking "unpredictable in principle", because there is Heisenberg uncertainty about the exact position and momentum of its particles, and yet it seems neither rational nor agentive.
Which would refute the claim that indeterminism alone is a sufficient condition for rationality and agency. But that claim is not made by naturalistic libertarianism. Would it kill you to do some homework?
↑ comment by entirelyuseless · 2016-12-16T14:40:37.680Z · LW(p) · GW(p)
If this argument does not give "free will" to stones, it shouldn't be used as an explanation of "free will" in humans, because it is not valid in general.
This is like saying that if physics does not result in consciousness in stones, we shouldn't admit that it results in consciousness in humans.
I have no particular reason to think that we have libertarian free will. But we do make choices, and if those choices are indeterminate, then we have libertarian free will. If those choices are indeterminate, it will in fact be because of the indeterminacy of the underlying matter.
If your argument is correct, something more is needed for libertarian free will besides choices which are indeterminate. What is that extra component that you are positing as necessary for free will?
Replies from: Viliam↑ comment by Viliam · 2016-12-16T15:24:39.934Z · LW(p) · GW(p)
This is like saying that if physics does not result in consciousness in stones, we shouldn't admit that it results in consciousness in humans.
My point exactly. If physics does not result in consciousness in stones, then "physics" is not an explanation of consciousness in humans.
And neither is "quantum physics" an explanation of free will in humans (as long as we use any definition of "free will" which does not also apply to stones).
What is that extra component that you are positing as necessary for free will?
Well, the philosophers are supposed to have some superior insights, so I am waiting for someone to communicate them clearly. Preferably without invoking quantum physics in the explanation.
My guess is that "free will" belongs to the realm of psychology. We can talk about what we mean when we feel that other people (or animals, or hypothetical machines) have "free will", and what we mean when we feel that we have "free will". That's all there is to "free will". Start with the experiences that caused us to create the expression "free will" in the first place, and follow the chain of causality backwards (what in the world caused us to have these experiences? how exactly does that work?). Don't have a bottom line of "X, in principle" first.
So... what would make me feel that someone or something has a free will? I guess "not completely predictable", "not completely random", "seems to follow some goals" and "can somewhat adapt to changes in its environment" are among the key components, but maybe I forgot something just as important.
But whether something seems predictable or unpredictable to me, that is a fact about my ability to predict, not about the observed thing. I mean, if something is "unpredictable in principle", that would of course explain my inability to predict it. But there are also other reasonable explanations for my inability to predict -- some of them so obvious that they are probably low-status to mention -- such as me not having enough information, or not having enough computing power. I don't see the atoms in other people's brains, I couldn't compute their movements fast enough anyway, so I can't predict other people's thoughts or actions precisely enough. Thus, other people are "not completely predictable" to me.
I see no need to posit that this unpredictability exists "in principle", in the territory. That assumption is not necessary for explaining my inability to predict. If there is no reason why something should exist in the territory, we should avoid talking about it like it necessarily exists there. The quantum physics is a red herring here. My inability to predict systems reaches far beyond what the Heisenberg's uncertainty would make me concede. The vast majority of my inability to predict complex systems such as human brains -- and therefore the vast majority of my perception of "free will" -- is completely unrelated to quantum physics. (Saying that the quantum noise is the only thing that prevents me from reading the contents of your brain and simulating them in real time would be completely delusional. Probably no respected philosopher holds this position explicitly, but all that hand-waving about "quantum physics" is pointing suggestively in this direction. I am saying it's a wrong direction.)
And why do I believe in my own "free will"? Similarly, I can't sufficiently observe and predict the workings of my own brain either. (Again, the quantum noise is the least of my problems here.)
Replies from: entirelyuseless, TheAncientGeek, entirelyuseless↑ comment by entirelyuseless · 2016-12-16T15:49:19.774Z · LW(p) · GW(p)
Adding to my previous comment, to explain the point about stones more fully:
I understand libertarian free will to mean, "the ability to make choices, in such a way that those choices are not completely deterministic in advance."
We know from experience that people have the ability to make choices. We do not know from experience if they are deterministic in advance or not. And personally I do not know or care.
Your objection about the second part seems to be, "if the second part of the definition is satisfied, but only by reason of something which also exists in stones, that says nothing special about people."
I agree, it says nothing special about people. That does not prevent the definition from being satisfied. And it is not satisfied by stones, since stones do not have the first part, whether or not they have the second.
↑ comment by TheAncientGeek · 2016-12-16T17:03:01.368Z · LW(p) · GW(p)
My point exactly. If physics does not result in consciousness in stones, then "physics" is not an explanation of consciousness in humans.
Generic physics doesn't even account for toasters. You need to plug in structure.
And neither is "quantum physics" an explanation of free will in humans (as long as we use any definition of "free will" which does not also apply to stones).
Not as an explanation all by itself, but as a potential part of an explanation including other things, such as structure.
My guess is that "free will" belongs to the realm of psychology. We can talk about what we mean when we feel that other people (or animals, or hypothetical machines) have "free will", and what we mean when we feel that we have "free will". That's all there is to "free will". Start with the experiences that caused us to create the expression "free will" in the first place, and follow the chain of causality backwards (what in the world caused us to have these experiences? how exactly does that work?). Don't have a bottom line of "X, in principle" first.
Tracing the feeling back might result in a mechanism that produces a false impression of freedom, or a mechanism that results in freedom. What you are suggesting leaves the question open.
I see no need to posit that this unpredictability exists "in principle", in the territory.
Who do you think is doing that? The claim is hypothetical: that if indeterminism exists in the territory, then it could provide the basis for non-illusory free will. And if we investigate that, we can resolve the question you left open above.
↑ comment by entirelyuseless · 2016-12-16T15:38:22.555Z · LW(p) · GW(p)
This is all fine, for how you understand the idea of free will. And I personally agree that it does not matter whether the world is unpredictable in principle or not. I am just saying that people who talk about libertarian free will, define it as being able to make choices, without those choices being deterministic. And that definition would be satisfied in a situation where people make choices, as they actually do, and their choices are not deterministic because of quantum mechanics (which may or may not be the case -- as I said, I do not care.) And notice that this definition of free will would not be satisfied by stones, even if they are not deterministic, because they do not have the choice part.
In the previous comment, you seemed to be denying that this would satisfy the definition, which would mean that you would have to define libertarian free will in an idiosyncratic sense.
↑ comment by entirelyuseless · 2016-12-16T14:17:48.904Z · LW(p) · GW(p)
Yes. Viliam is assuming that if your actions correspond to a non-deterministic physics, it is "randomness" rather than you that is responsible for your actions. But what would the world look like if you were responsible for your actions? Just because they are indeterminate (on this view) does not mean that there cannot be statistics about them. If you ask someone whether he wants chocolate or vanilla ice cream enough times, you will be able to say what percentage of the time he wants vanilla.
Which is just the way it is if the world results from non-deterministic physics as well. In other words, the world looks exactly the same. That is because it is the same thing. So there is no reason for Viliam's conclusion that it is not really you doing it; unless you were already planning to draw that conclusion no matter how the facts turned out.
↑ comment by Vaniver · 2016-12-15T18:59:04.287Z · LW(p) · GW(p)
I find it aggravating to keep pointing out to people that they haven't in any way noticed the real problem. It seems to you that you have solved the problem of free will just because you are using concepts in such a vague way that you can't get a handle on the real problem.
What process do you use to determine which problem is more 'real'? That seems like our core disagreement, and we can probably discuss that more fruitfully.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-15T20:09:09.945Z · LW(p) · GW(p)
The real problem is the problem as discussed in the literature.
Replies from: Vaniver↑ comment by Vaniver · 2016-12-15T21:09:43.577Z · LW(p) · GW(p)
So, implicitly, "the more professional philosophers care about a problem, the more real it is"?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-15T21:36:41.361Z · LW(p) · GW(p)
The more you diverge from discussing the problem in the literature, the less you are really solving the age-old problem of X, Y or Z, as opposed to a substitute of your own invention.
Of course there is also a sense in which some age-old problem could be a pseudo problem -- but the above reasoning still applies. To really show that a problem is a pseudo problem, you need to show that about the problem as stated and not, again, your own proxy.
Replies from: Vaniver, Lumifer↑ comment by Vaniver · 2016-12-16T00:56:01.038Z · LW(p) · GW(p)
To really show that a problem is a pseudo problem, you need to show that about the problem as stated and not, again, your own proxy.
I see, but it seems to me that people are interested in age old problems for three main reasons: 1) they have some conflicting beliefs, concepts, or intuitions, 2) they want to accomplish some goal that this problem is a part of, or 3) they want to contribute to the age old tradition of wrestling with problems.
My main claim is that I don't care much about the third reason, but do care about the first two. And so if we have an answer for where an intuition comes from, this can often satisfy the first reason. If we have the ability to code up something that works, this can satisfy the second reason.
To give perhaps a cleaner example, consider Epistemology and the Psychology of Human Judgment, in which a philosopher and a psychologist say, basically, "for some weird reason epistemology as a field of philosophy is mostly ignoring modern developments in psychology, and so is focusing its attention on the definition of 'justified' and 'true' instead of trying to actually improve human decision-making or knowledge acquisition. This is what it would look like to focus on the latter."
↑ comment by Lumifer · 2016-12-15T22:05:13.527Z · LW(p) · GW(p)
but the above reasoning still applies
No, it does not. If you do not care about that age-old problem, you don't have an obligation to show anything about it. You can just ignore the pseudo problem and deal with the actual problem you're interested in.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-15T22:13:48.405Z · LW(p) · GW(p)
All this is predicated on someone having claimed to have solved an existing problem. Read back.
↑ comment by entirelyuseless · 2016-12-15T14:47:18.534Z · LW(p) · GW(p)
Vaniver was saying that causality is entirely high level.
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
The true meaning of cause is just "what has something else coming from it, namely when it can help to explain the thing that comes from it." This cannot be reduced to something else, because the thing it was supposedly reduced to would be what causality is from, and would help to explain it, leading to a contradiction.
Replies from: Vaniver, TheAncientGeek↑ comment by Vaniver · 2016-12-15T18:27:28.868Z · LW(p) · GW(p)
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
Disagreed, because this looks like a type error to me. Molecular chemistry describes the interactions of atoms, but the interactions of atoms are not themselves made of atoms. (That is, a covalent bond is a different kind of thing than an atom is.)
Causality is what it looks like when you consider running a dynamical system forward from various starting points, and noting how the future behavior of the system is different from different points. This is deeply similar to the concept of 'running a dynamical system' in the first place, and so you might not want to draw a distinction between the two of them.
My point is that our human view of causality typically involves human-sized objects in it, whereas the update rules of the universe operate on a level much smaller than human-sized, and so the connection between the two is mostly opaque to us.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-12-16T14:49:59.693Z · LW(p) · GW(p)
I'm not sure I understand what you are saying, and I am very sure that you either did not understand what I was saying, or else you misinterpreted it.
I was using "cause" in a very general sense, where it is almost, but not quite, equivalent to anything that can be helpful in explaining something. The one extra element that is needed is that, in some way, the effect comes "from" the cause. In the situation you are calling causality, it is true that you can say "the future behavior comes from the present situation and is somehow explained by it," so there is a kind of causality there. But that is only one kind of causality, and there are plenty of other kinds. For example "is made out of" is a way of being an effect: if something is made out of something else, the thing that is made is "from" the stuff it is made out of, and the stuff helps to explain the existence of the thing.
My point is that if you use this general sense of cause, which I do because I consider it the most useful way to use the word, then you cannot completely reduce causality to something else, but it is in some respect irreducible. This is because "reducing" a thing is finding a kind of cause.
Replies from: Vaniver↑ comment by Vaniver · 2016-12-16T18:15:46.015Z · LW(p) · GW(p)
It looks to me like you're saying something along the lines of 'wait, reverse reductionism is a core part of causation because the properties of the higher level model are caused by the properties of the lower level model.' I think it makes sense to differentiate between reductionism (and doing it in reverse) and temporal causation, though they are linked.
I agree with the point that if someone is trying to figure out the word "because" you haven't fully explained it until you've unpacked each of its meanings into something crisp, and that saying "because means temporal causation" is a mistake because it obscures those other meanings. But I also think it's a mistake to not carve out temporal causation and discuss that independent of the other sorts of causation.
↑ comment by TheAncientGeek · 2016-12-15T14:55:41.758Z · LW(p) · GW(p)
Vaniver was saying that causality is entirely high level.
Maybe. But Yudkowsky sometimes writes as though it is fundamental.
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
It would mean causality is constituted by the low level. Nowadays, causation means efficient causation, not material causation.
This cannot be reduced to something else, because the thing it was supposedly reduced to would be what causality is from, and would help to explain it, leading to a contradiction.
As before... efficient causation is narrower than "anything that can explain anything".
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-12-15T16:00:36.265Z · LW(p) · GW(p)
I agree, it would not be a contradiction to think that you could explain efficient causality using material causality (although you still might be wrong.) But you could not explain material causality in the same way.
↑ comment by MugaSofer · 2016-11-29T12:02:42.304Z · LW(p) · GW(p)
Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.
Replies from: g_pepper, TheAncientGeek↑ comment by TheAncientGeek · 2016-11-29T13:59:44.553Z · LW(p) · GW(p)
Is that a joke?
↑ comment by TheAncientGeek · 2016-11-29T15:16:48.607Z · LW(p) · GW(p)
Too many for me to quickly count?
The last time I counted I came up with two and a half.
↑ comment by eagain · 2017-01-23T20:25:07.132Z · LW(p) · GW(p)
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
I've considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now can be solved in such a way, and most that can't be solved in such a way right now can't be solved at all right now. Adding more "hashing out of big questions" doesn't seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2017-01-23T22:22:27.499Z · LW(p) · GW(p)
Can you tell which problems can never be solved?
Replies from: eagain↑ comment by eagain · 2017-02-02T05:13:16.359Z · LW(p) · GW(p)
Only an ill-posed problem can never be solved, in principle.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2017-02-03T13:40:53.419Z · LW(p) · GW(p)
Is there a clear, algorithmic way of determining which problems are ill posed?
Replies from: Cloakless↑ comment by Kaj_Sotala · 2016-11-28T18:11:28.772Z · LW(p) · GW(p)
BDFL
For the benefit of anyone else who'd need to Google: Benevolent Dictator For Life
↑ comment by rayalez · 2016-11-27T22:08:19.624Z · LW(p) · GW(p)
I am working on a project with this purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
It is based on the open source platform that I'm building:
https://github.com/raymestalez/nexus
This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports twitter-like discussions, reddit-like communities, and medium-like long-form articles.
This platform is in active development, and I'm very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet - I will be happy to add it. Let me know what you think!
↑ comment by Error · 2016-11-27T16:20:37.231Z · LW(p) · GW(p)
Strong writers enjoy their independence.
This is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though -- if more than a few can be convinced to do it.
Replies from: sdr, atucker, ciphergoth↑ comment by sdr · 2016-11-28T06:25:28.878Z · LW(p) · GW(p)
Speaking as a writer for different communities, there are 2 problems with this:
Duplicate content: unless an explicit canonical URL is set via headers, Google is ambiguous about which version should rank for keywords. This hits small and upcoming authors like a ton of bricks: by default, the LW version is going to get ranked (on the basis of authority), their own content will be marked as both duplicate and spam, and their domain deranked as a result.
"An audience of your own": if a reasonable reader can reasonably assume, that "all good content will also be cross-posted to LW anyways", that strongly eliminates the reason why one should have the small blogger in their RSS reader / checking once a day in the first place.
The HN "link aggregator" model works, because by directly linking to a thing, you will bump their ranking; if it ranks up to the main page, it drives an audience there, who can be captured (via RSS, or newsletters); and therefore have limited downside of participation.
↑ comment by atucker · 2016-11-27T23:50:03.916Z · LW(p) · GW(p)
"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.
Replies from: Jacobian, sarahconstantin↑ comment by Jacob Falkovich (Jacobian) · 2016-11-29T18:41:14.022Z · LW(p) · GW(p)
My willingness to cross-post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used, and the overall quality. Posting on LW was intimidating, but I didn't necessarily consider that a bad thing, because it meant that almost every post was gold.
In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon, I use a lot of pictures and a lighter tone, and I don't have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn't a good fit for G Wiley's budding rationalist community blog, let alone old LW.
I guess what I'm saying is that there's a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) -> cross posting -> links with centralized discussion -> blogroll (loosest). Any point on the scale could work, but it's important to recognize the tradeoff and also to make the standards extremely clear so that each writer can decide whether they're in or out.
↑ comment by sarahconstantin · 2016-11-28T15:10:48.400Z · LW(p) · GW(p)
I have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we're going to have to figure out how to address.
↑ comment by Paul Crowley (ciphergoth) · 2016-11-27T18:44:50.911Z · LW(p) · GW(p)
The big downside of this is that it divides the discussion.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2016-11-27T21:42:24.828Z · LW(p) · GW(p)
But what's so bad about divided discussion? In some ways it helps by increasing the surface area to which the relevant ideas are exposed.
↑ comment by SatvikBeri · 2016-11-27T16:59:00.436Z · LW(p) · GW(p)
On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.
Replies from: Vaniver↑ comment by Vaniver · 2016-11-27T17:17:21.514Z · LW(p) · GW(p)
I think a good estimate is close to $10k. Expect to pay about $100/hr for developer time, and something like 100 hours of work to get from where we are to where we want to be doesn't seem like a crazy estimate. Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.
If you can find volunteers who want to do this, we would love code contributions, and you can point them towards here to see what needs to be worked on.
Replies from: Viliam, WalterL, alyssavance, skeptical_lurker↑ comment by Viliam · 2016-11-27T21:50:29.478Z · LW(p) · GW(p)
I think you are underestimating this, and a better estimate is "$100k or more". With an emphasis on the "or more" part.
Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.
Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can't find anyone able and/or willing to accept the deal.
The problem with volunteers is that the LW codebase requires too much highly specialized knowledge: Python and Ruby just to get a chance, and then studying code that was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person with the necessary skills, $10k is about a monthly salary (taxes included), and one month feels like too short a time to understand the mess of the Reddit code and implement everything that needs to be done. And the next time you need an upgrade, if the same person isn't available, you need another person to spend the same time understanding the Reddit code.
I believe in long term it would be better to rewrite the code from scratch, but that's definitely going to take more than one month.
Replies from: 9eB1, Vaniver↑ comment by 9eB1 · 2016-11-28T18:37:48.253Z · LW(p) · GW(p)
At one point I was planning on making a contribution. It was difficult just getting the code set up, and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in development mode. For example, on Mac you have to run it from within a disk image, the VM didn't work, and setting up new user accounts for testing purposes was a huge pain.
I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn't there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren't quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.
Replies from: Vaniver, Viliam↑ comment by Vaniver · 2016-11-28T19:51:43.451Z · LW(p) · GW(p)
The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
Thanks for trying to work on that one!
setting up new user accounts for testing purposes was a huge pain.
This seems like the sort of thing we should be able to include with whatever creates the admin account that's already there. I was watching someone run a test yesterday; while I showed them how to award accounts karma, I didn't know of a way to force the karma cache to invalidate, so they had to wait ~15 minutes before their new test account could actually make a post.
These sorts of usability improvements--a pull request that just adds comments for a section of code you spent a few hours understanding, an improvement to the setup script that makes the dev environment better--are sorely needed and greatly appreciated. In particular, don't feel at all bad about changing the goal from "I'm going to close out issue X" to "I'm going to make it not as painful to have test accounts," since those sorts of improvements will lead to probably more than one issue getting closed out.
↑ comment by Viliam · 2016-11-30T10:01:01.202Z · LW(p) · GW(p)
Maybe it would be easier to make contributions that rely on the code as little as possible -- scripts running on separate pages, that would (1) verify that the person running them is a moderator, and (2) connect to the LW database (these two parts would be common to all such scripts, so they could live as two functions in a shared library) -- and then have a separate simple user interface for doing whatever needs to be done.
For example, make a script called "expose_downvotes" that displays a text field where the moderator can copy the comment permalink, and after clicking "OK" a list of usernames who downvoted the specific comment is displayed (preferably with hyperlinks to their user profiles). For the user's convenience, the comment id is automatically extracted from the permalink.
Then the moderator would simply open this script in a second browser tab, copy link location from the "Permalink" icon at the bottom of a comment, click "OK", done.
Compared with a solution integrated into the LW web page, this solution is only slightly more complicated for the moderator, but probably much simpler for the developer to write. Most likely, the moderator will have the page bookmarked, so it's just "open bookmark in a new tab, switch to old tab, right-click on the comment icon, copy URL, switch to new tab, click on the text field, Ctrl+V, click OK". Still a hundred times simpler (and a thousand times faster!) than calling tech support, even assuming their full cooperation.
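The id-extraction step Viliam describes could be a few lines. A sketch, under the assumption that the comment id is the last path segment of the permalink (the real permalink format should be checked against the live site, and `expose_downvotes` itself is only a proposal):

```python
# Sketch: pull the comment id out of a pasted permalink, as the proposed
# "expose_downvotes" helper page would. Assumes the id is the last non-empty
# path segment, e.g. .../lw/abc/post_title/d4f2 -> "d4f2"; the actual LW
# URL scheme may differ and should be verified.
from urllib.parse import urlparse

def comment_id_from_permalink(permalink: str) -> str:
    path = urlparse(permalink.strip()).path
    segments = [s for s in path.split("/") if s]
    if not segments:
        raise ValueError("no comment id found in permalink: %r" % permalink)
    return segments[-1]
```

Doing the parsing server-side this way keeps the moderator's workflow to a single paste, as described above.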
Each such script could be on a separate page. And they could all be linked together by having another function in the shared library which adds a header containing hyperlinks to all such scripts.
↑ comment by Vaniver · 2016-11-27T22:28:58.636Z · LW(p) · GW(p)
Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem.
I had difficulty finding people even before mentioning a price; I'm pretty sure the defect was in where and how I was looking for people.
I also agree that it makes more sense to have a small number of programmers make extensive changes, rather than having a large number of people become familiar with how to deal with LW's code.
I believe in long term it would be better to rewrite the code from scratch, but that's definitely going to take more than one month.
I will point out there's no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links. The main reason we haven't been approaching it that way is that it's harder to make small moves and test their results; either you switch over, or you don't, and no potential replacement was obviously superior.
Replies from: ananda, Viliam↑ comment by ananda · 2016-11-29T17:31:53.075Z · LW(p) · GW(p)
I'm new and came here from Sarah Constantin's blog. I'd like to build a new infrastructure for LW, from scratch. I'm in a somewhat unique position to do so because I'm (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, granting the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time and it will be robust enough to be used in an alpha or beta state, and attract devs to contribute to further development.
Here is how I envision the basic architecture of this project:
- A server that manages all business logic (i.e. posting, moderation, analytics) and interfaces with the frontend (2) and database (3).
- A standalone, modular frontend (probably built with React, maybe reusing components provided by Telescope) that is modern, beautiful, and easily extensible/composable from a dev perspective.
- A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc). Security is the first concern; all others are predicated on it.
I will kickstart all three parts and bring them to a good place. After this threshold, I will need help with the frontend - this is not my forte and will be better executed by someone passionate about it.
I'm not asking for any compensation for my work. My incentive is to create a project that is actually immediately useful to someone; open-sourcing it and extending that usability is also nice. I also sympathize with the LW community and the goals laid out in this post.
I considered another approach: reverse-engineer HackerNews and use that as the foundation to be adapted to LW's unique needs. If this approach would be of greater utility to LW, I'd be happy to take it.
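As a rough sketch of the separation proposed above, the business-logic layer could be written against an abstract store so the eventual database choice stays swappable; every name here is hypothetical, not an actual project API:

```python
# Sketch of the proposed split: business logic talks to an abstract store,
# so the concrete database (SQL, NoSQL, in-memory) can be swapped out later.
# All class and function names are illustrative.
from abc import ABC, abstractmethod

class Store(ABC):
    @abstractmethod
    def save_post(self, post_id: str, data: dict) -> None: ...
    @abstractmethod
    def load_post(self, post_id: str) -> dict: ...

class MemoryStore(Store):
    """In-memory backend, useful for tests before a real database exists."""
    def __init__(self):
        self._posts = {}
    def save_post(self, post_id, data):
        self._posts[post_id] = data
    def load_post(self, post_id):
        return self._posts[post_id]

def submit_post(store: Store, post_id: str, author: str, body: str) -> dict:
    """Business logic: validate, then persist; knows nothing about storage."""
    if not body.strip():
        raise ValueError("empty post body")
    record = {"author": author, "body": body, "score": 0}
    store.save_post(post_id, record)
    return record
```

The frontend would then only ever talk to functions like `submit_post`, never to the database directly, which is what makes the three parts independently replaceable.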
Replies from: Vaniver, Gram_Stone, ChristianKl, Drea, arunkhanna00↑ comment by Gram_Stone · 2016-11-29T17:41:21.630Z · LW(p) · GW(p)
If you don't get a proper response, it may be worthwhile to make this into its own post, if you have the karma. (Open thread is another option.)
↑ comment by ChristianKl · 2016-12-11T21:24:37.698Z · LW(p) · GW(p)
I considered another approach: reverse-engineer HackerNews and use that as the foundation to be adapted to LW's unique needs
Currently HackerNews and LW both run on the Reddit code base. One of the problems is that Reddit didn't design their software to be easily adapted to new projects. That means it's not easy to update the code with new versions.
A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc).
A lot of the data will be votes.
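A toy illustration of that point: each vote is one small record per (user, item) pair, so vote records quickly outnumber posts and comments, and score queries aggregate over them (hence the heavy precomputing and caching mentioned elsewhere in the thread). All names here are hypothetical:

```python
# Toy illustration of why votes dominate the data: one small, write-heavy
# record per (user, item) pair, with scores computed by aggregation.
# Field names are hypothetical, not the actual LW schema.
from collections import namedtuple

Vote = namedtuple("Vote", ["user_id", "item_id", "direction"])  # direction: +1 or -1

def tally(votes):
    """Net score per item, recomputed from raw vote records."""
    scores = {}
    for v in votes:
        scores[v.item_id] = scores.get(v.item_id, 0) + v.direction
    return scores

votes = [Vote("u1", "c9", +1), Vote("u2", "c9", +1), Vote("u3", "c9", -1)]
```

Any schema choice has to make this aggregation cheap, or cache its results, which is one reason the storage layer matters more than the post/comment tables alone would suggest.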
Replies from: whpearson↑ comment by whpearson · 2016-12-11T22:58:57.010Z · LW(p) · GW(p)
Nitpick: Hacker News isn't Reddit-derived. It is written in Arc, and not open source.
↑ comment by Drea · 2016-12-11T20:12:35.108Z · LW(p) · GW(p)
I see various people volunteering for different roles. I'd be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might be nice to have someone in a light-weight graphic design role as well, and that can be freelance.
Like ananda, I'm happy to do this as an open-contribution project rather than paid. I'll reach out to Vaniver via email.
↑ comment by arunkhanna00 · 2016-12-07T05:34:20.351Z · LW(p) · GW(p)
I have some front-end experience and would love to help you (I'm a student). Email me at my username @gmail.com
↑ comment by Viliam · 2016-11-27T22:57:09.116Z · LW(p) · GW(p)
Well, if someone were willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not -- there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let's say $100k, and that includes a reserve for occasionally paying someone else to help me with some specific thing, if needed.
I am not saying that paying me for this job is a rational thing to do; let's just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.)
Maybe it was a mistake that I didn't mention this option sooner... but hearing all the talk about "some volunteers doing it for free in their free time" made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can't change the past.)
I certainly couldn't do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patches.
And there is also a risk that I am overestimating my abilities here. I never did a project of this scale alone. I mean, I feel quite confident that I could do it in a given time frame, but maybe there would be problems with performance, or some kind of black swan.
I will point out there's no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links.
I would probably try to solve it as a separate step. First, make the new website, as good as possible. Second, import the old content, and redirect the links. Only worry about the import when the new site works as expected.
Or maybe don't even import the old stuff, and keep the old website frozen. Just static pages, without the ability to edit anything. All we lose is the ability to vote or comment on years-old content. At the moment of transition, officially open the new website, block the ability to post new articles on the old one, but still allow people to post comments on the old one for the following three months. At the end, all old links will work, read-only.
↑ comment by WalterL · 2016-12-01T20:39:45.599Z · LW(p) · GW(p)
Not trolling here, genuine question.
How is the LW codebase so awful? What makes it so much more complicated than just a typical blog, + karma? I feel like I must be missing something.
From a UI perspective it is text boxes and buttons. The data structures you need to track don't SEEM too complicated (users have names, karma totals, passwords, and roles). What am I not taking into account?
Replies from: Vaniver, Lumifer↑ comment by Vaniver · 2016-12-01T21:20:27.964Z · LW(p) · GW(p)
How is the LW codebase so awful?
Age, mostly. My understanding is Reddit was one of the first of its kind, and so when building it they didn't have a good sense of what they were actually making. One of the benefits of switching to something new is not just that it's using technology people are more likely to be using in their day jobs, but also that the data arrangement is more aligned with how the data is actually used and thought about.
Replies from: jackk↑ comment by alyssavance · 2016-11-27T17:37:29.503Z · LW(p) · GW(p)
If the money is there, why not just pay a freelancer via Gigster or Toptal?
Replies from: Vaniver↑ comment by Vaniver · 2016-11-27T17:55:46.804Z · LW(p) · GW(p)
Historically, the answers have been things like a desire to keep it in the community (given the number of software devs floating around), the hope that volunteer effort would come through, and me not having much experience with sites like those and thus relatively low affordance for that option. But I think if we pay for another major wave of changes, we'll hire a freelancer through one of those sites.
(Right now we're discussing how much we're willing to pay for various changes that could be made, and once I have that list I think it'll be easy to contact freelancers, see if they're cheap enough, and then get done the things that make sense to do.)
[edit] I missed one--until I started doing some coordination work, there wasn't shared knowledge of what sort of changes should actually be bought. The people who felt like they had the authority to design changes didn't feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn't feel like they had the authority to design changes, and both of them had more important things to be working on.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T03:23:15.377Z · LW(p) · GW(p)
The people who felt like they had the authority to design changes didn't feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn't feel like they had the authority to design changes, and both of them had more important things to be working on.
This sort of leadership vacuum seems to be a common problem in the LW community. It feels to me like people could err more on the side of assuming they have the authority to do things.
Replies from: SatvikBeri↑ comment by SatvikBeri · 2016-11-28T03:50:08.734Z · LW(p) · GW(p)
Yeah, a good default is the UNODIR pattern ("I will do X at Y time unless otherwise directed")
↑ comment by skeptical_lurker · 2016-12-01T00:17:39.774Z · LW(p) · GW(p)
I can code in python, but I have no web dev experience - I could work out what algorithms are needed, but I'm not sure I would know how to implement them, at least not off the bat.
Still, I'd be willing to work on it for less than $100 per hour.
Replies from: Vaniver↑ comment by Vaniver · 2016-12-01T18:39:53.551Z · LW(p) · GW(p)
Thanks for the offer!
Still, I'd be willing to work on it for less than $100 per hour.
If you're working for $x an hour, do you think you would take fewer than 100/x times as long as someone who is experienced at web dev?
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2016-12-01T19:27:49.760Z · LW(p) · GW(p)
If you're working for $x an hour, do you think you would take fewer than 100/x times as long as someone who is experienced at web dev?
Fair pay would be $x an hour given that it takes me 100/x times as long as someone who is experienced at web dev. However in reality estimates of how long the work will take seem to vary wildly - for instance you and Viliam disagree by an order of magnitude.
The more efficient system might be for me to work with someone who does have some web dev experience, if there is someone else working on this.
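For what it's worth, the break-even condition in Vaniver's question can be written out explicitly: a developer at $x/hr is cheaper overall only while their slowdown factor relative to the $100/hr expert stays under 100/x. A sketch with illustrative numbers:

```python
# Sketch of the break-even arithmetic: hiring at `rate` $/hr beats hiring
# the expert at 100 $/hr only while rate * slowdown < 100, i.e. while the
# slowdown factor stays under 100/rate. All numbers are illustrative.
def cheaper_than_expert(rate: float, slowdown: float, expert_rate: float = 100.0) -> bool:
    """True if a dev who is `slowdown` times slower still costs less overall."""
    return rate * slowdown < expert_rate

ok = cheaper_than_expert(rate=40.0, slowdown=2.0)  # 40 * 2.0 = 80 < 100, so True
```

So at $40/hr the inexperienced dev only needs to stay under 2.5x the expert's time to be the cheaper option, which is why the wildly varying time estimates in this thread matter so much.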
↑ comment by eagain · 2017-01-23T19:57:47.203Z · LW(p) · GW(p)
Hi. I used to have an LW account and post sometimes, and when the site kinda died down I deleted the account. I'm posting back now.
We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW.
Please do not start discussing politics without enforcing a real-names policy and taking strong measures against groupthink, bullying, and most especially brigading from outside. The basic problem with discussing politics on the internet is that the normal link between a single human being and a single political voice is broken. You end up with a homogeneous "consensus" in the "community" that reflects whoever is willing to spend more effort on spam and disinformation. You wanted something like a particularly high-minded Parliament, you got 4chan.
I have strong opinions about politics and also desire to discuss the topic, which is indeed boiling to a crisis point, in a more rationalist way. However, I also moderate several subreddits, and whenever politics intersects with one of our subs, we have to start banning people every few hours to keep from being brigaded to death.
I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses: allow talking about global warming in the context of civilization-scale risks, allow talking about science funding and state appropriation of scientific output in the context of AI risk and AI progress, allow talking about fiscal multipliers to state spending in the context of effective altruism.
Don't go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet nowadays that talks politics, doesn't moderate a tight ship, and allows open registration.
And in general, the watchword for a rationality community ought to be that most of the time, contrarians are wrong, and in fact boring as well. Rationality should be distinguished from intellectual contrarianism -- this is a mistake we made last time, and suffered for.
Replies from: Lumifer, gjm, Elo↑ comment by Lumifer · 2017-01-23T20:42:54.984Z · LW(p) · GW(p)
enforcing a real-names policy
Ha-ha
I have strong opinions about politics and also desire to discuss the topic
You seem to have a desire to discuss the topic only in a tightly controlled environment where you get to establish the framework and set the rules.
Replies from: gjm↑ comment by gjm · 2017-01-24T02:52:13.732Z · LW(p) · GW(p)
I didn't see anything in eagain's comment that demanded that he[1] get to establish the framework and set the rules.
(It is easy, and cheap, to portray any suggestion that there should be rules as an attempt to get to set them. Human nature being what it is, this will at least sometimes be at least partly right. I don't see that that means that having rules isn't sometimes a damn good idea.)
[1] Apologies if I guessed wrong.
Replies from: Lumifer↑ comment by Lumifer · 2017-01-24T03:13:49.206Z · LW(p) · GW(p)
Eagain knows which ideas are "deeply bad" and he's quite certain they need to be excluded from the conversation.
Replies from: eagain, gjm↑ comment by eagain · 2017-02-02T05:14:30.398Z · LW(p) · GW(p)
I didn't say excluded from the conversation. I said exposed to the bright, glaring sunlight of factual rigor.
Replies from: Lumifer, TheAncientGeek↑ comment by Lumifer · 2017-02-02T16:29:57.268Z · LW(p) · GW(p)
I said exposed to the bright, glaring sunlight of factual rigor.
These words do not appear anywhere in your comment. Instead you said:
I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses ... Don't go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet
"Don't go beyond that" seems to mean not allowing those politics and the bad-idea raiders. "Not allowing" does not mean "expose to sunlight", it means "exclude".
Replies from: snewmark↑ comment by snewmark · 2017-02-02T18:00:12.498Z · LW(p) · GW(p)
I'm not sure if this is what eagain was alluding to, but it does seem advisable: do not permit (continuous) debates of recognizably bad ideas.
I admit this is difficult to enforce, but stating the rule would, in my opinion, set the tone for the intended purpose of this website.
Replies from: Lumifer↑ comment by TheAncientGeek · 2017-02-02T13:18:32.043Z · LW(p) · GW(p)
Which isn't being done because of what...? Widespread stupidity?
↑ comment by gjm · 2017-01-24T03:16:19.269Z · LW(p) · GW(p)
Perhaps he does. It wouldn't exactly be an uncommon trait. However, there is a gap between thinking that some particular ideas are very bad and we'd be better off without them, and insisting on setting the rules of debate oneself, and it is not honest to claim that someone is doing the latter merely because you are sure they must be doing the former.
Replies from: Lumifer↑ comment by Lumifer · 2017-01-24T03:27:12.233Z · LW(p) · GW(p)
This thread is about setting the rules for discussions, isn't it? Eagain is talking in the context of specifying in which framework discussing politics can be made to work on LW.
Replies from: gjm↑ comment by gjm · 2017-01-24T03:41:26.208Z · LW(p) · GW(p)
Yup. That is (I repeat) not the same thing as insisting that he get to establish the framework and set the rules.
(It seems to me that with at least equal justice someone could complain that you are determined to establish the framework and set the rules; it's just that you prefer no framework and no rules. I don't know whether that actually is your preference, but it seems to me that there's as much evidence for it as there is for some of what you are saying about eagain's mental state.)
Replies from: Lumifer↑ comment by Lumifer · 2017-01-24T05:21:33.330Z · LW(p) · GW(p)
And yet I'm not telling LW how to set up discussions...
Replies from: gjm↑ comment by gjm · 2017-01-24T11:47:56.012Z · LW(p) · GW(p)
Aren't you? I mean, you're not making concrete proposals yourself, of course; I don't think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people's. But looking at the things you object to and the things you don't, it seems to me that you're taking a position on how LW's discussions should be just as much as eagain is; you're just expressing it by objecting to things that diverge from it, rather than by stating it explicitly.
Replies from: entirelyuseless, eagain, Lumifer↑ comment by entirelyuseless · 2017-01-24T14:38:45.207Z · LW(p) · GW(p)
Lumifer seems to object to things because he finds it enjoyable to object to things, and this is a good explanation for why he objects to things rather than making his own proposals. But this means that he is not necessarily taking a position on how discussion should be, since he would be likely to object to both a proposal and its opposite, just because it would still be fun to object.
Replies from: gjm↑ comment by eagain · 2017-02-04T16:06:47.237Z · LW(p) · GW(p)
I don't think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people's.
Hmm. That sounds like a nice rule: anyone who spends all their posting efforts on objecting to other people's ideas without putting forth anything constructive of their own shall be banned, or at least downvoted into oblivion.
Replies from: gjm↑ comment by gjm · 2017-01-24T02:49:25.735Z · LW(p) · GW(p)
You end up with a homogeneous "consensus" in the "community" that reflects whoever is willing to spend more effort on spam and disinformation.
I remark that this is not a million miles from what Eugine_Nier tried to do, and unfortunately he was not entirely unsuccessful. (Though he didn't get nearly as far as producing a homogeneous consensus in favour of his ideas.)
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T03:17:52.573Z · LW(p) · GW(p)
Re: #2, it seems like most of the politics discussion places online quickly become dominated by one view or another. If you wanted to solve this problem, one idea is:
1) Start an apolitical discussion board.
2) Gather lots of members. Try to make your members a representative cross-section of smart people.
3) Start discussing politics, but with strong norms in place to guard against the failure mode where people whose view is in the minority leave the board.
I explained here why I think reducing political polarization through this sort of project could be high-impact.
Re: #3, I explain why I think this is wrong in this post. "Strong writers enjoy their independence" - I'm not sure what you're pointing at with this. I see lots of people who seem like strong writers writing for Medium.com or doing newspaper columns or even contributing to Less Wrong (back in the day).
(I largely agree otherwise.)
↑ comment by FourFire · 2016-11-27T19:29:53.602Z · LW(p) · GW(p)
I agree completely.
Politics has most certainly damaged the potential of SSC. Notably, far fewer useful insights have resulted from the site and its readership than was the case with LessWrong at its peak, but that is how Yvain wanted it, I suppose. The comment section has, to my understanding, become a haven for NRx and other types considered unsavoury by much of the rationalist community, and the quality of discussion is substantially lower in general than it could have been.
Sure.
As for the codebase: just start over, but carry over the useful ideas already implemented, such as disincentivizing flamewars by making responses to downvoted comments cost karma, awarding zero initial karma for posting, and any other mechanics fostering rational discussion that have become apparent since then.
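The karma-fee mechanic mentioned here could be sketched roughly as follows (a hypothetical illustration; the fee amount and downvote threshold are made-up tuning knobs, not the values old LW actually used):

```python
KARMA_FEE = 5            # illustrative cost of replying under a downvoted comment
DOWNVOTE_THRESHOLD = -3  # comments at or below this score are "discouraged"

def reply_cost(parent_score: int) -> int:
    """Karma cost of replying: free normally, but replying under a
    heavily-downvoted comment costs karma, discouraging flamewars."""
    return KARMA_FEE if parent_score <= DOWNVOTE_THRESHOLD else 0

assert reply_cost(10) == 0   # normal comment: replying is free
assert reply_cost(-5) == 5   # downvoted comment: replying costs karma
```

The design intent is that a flamewar under a bad comment becomes steadily more expensive for everyone who keeps it going, while normal discussion is unaffected.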
I agree, make this site read only, use it and the wiki as a knowledge base, and start over somewhere else.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T03:30:38.818Z · LW(p) · GW(p)
disincentivizing flamewars by making responses to downvoted comments cost karma
I think Hacker News has a better solution to that problem (if you reply to someone who replied to you, your reply gets delayed--the deeper the thread, the longer the delay).
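The depth-based delay described here could look something like this (a sketch only; HN's actual delay formula is not public, and the per-level delay here is an assumption):

```python
from datetime import datetime, timedelta

def reply_visible_at(posted_at: datetime, thread_depth: int,
                     base_delay_minutes: int = 2) -> datetime:
    """Delay a reply's visibility in proportion to thread depth.

    Deep back-and-forth exchanges (often flamewars) get slowed down,
    while top-level discussion appears immediately.
    """
    delay = timedelta(minutes=base_delay_minutes * thread_depth)
    return posted_at + delay

now = datetime(2016, 11, 28, 3, 30)
assert reply_visible_at(now, 0) == now                            # top-level: no delay
assert reply_visible_at(now, 5) == now + timedelta(minutes=10)    # deep reply: delayed
```

Because the delay compounds over a back-and-forth exchange, heated two-person threads cool off naturally without any moderator action.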
Replies from: SatvikBeri↑ comment by SatvikBeri · 2016-11-28T03:45:39.609Z · LW(p) · GW(p)
I wonder if the correct answer is essentially to fork Hacker News, rather than Reddit (Hacker News isn't open source, but I'm thinking about a site that takes Hacker News's decisions as the default, unless there seems to be a good reason for something different.)
Replies from: John_Maxwell_IV, John_Maxwell_IV, John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T04:46:19.235Z · LW(p) · GW(p)
Well, there's a vanilla version of HN that comes with the Arc distribution. It doesn't look like any of the files in the Arc distribution have been modified since Aug 4, 2009. I just got it running on my machine (only took a minute) and submitted a link. Unsure what features are missing. Relevant HN discussion.
If someone knows Paul Graham, we might be able to get a more recent version of the code, minus spam prevention features & such? BTW, I believe Y Combinator is hiring hackers. (Consider applying!)
Arc isn't really used for anything besides Hacker News. But it's designed to enable "exploratory programming". That seems ideal if you wanted to do a lot of hands-on experimentation with features to facilitate quality online discussion. (My other comment explains why there might be low-hanging fruit here.)
Replies from: SatvikBeri↑ comment by SatvikBeri · 2016-11-28T05:52:49.828Z · LW(p) · GW(p)
Hacker News was rewritten in something other than Arc ~2-3 years ago IIRC, and it was only after that that they managed to add a lot of the interesting moderation features.
There are probably better technologies to build an HN clone in today (Clojure seems strictly better than Arc, for instance). The parts of HN that are interesting to copy are the various discussion and moderation features, and my sense of what they are comes mostly from having observed the site and from comments here and there over the years.
Replies from: toner↑ comment by toner · 2016-11-29T09:12:06.230Z · LW(p) · GW(p)
Here is some alternative code for building an HN clone: https://github.com/jcs/lobsters (see https://lobste.rs/about for differences to HN).
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T04:10:32.186Z · LW(p) · GW(p)
Yes, I think Hacker News is plausibly the best general-purpose online discussion forum right now. It would not surprise me if it's possible to do much better, though. As far as I can tell, most online discussion software is designed to maximize ad revenue (or some proxy like user growth/user engagement) rather than quality discussions. Hacker News is an exception because the entire site is essentially a giant advertisement to get people applying for Y Combinator, and higher-quality discussions make it a better-quality advertisement.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-12-04T17:42:21.976Z · LW(p) · GW(p)
Relevant: http://danluu.com/hn-comments/
↑ comment by Paul Crowley (ciphergoth) · 2016-11-27T18:49:06.997Z · LW(p) · GW(p)
This is the platform Alexandros is talking about: http://www.telescopeapp.org/
↑ comment by Lumifer · 2016-11-30T15:40:02.170Z · LW(p) · GW(p)
If I were NRx, I would feel very amused at the idea of LW people coming to believe that they need to invite an all-powerful dictator to save them from decay and ruin... :-D
Replies from: skeptical_lurker, Alexandros↑ comment by skeptical_lurker · 2016-12-01T00:23:05.968Z · LW(p) · GW(p)
What's hilariously ironic is that our problem immigrants are Eugine's sockpuppets, when Eugine is NRx and anti-immigrant.
That Eugine is so much of a problem is actually evidence in favour of some of his politics.
Replies from: Viliam, hairyfigment↑ comment by hairyfigment · 2016-12-01T23:44:06.949Z · LW(p) · GW(p)
You're talking about someone using the easiest method of disruption available to individuals, combined with individual voter fraud.
This is difficult to stop because of the site's code, which I think the single owner of the site chose.
↑ comment by Alexandros · 2016-12-02T08:55:54.476Z · LW(p) · GW(p)
LW has a BDFL already. He's just not very interested and (many) people don't believe he's able to restore the website. We didn't "come to believe" anything.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2016-12-02T11:50:35.989Z · LW(p) · GW(p)
No, EY effectively doesn't act as a BDFL. He doesn't have the effective power to ban contributors. The last time I asked him to delete a post he said that he can't for site political reasons.
The site is also owned by MIRI and not EY directly.
↑ comment by Lumifer · 2016-12-02T15:49:32.829Z · LW(p) · GW(p)
LW has a BDFL already.
Lessee... He isn't so much benevolent as he is absent. I don't see him exercising any dictatorial powers and as to "for life", we are clearly proposing that this ain't so.
So it seems you're just wrong. An "absentee owner/founder" is a better tag.
↑ comment by sleepingthinker · 2017-02-03T19:29:22.240Z · LW(p) · GW(p)
As a newbie, I have to say that I am finding it really hard to navigate around the place. I am really interested in rational thinking and the ways people can improve it, as well as persuasion techniques to try to get people to think rationally about issues, since most of them fall prey to cognitive biases and illogical thinking.
I have found that writing about these concepts for myself really helps in clarifying things, but I sometimes miss having a discussion on these topics, so that's why I came here.
For me, some things that could help improve this site:
1) better organization and making it clearer to navigate
2) a set of easy-to-read newbie texts
3) ability to share interesting posts from other places and discussing them
↑ comment by plethora · 2016-12-06T08:47:30.330Z · LW(p) · GW(p)
I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.
I didn't delete my account a year ago because the site runs on a fork of Reddit rather than HN (and I recall that people posted links to outside articles all the time; what benefit would an HN-style aggregator add over either what we have now or our Reddit fork plus Reddit's ability to post links to external sites?); I deleted it because the things people posted here weren't good.
I think if you want to unify the community, what needs to be done is the creation of more good content and less bad content. We're sitting around and talking about the best way to nominate people for a committee to design a strategy to create an algorithm to tell us where we should go for lunch today when there's a Five Guys across the street. These discussions were going on the last time I checked in on LW, IIRC, and there doesn't seem to have been much progress made.
I haven't seen anyone link to a LW post written after I deleted since I deleted. I suspect this has less to do with aggregators or BDFL nomination committees and more to do with the fact that a long time ago people used to post good things here and then they stopped.
Then again, better CSS wouldn't hurt. This place looks like Reddit. Nobody wants to link to a place that looks like Reddit.
↑ comment by NatashaRostova · 2016-11-28T22:02:53.220Z · LW(p) · GW(p)
Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence.
That's true. LW isn't bringing back yvain/Scott or other similar figures. However, it is a cool training ground/incubator for aspiring writers. As of now I'm a 'no one.' I'd like to try to see if I can become 'some one.' SSC comments don't foster this. LW is a cool place to try, it's not like anyone is currently reading my own site/blog.
comment by PeerGynt · 2016-11-27T09:02:00.374Z · LW(p) · GW(p)
This is not going to happen unless you deal with the troll
Replies from: Viliam, Vaniver↑ comment by Viliam · 2016-11-27T22:30:40.267Z · LW(p) · GW(p)
I really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really couldn't agree more.
↑ comment by Vaniver · 2016-11-28T06:05:59.240Z · LW(p) · GW(p)
There's an issue that I expect will be closed sometime this week that I think will round out the suite of technical tools that will give moderators the edge over trolls. Of course, people are intelligent and can adapt, so I'm not going to hang up a Mission Accomplished banner just yet.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2016-11-29T01:49:53.129Z · LW(p) · GW(p)
I predict that whatever is in this drop will not suffice. It will require at minimum someone who has both significant time to devote to the project, and the necessary privileges to push changes to production.
comment by sarahconstantin · 2016-11-27T10:14:51.374Z · LW(p) · GW(p)
I applaud this and am already participating by crossposting from my blog and discussing.
One thing that I like about using LW as a home base is that everyone knows what it is, for good and for ill. This has the practical benefit of not needing further software development before we can get started on the hard problem of attracting high-quality users. It also has the signaling benefit of indicating clearly that we're "embracing our roots", including reclaiming the negative stereotypes of LessWrongers. (Nitpicky, nerdy, utopian, etc.)
I am unusual in this community in taking "the passions" really seriously, rather than identifying as being too rational to be caught up in them. One of my more eccentric positions has long been that we ought to be a tribe. For all but a few unusual individuals, humans really want to belong to groups. If the group of people who explicitly value reason is the one group that refuses to have "civic pride" or similar community-spirited emotions, then this is not good news for reason. Pride in who we are as a community, pride in our distinctive characteristics, seems to be a necessity, in a cluster of people who aspire to do better than the general public; it's important to have ways to socially reinforce and maintain that higher standard.
Having a website of "our" own is useful for practical purposes, but it also has the value of reinforcing an online locus for the community, which defines, unifies, and distinguishes us. Ideally, our defining "place" will also be a good website where good discussion happens. I think this is a better outcome than group membership being defined by "what parties in Berkeley you get invited to" or "whose FB-friends list you're on" or the other informal social means that have been used as stopgap proxy measures for ingroupiness. People are going to choose demarcations. Why not try to steer the form of those demarcations towards something like "virtue"?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2016-11-30T19:52:14.336Z · LW(p) · GW(p)
One of my more eccentric positions has long been that we ought to be a tribe.
Oof, is this really an eccentric position? FWIW, I am extremely convinced that the rationalist community ought to be a tribe, and one of the biggest updates I made at the CFAR reunion was seeing what felt to me like evidence that we were becoming more functional along tribey directions that I really wanted.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-04T13:19:25.760Z · LW(p) · GW(p)
FWIW, I am extremely convinced that the rationalist community ought to be a tribe,
Why?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2016-12-06T03:25:41.808Z · LW(p) · GW(p)
In short, because I think tribes are the natural environments in which humans live, and that ignoring that fact produces unhappy and dysfunctional humans.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2016-12-06T08:51:25.352Z · LW(p) · GW(p)
There's a logic gap there. You are assuming that rationalists don't have pre-existing tribes, and that they won't be in any tribe if they are not in the rationalist tribe. And you are assuming that rationalists need to be in a rationality tribe in order to be rational... arguably, it works the other way: tribalism enhances groupthink bias, and so lowers the rationality level on the whole.
comment by alyssavance · 2016-11-27T10:39:26.176Z · LW(p) · GW(p)
I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:
"A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating." (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)
That's obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there's no one person with both the technical and moral authority to:
- set the rules that all participants have to abide by, and enforce them
- decide principles for what's on-topic and what's off-topic
- receive reports of trolls, and warn or ban them
- respond to complaints about the site not working well
- decide what the site features should be, and implement the high-priority ones
Pretty much any successful subreddit, even smallish ones, will have a team of admins who handle this stuff, and who can be trusted to look at things that pop up within a day or so (at least collectively). The highest intellectual-quality subreddit I know of, /r/AskHistorians, has extremely active and rigorous moderation, to the extent that a majority of comments are often deleted. Since we aren't on Reddit itself, I don't think we need to go quite that far, but there has to be something in place.
Replies from: Viliam, ciphergoth↑ comment by Viliam · 2016-11-27T22:05:37.155Z · LW(p) · GW(p)
a proactive, responsive admin/moderation team
Which needs to be backed up by a responsive tech support team. Without the support of the tech support, the moderators are only able to do the following:
1) remove individual comments; and
2) ban individual users.
It seems like a lot of power, but for example when you deal with someone like Eugine, it is completely useless. All you can do is play whack-a-mole with banning his obvious sockpuppet accounts. You can't even revert the downvotes made by those accounts. You can't detect the sockpuppets that don't post comments (but are used to upvote the comments made by the active sockpuppets, which then quickly use their karma to mod-bomb the users Eugine doesn't like). So, all you can do is to delete the mod-bombing accounts after the damage was done. What's the point? It will cost Eugine about 10 seconds to create a new one.
(And then Eugine will post some paranoid rant about how you have some super-shady moderator powers, and a few local useful idiots will go like "yeah, maybe the mods are too powerful, we need to stop them", and you keep banging your head against the wall in frustration, wishing you actually had a fraction of those powers Eugine accuses you of having.)
As the situation is now, the moderators are completely powerless to prevent or even reduce Eugine's brigading, and the tech support doesn't give a fuck, and will cite privacy concerns when you ask them for more direct access to the database. At least that is my experience, as a former moderator. Appointing a new moderator, or even hundred new moderators, would not change anything about this, unless they get a direct access to the data, or a more supportive tech support.
EDIT:
And before the problem is fixed, what good will it do to send new users here? First, Eugine will automatically downvote all women. Second, Eugine will downvote anyone who disagrees with him. It's fucking motivating to write for a website where an obsessed user can de facto single-handedly remove all your content and/or moderate the whole discussion about it. And everyone is just looking away and pretending that this doesn't happen, and the real problem is... whatever else.
Come on, if LW is unable to enforce a ban on a single person blatantly abusing the rules and harassing many users who actually contributed or wanted to contribute some quality content... the solution certainly isn't to keep telling more people to come and contribute. Let's finally talk about the elephant in the room.
(Mentioning the elephant in the room will get your comment immediately downvoted to -10 though. Just saying.)
Replies from: alyssavance, SatvikBeri, atucker, PeerGynt↑ comment by alyssavance · 2016-11-27T22:32:02.905Z · LW(p) · GW(p)
Was including tech support under "admin/moderation" - obviously, ability to eg. IP ban people is important (along with access to the code and the database generally). Sorry for any confusion.
Replies from: Viliam↑ comment by Viliam · 2016-11-27T22:59:05.124Z · LW(p) · GW(p)
That's okay, I just posted to explain the details, to prevent people from inventing solutions that predictably couldn't change anything, such as: appoint new or more moderators. (I am not saying more help wouldn't be welcome, it's just that without better access to data, they also couldn't achieve much.)
↑ comment by SatvikBeri · 2016-11-27T22:26:11.146Z · LW(p) · GW(p)
Wow, that is a pretty big issue. Thank you for mentioning this.
Agree with all your points. Personally, I would much rather post on a site where moderation is too powerful and moderators err towards being too opinionated, for issues like this one. Most people don't realize just how much work it is to moderate a site, or how much effort is needed to make it anywhere close to useful.
↑ comment by atucker · 2016-11-27T23:57:03.898Z · LW(p) · GW(p)
What's the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.
Replies from: Viliam↑ comment by Viliam · 2016-11-28T00:47:42.545Z · LW(p) · GW(p)
When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.
Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to make a decision "user xyz123 is abusing the voting mechanism" or "user xyz123 is a sockpuppet of user abc789", describe my case to the other mods, and only after getting their agreement would I learn who "user xyz123" actually is.
(But of course, getting the database without anonymization -- if that would be faster -- would be equally good; I could just anonymize it after I get it.)
Offline so that I could freely run there any computations I imagine, without increasing bills for hosting. Also, to have it faster, not be limited by internet bandwidth, and to be free to use any programming language.
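The anonymization step being described could be as simple as a consistent random relabeling of handles in the vote table (a sketch under assumed data shapes; the tuple layout and `user_` prefix are hypothetical):

```python
import secrets

def anonymize_votes(votes):
    """Replace user handles with stable random identifiers.

    `votes` is a list of (voter, target, direction) tuples. The same
    handle always maps to the same pseudonym, so voting *patterns* are
    preserved while identities stay hidden until a decision is made.
    """
    mapping = {}

    def pseudonym(handle):
        if handle not in mapping:
            mapping[handle] = "user_" + secrets.token_hex(4)
        return mapping[handle]

    anon = [(pseudonym(v), pseudonym(t), d) for v, t, d in votes]
    return anon, mapping  # the mapping stays sealed until the mods agree

votes = [("alice", "bob", -1), ("alice", "carol", -1), ("dave", "bob", +1)]
anon, mapping = anonymize_votes(votes)
assert anon[0][0] == anon[1][0]         # same voter, same pseudonym
assert len(set(mapping.values())) == 4  # four distinct users relabeled
```

Keeping the real-name mapping out of the analyst's hands until after the judgment call is exactly the bias protection described above.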
What specific computations would I run there? Well, that's kinda the point: I don't know in advance. I would try different heuristics, and see what works. Also, I suspect there would have to be some level of "security by obscurity", to avoid Eugine adjusting to my algorithms. (For example, if I defined karma-assassination as "user X downvoted all comments by user Y" and made that information public, Eugine could simply downvote all comments but one, to avoid detection. Similarly, if sockpuppeting were defined as "user X posts no comments, and only upvotes everything except user Y", Eugine could make X post exactly one comment, and upvote one random comment by someone else. The only way to make this game harder for the opponent is not to make the heuristics public. They would merely be explained to the other moderators.)
So I would try different definitions of "karma assassination" and different definitions of "sockpuppets", see what the algorithm reports, and whether looking at the reported data again matches my original intuition. (Maybe the algorithm reports too much, because e.g. if a user posted only one comment on LW, then downvoting his comment was detected as "downvoting all comments from a given user", although I obviously didn't have that in mind. Or maybe there was a spammer, and someone downvoted all his comments perfectly legitimately.)
Then the next step would be, as long as I believe I have a correct algorithm, to set up a script for monitoring the database, and reporting me the kind of behavior that matches the heuristic automatically. This is because I believe that investigating things reported by users is already too late, and introduces biases. Some people will not report karma assassination, because they will mistake it for genuine dislike by the community; especially the new users intimidated by the website. On the other hand, some people will report every single organic downvote, even if they well deserved it. I have seen both cases during my role. It's better if an algorithm reports suspicious behavior. (The existing data would be used to define and test heuristics about what "suspicious behavior" is.)
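One such heuristic, flagging voters who have downvoted nearly every comment by some target, might look like this (a sketch over assumed vote tuples; the ratio and minimum-comment thresholds are illustrative, and as noted above the real cutoffs would be kept private):

```python
from collections import defaultdict

def flag_karma_assassins(votes, comment_counts, ratio=0.9, min_comments=5):
    """Report (voter, target) pairs where the voter downvoted at least
    `ratio` of the target's comments, ignoring targets with too few
    comments to judge (a lone downvote is not a pattern)."""
    downs = defaultdict(set)  # (voter, target) -> set of downvoted comment ids
    for voter, target, comment_id, direction in votes:
        if direction < 0:
            downs[(voter, target)].add(comment_id)
    flagged = []
    for (voter, target), ids in downs.items():
        total = comment_counts.get(target, 0)
        if total >= min_comments and len(ids) / total >= ratio:
            flagged.append((voter, target))
    return flagged

# "x" downvoted 9 of "y"'s 10 comments; "z" downvoted only one.
votes = [("x", "y", i, -1) for i in range(9)] + [("z", "y", 0, -1)]
assert flag_karma_assassins(votes, {"y": 10}) == [("x", "y")]
```

This also illustrates the false-positive problem mentioned in the surrounding text: the `min_comments` floor exists precisely so that downvoting a new user's only comment, or a spammer's whole output, does not trip the detector.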
That would have been what I wanted. However, Vaniver may have completely different ideas, and I am not speaking for him. Now it's already too late for me; I have a new job and a small baby, not enough free time to spend examining patterns of LW data. Two years ago, I would have the time.
(Another thing is, the voting model has a few obvious security holes. I would need some changes in the voting mechanism implemented, preferably without having a long public debate about how exactly the current situation can be abused to take over the website by a simple script. If I had a free weekend, I could write a script that would nuke the whole website. If Eugine has at least average programming skills, he can do this too; and if we start an arms race against him, he may be motivated to do it as a final revenge.)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2016-11-28T00:51:50.591Z · LW(p) · GW(p)
It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than having it visible to readers who upvoted or downvoted which post, as on Facebook). But I haven't thought about it much.
Replies from: Viliam, Raemon, sarahconstantin, Kaj_Sotala↑ comment by Viliam · 2016-11-28T09:47:21.347Z · LW(p) · GW(p)
If upvotes/downvotes are public, some people are going to reward/punish those who upvoted/downvoted them.
It can happen without full awareness... the user will simply notice that X upvotes them often and Y downvotes them often... they will start liking X and disliking Y... they will start getting pleasant feelings when looking at comments written by X ("my friend is writing here, I feel good") and unpleasant feelings when looking at comments written by Y ("oh no, my nemesis again")... and that will be reflected by how they vote.
And this is the charitable explanation. Some people will do this with full awareness, happy that they provide incentives for others to upvote them, and deterrence to those who downvote. -- Humans are like this.
Even if the behavior described above did not happen, people would still instinctively expect it to happen, so it would still have a chilling effect. -- On the other hand, some people might enjoy publicly downvoting e.g. Eliezer, to earn contrarian points. Either way, different forms of signalling would get involved.
From the view of game theory, if some people had a reputation for being magnanimous about downvotes, and other people were suspected of being vengeful about downvotes, people would be more willing to downvote the former, which creates incentives for passive-aggressive behavior. (I am talking about a situation where everyone suspects that X downvotes those who downvoted him, but X can plausibly deny doing that, claiming he genuinely disliked all the stuff he downvoted; you can either have an infinite debate about it with X acting outraged about unfair accusations, or just let it slide, but still everyone knows that downvoting X is bad for their own karma.)
tl;dr -- the same reasons why elections are secret
EDIT:
After reading Raemon's comment I am less sure about what I wrote here. I still believe that public upvotes and downvotes can cause unnecessary drama, but maybe that would still be an improvement over the situation when a reasonable comment gets 10 downvotes from sockpuppet accounts, or someone gets one downvote for each comment including those written years ago, and it is not clearly visible what exactly is happening unless moderators get involved (and sometimes not even then).
On the other hand, I believe that some content (too stupid, or aggressive) should be removed from the debate. Maybe not deleted completely, but at least hidden by default (such as currently the comments with karma -5 or less). But I agree that this should not apply to not-completely-insane comments posted by newbies in good faith. Such comments should be merely sorted to the bottom of the page. What should be removed is violations of community norms, and "spamming" (i.e. trying to win a debate by quantity of comments that don't bring new points, merely inflate the visibility of the already expressed ones).
At this moment I am imagining some kind of hybrid system, where upvotes (either private or public, no clear opinion on this yet) would be given freely, but downvotes could only be given for specific reasons (they would be equivalent to flagging) and in case of abuse the user could lose the ability to downvote (i.e. the downvotes would be either public, or at least visible to moderators).
And here is a quick-fix idea: as the first step, make downvotes public for moderators. That would at least allow them to quickly detect and remove Eugine's sockpuppets. -- For example, a moderator could have a new button below each comment, which would display the list of downvoters (with hyperlinks to their user pages). Also, make a script that reverts all votes given by a user, and make it easily accessible from the "banned users" admin page (i.e. it can only be applied to already banned users). To help other moderators spot possible abuse, the name of the moderator who ran the script on a user could be displayed on the same admin page. (For extra precaution, the "revert all votes" button could be made inaccessible to the moderator who banned the user, so at least two moderators must participate in a vote purge.)
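A sketch of what the "revert all votes" step might look like, assuming a simple in-memory vote log (the dict shape and function name are illustrative, not the actual LW schema):

```python
def revert_all_votes(banned_user, vote_log, karma):
    """Undo every vote cast by `banned_user` and drop those votes from the log.

    vote_log: list of {"voter": ..., "author": ..., "delta": +1 or -1}
    karma: dict mapping author -> karma score, updated in place
    """
    remaining = []
    for vote in vote_log:
        if vote["voter"] == banned_user:
            karma[vote["author"]] -= vote["delta"]  # roll back the effect
        else:
            remaining.append(vote)
    return remaining
```

Because the operation is purely mechanical, it could safely be exposed as a one-click admin action restricted to already-banned accounts, as suggested above.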
↑ comment by Raemon · 2016-11-28T02:32:22.621Z · LW(p) · GW(p)
It's not actually obvious to me that downvotes are even especially useful. I understand what purpose they're supposed to serve, but I'm not sure they actually serve it.
It seems like if we removed them, a major tool available to trolls is just gone.
I think downvoting is also fairly punishing for newcomers - I've heard a few people mention they avoided Less Wrong due to worry about downvoting.
Good vs bad posts could be discerned just by looking at total likes, the way it is on Facebook. Actual spam could just be reported rather than downvoted, which triggers mod attention but has no visible effect.
Replies from: SatvikBeri, scarcegreengrass, Viliam↑ comment by SatvikBeri · 2016-11-28T03:06:25.557Z · LW(p) · GW(p)
Alternatively, go with the Hacker News model of only enabling downvotes after you've accumulated a large amount of karma (enough to put you in, say, the top 0.5% of users). I think this gets most of the advantages of downvotes without the issues.
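A minimal sketch of that eligibility rule (the 0.5% cutoff is the figure suggested above; the function name and data shape are illustrative):

```python
def can_downvote(user_karma, all_karma, top_fraction=0.005):
    """True if user_karma falls within the top `top_fraction` of all users."""
    cutoff_rank = max(1, int(len(all_karma) * top_fraction))
    # karma of the lowest-ranked user still inside the top fraction
    cutoff = sorted(all_karma, reverse=True)[cutoff_rank - 1]
    return user_karma >= cutoff
```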
↑ comment by scarcegreengrass · 2016-11-28T19:17:24.364Z · LW(p) · GW(p)
I agree. In addition to the numerous good ideas suggested in this tree, we could also try the short term solution of turning off all downvoting for the next 3 months. This might well increase population.
(Or similar variants like turning off 'comment score below threshold' hiding, etc)
↑ comment by Viliam · 2016-11-28T10:16:15.705Z · LW(p) · GW(p)
Good vs bad posts could be discerned just by looking at total likes, the way it is on Facebook.
Preferably also sorted by the number of total likes. Otherwise the only difference between a comment with 1 upvote and 15 upvotes is a single character on screen that requires some attention to even notice.
Actual spam could just be reported rather than downvoted
There are some kinds of behavior which in my opinion should be actively discouraged besides spam: stubborn stupidity, or verbal aggressiveness towards other debaters. It would be nice to have a mechanism to do something about them, preferably without getting moderators involved. But maybe those could also be flagged, and maybe moderators should have a way to attach a warning to a comment without removing it completely. (I imagine a red text saying "this comment is unnecessarily rude", which would also effectively halve the number of likes for the purpose of comment sorting.)
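For illustration, a sorting rule along those lines might look like this (the field names are made up):

```python
def sort_score(comment):
    """Rank by likes; a moderator warning effectively halves the like count."""
    likes = comment["likes"]
    return likes / 2 if comment.get("warned") else likes

comments = [
    {"id": 1, "likes": 10, "warned": True},  # effective score 5
    {"id": 2, "likes": 6},                   # effective score 6
]
ranked = sorted(comments, key=sort_score, reverse=True)
```

The warned-but-popular comment stays visible, it just loses ground in the ordering.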
↑ comment by sarahconstantin · 2016-11-28T15:15:09.181Z · LW(p) · GW(p)
I think that upvotes/downvotes being private has important psychological effects. If you can get a sense of who your "fans" vs "enemies" are, you will inevitably try to play to your "fans" and develop dislike for your "enemies." I think this is the primary thing that makes social media bad.
My current cutoff for what counts as a "social media" site (I have resolved to never use social media again) is "is there a like mechanic where I can see who liked me?" If votes on LW were public, by that rule, I'd have to quit.
Replies from: Kaj_Sotala, Vladimir_Nesov↑ comment by Kaj_Sotala · 2016-11-28T16:06:06.196Z · LW(p) · GW(p)
you will inevitably try to play to your "fans"
Could you elaborate on what you mean by this? "Posting different kinds of articles on LW and writing more of the kind of stuff that gets upvoted" also sounds like "playing to your fans" to me - in both cases you're responding to feedback and (rationally) tailoring your content towards your preferred target audience, even though in the LW case, you aren't entirely sure of who your target audience consists of.
↑ comment by Vladimir_Nesov · 2016-11-28T15:54:55.620Z · LW(p) · GW(p)
My current cutoff for what counts as a "social media" site (I have resolved to never use social media again) is "is there a like mechanic where I can see who liked me?" If votes on LW were public, by that rule, I'd have to quit.
Do you mean that the group dynamic itself changes for the worse if likes are visible to those who want to see them, so that it doesn't matter if there is a setting that makes the likes invisible to you in particular? It's a tradeoff, some things may get worse, others may get better. I don't have a clear sense of this tradeoff.
↑ comment by Kaj_Sotala · 2016-11-28T10:33:54.668Z · LW(p) · GW(p)
Imagine that you're a new person who's a little shy about the forum, but has read a large part of the Sequences and really thinks that Eliezer is awesome, and then you make your first post and see that Eliezer himself has downvoted you.
The psychological impact of that downvote would likely be a lot bigger than the impact a single downvote should have.
OTOH, making upvotes public would probably be a good change: seeing a list of people who upvoted you feels a lot more motivating to me than just getting an anonymous number.
↑ comment by PeerGynt · 2016-11-28T05:19:54.561Z · LW(p) · GW(p)
the tech support doesn't give a fuck, and will cite privacy concerns when you ask them for more direct access to the database
Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? Why do they then get their contract renewed? Are they taking orders from some secret deep owners of LW who outrank the moderators?
Replies from: RyanCarey↑ comment by RyanCarey · 2016-11-28T08:28:12.737Z · LW(p) · GW(p)
Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? ...Why do they then get their contract renewed?
The tech support is Trike Apps, who have freely donated a huge amount of programmer time toward building and maintaining LessWrong.
Replies from: Viliam↑ comment by Viliam · 2016-11-28T12:40:34.600Z · LW(p) · GW(p)
Yeah, it's a bit of a "don't look a gift horse in the mouth" situation. When someone donates a lot of time and money to you, and suddenly becomes evasive or stubborn about some issue that is critical to solve properly... what are you going to do? It's not like you can threaten to fire them, right?
In hindsight, I made a few big mistakes there. I didn't call Eliezer to have an open debate about what exactly is and isn't in my competence; that is, in case of different opinions about what should be done, who really has the last word. Instead I gave up too soon; when one of my ideas was rejected I tried to find an alternative solution, only to have it rejected again... or I would finally succeed at something, and then see that Eugine had improved his game, and now I was going to have another round of negotiation... until I gradually developed a huge "ugh field" around the whole topic... and wasted a lot of time... and then other people took over the role and had to start from the beginning again.
↑ comment by Paul Crowley (ciphergoth) · 2016-11-27T19:02:44.277Z · LW(p) · GW(p)
If we built it, would they come? You make a strong case that the workforce wasn't made able to do the job; if that were fixed, would the workforce show up?
comment by Alexei · 2016-11-27T06:42:19.687Z · LW(p) · GW(p)
I strongly agree with this sentiment, and Arbital's current course is to address this problem. I realize there have been several discussions on LW about bringing LW back / doing LW 2.0, and Arbital has often come up. Up until two weeks ago we were focusing on "Arbital as the platform for intuitive math explanations", but that proved to be harder to scale than we thought. We have now pivoted to a more discussion-oriented, truth-seeking north star, which was our long-term goal all along. We are going to need innovation and experimentation both on the software and the community levels, but I'm looking forward to the challenge. :)
Replies from: AnnaSalamon, John_Maxwell_IV, malcolmocean, casebash↑ comment by AnnaSalamon · 2016-11-27T07:01:11.106Z · LW(p) · GW(p)
I am extremely excited about this. I suspect we should proceed trying to reboot Less Wrong, without waiting, while also attempting to aid Arbital in any ways that can help (test users, etc.).
Replies from: RyanCarey↑ comment by RyanCarey · 2016-11-27T21:45:15.138Z · LW(p) · GW(p)
If half-hearted attempts are doomed (plausible), or more generally we're operating in a region where expected returns on invested effort are superlinear (plausible), then it might be best to commit hard to projects (>1 full-time programmer) sequentially.
Replies from: Mqrius↑ comment by Mqrius · 2016-12-05T08:05:15.565Z · LW(p) · GW(p)
Does that take into account, for example, Arbital seeming less promising to people / getting less engagement, because all the users have just sunk energy into trying to get by on a revived LW?
There's an intuition pump I could make that I haven't fully fleshed out yet, that goes something like: if both Arbital and Less Wrong get worked on, then whichever seems more promising or better to use will gain more traction and end up on top in a very natural way, without having to go through an explicit failure of the other one.
There's caveats/responses to that as well of course — it just doesn't seem 100% clear cut to me.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T05:21:05.928Z · LW(p) · GW(p)
We now pivoted to a more discussion-oriented truth-seeking north star, which was our long-term goal all along.
Exciting stuff!
Are you planning to engage with the LW community to figure out what features to implement?
I know that Eliezer was heavily involved with Arbital's product management. But I think it's a mistake to make him the BDFL for LW 2.0, because LW 1.0 failed, and this was plausibly due to actions he took. Beware the halo effect: someone can simultaneously be a great blogger and a lousy product manager/forum moderator. I think we should let someone else like Vaniver have a try.
If you're planning to engage with the community (which I would strongly recommend--ignoring their userbase is the kind of thing failed startups do), I suggest waiting a bit and then creating a new thread about this, to simulate the effect of a sticky.
Replies from: Alexei↑ comment by Alexei · 2016-11-30T16:33:03.865Z · LW(p) · GW(p)
Are you planning to engage with the LW community to figure out what features to implement?
Eric R and I read all the comments in this thread. We've also met with multiple people in person to discuss exactly what the platform should look like. So the broad answer is "yes", but if you have a specific mode of engagement in mind, then it might be "no".
I know that Eliezer was heavily involved with Arbital's product management.
He is an adviser. There are no advocates to make him a BDFL as far as I know.
I suggest waiting a bit and then creating a new thread about this, to simulate the effect of a sticky.
I expect we'll have a public beta ready in two weeks. I plan to write a blog post of my own to explain Arbital in more detail.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-12-01T09:22:50.220Z · LW(p) · GW(p)
Sounds great!
if you have a specific mode of engagement in mind, then it might be "no".
Well, if you created a new thread called "Eric and I are taking suggestions for Arbital", I imagine you might get a lot more relevant ideas and feedback :)
↑ comment by MalcolmOcean (malcolmocean) · 2016-11-27T13:32:22.695Z · LW(p) · GW(p)
I'm very excited to have an Arbital-shaped discussion and writing platform. I've been thinking for awhile that I want some of my online writing to become less blog-like, more wiki-like, but I don't actually want to use a wiki because... yeah. Wikis.
Arbital seems way better. Is it at the point now where I could start posting some writing/models to it?
Replies from: Alexei↑ comment by casebash · 2016-11-27T18:00:56.165Z · LW(p) · GW(p)
If Arbital provides a solution, then that would be great, but I think it is best to have multiple projects operating at the same time.
Replies from: Alexei↑ comment by Alexei · 2016-11-28T00:28:49.569Z · LW(p) · GW(p)
Why?
Replies from: casebash, Drea↑ comment by casebash · 2016-11-28T14:13:37.531Z · LW(p) · GW(p)
Gives us two chances to succeed.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2016-11-30T19:50:02.896Z · LW(p) · GW(p)
But also weakens both options' ability to be a Schelling point.
↑ comment by Drea · 2016-12-11T19:59:36.728Z · LW(p) · GW(p)
I can see value in having LW as a prototype or scratch pad, making simple modifications of existing discussion platforms (e.g. improved moderator powers as discussed above). Then Arbital can do the harder work of building a collaborative truth-seeking platform, adding in features to, for example, support Double Crux, fine-typed voting, or evidence (rather than comments).
Perhaps in the end there's a symbiosis, where LW is for discussion, and when a topic comes up that needs truth-seeking it's moved to Arbital. That frees Arbital from having to include a solved problem in its code base.
comment by Vladimir_Nesov · 2016-11-27T17:37:31.565Z · LW(p) · GW(p)
Successful conversations usually happen as a result of selection circumstances that make it more likely that interesting people participate. Early LessWrong was interesting because of the posts, then there was a phase when many were still learning, and so were motivated to participate, to tutor one another, and to post more. But most don't want to stay in school forever, so activity faded, and the steady stream of new readers has different characteristics.
It's possible to maintain a high quality blog roll, or an edited stream of posts. But with comments, the problem is that there are too many of them, and bad comments start bad conversations that should be prevented rather than stopped, thus pre-moderation, which slows things down. Controlling their quality individually would require a lot of moderators, who must themselves be assessed for quality of their moderation decisions, which is not always revealed by the moderators' own posts. It would also require the absence of drama around moderation decisions, which might be even harder. Unfortunately, many of these natural steps have bad side effects or are hard to manage, so should be avoided when possible. I expect the problem can be solved either by clever algorithms that predict quality of votes, or by focusing more on moderating people (both as voters and as commenters), instead of moderating comments.
On Stack Exchange, there is a threshold for commenting (not just asking or answering), a threshold for voting, and a separate place ("meta" forum) for discussing moderation decisions. Here's my guess at a feature set sufficient for maintaining good conversations when the participants didn't happen to be selected for generating good content by other circumstances:
- All votes are tagged by the voters, it's possible to roll back the effect of all votes by any user.
- There are three tiers of users: moderators, full members, and regular users. The number of moderators is a significant fraction of the number of full members, so there probably should be a few admins who are outside this system.
- Full members can reply to comments without pre-moderation, while regular users can only leave top-level comments and require pre-moderation. There must be a norm against regular users posting top-level comments to reply to another comment. This is the goal of the whole system, to enable good conversations between full members, while allowing new users to signal quality of their contributions without interfering with the ongoing conversations.
- Full members and moderators are selected and demoted based on voting by moderators (both upvoting and downvoting, kept separate). The voting is an ongoing process (like for comments, posts) and weighs recent votes more (so that changes in behavior can be addressed). The moderators vote on users, not just on their comments or posts. Each user has two separate ratings, one that can make them a full member, and the other that can make them a moderator, provided they are a full member.
- Moderators see who votes how, both on users and comments, and can use these observations to decide who to vote for/against being a moderator. By default, when a user becomes a full member, they also become a moderator, but can then be demoted to just a full member if other moderators don't like how they vote. All votes by demoted moderators and the effects of those votes, including on membership status of other users, are automatically retracted.
- A separate meta forum for moderators, and a norm against discussing changes in membership status etc. on the main site.
This seems hopelessly overcomplicated, but the existence of Stack Exchange is encouraging.
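A toy model of the core mechanics described above — moderator votes on users tagged by voter, and automatic retraction when a moderator is demoted — could look like this (all names are illustrative, and the recency weighting and separate moderator rating are omitted for brevity):

```python
class User:
    def __init__(self, name):
        self.name = name
        self.is_full_member = False
        self.is_moderator = False
        self.member_score = 0   # net moderator votes toward full membership
        self.votes_cast = []    # (target, delta) pairs, tagged by this voter

def cast_member_vote(voter, target, delta):
    """A moderator votes (+1/-1) on another user's membership status."""
    assert voter.is_moderator
    voter.votes_cast.append((target, delta))
    target.member_score += delta
    target.is_full_member = target.member_score > 0

def demote_moderator(user):
    """Demotion automatically retracts every vote the moderator has cast."""
    user.is_moderator = False
    for target, delta in user.votes_cast:
        target.member_score -= delta
        target.is_full_member = target.member_score > 0
    user.votes_cast = []
```

The point the sketch captures is that because every vote is tagged by its voter, rolling back a demoted moderator's influence is a mechanical operation rather than a judgment call.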
comment by Raemon · 2016-11-28T16:12:55.100Z · LW(p) · GW(p)
Quick note: Having finally gotten used to using discussion as the primary forum, I totally missed this post as a "promoted" post and would not have seen it if it hadn't been linked on Facebook, ironically enough.
I realize this was an important post that deserved to be promoted in any objective sense, but I'm not sure promoting things is the best way to do that at this point.
Replies from: TheAltar, Vaniver↑ comment by TheAltar · 2016-11-28T23:25:44.529Z · LW(p) · GW(p)
Having the best posts be taken away from the area where people can easily see them is certainly a terrible idea architecture-wise.
The solution to this is what all normal subreddits do: sticky the post and change the color of the title so that it both stands out and stays in the same visual range as everything else.
↑ comment by Vaniver · 2016-11-28T16:58:27.810Z · LW(p) · GW(p)
I realize this was an important post that deserved to be promoted in any objective sense, but am not sure promoting things is the best way to do that by this point.
Promoting posts gets them into the RSS feed. Making it possible to promote Discussion posts, or having promoted posts appear in Discussion also, or some other similar approach seems worthwhile.
Replies from: CronoDAS, Raemon↑ comment by CronoDAS · 2016-11-28T21:56:21.777Z · LW(p) · GW(p)
I follow the Discussion RSS feed but stopped following the Main RSS feed after Main shut down.
Replies from: Vaniver↑ comment by Vaniver · 2016-11-28T23:02:20.481Z · LW(p) · GW(p)
According to Feedly, 96 users are following the discussion RSS and 11k are following the Main RSS.
(Feedly is probably not the only place I should be checking to compare those two, but the effect size seems pretty huge. The main problem is missing people who actually check the website every day, but go to discussion/new instead of all/new.)
Replies from: Raemon, CronoDAS↑ comment by Raemon · 2016-11-29T20:48:53.226Z · LW(p) · GW(p)
Hmm. Maybe as a short-term solution (until we figure out a way to promote individual discussion posts while keeping them in discussion), for posts like this:
a) create a stub post on Main, which mostly says "we have an important thing to say, check it out in discussion"
b) maybe also make a post on Main saying "Main is now deprecated. Apart from major announcements, all stuff will be in Discussion now. Consider updating your RSS. We're also seeing a lot of old timers return to post these days, check it out". etc.
Replies from: Vaniver↑ comment by Vaniver · 2016-11-29T21:37:19.753Z · LW(p) · GW(p)
Consider updating your RSS
I don't think this will happen with a sufficiently large number of people to make that a good option. I think my current best plan is to keep the sitewide RSS as having only promoted posts, but including promoted posts in Discussion. We can also advertise the Discussion RSS a bit more heavily, but I don't know how many people will want to do that relative to just checking LW.
↑ comment by Raemon · 2016-11-28T17:20:20.050Z · LW(p) · GW(p)
Gotcha. Agreed. Do you have any sense of how big a change that is?
Sometime after Solstice I can hopefully dedicate more time to hacking on Less Wrong.
Replies from: Vaniver↑ comment by Vaniver · 2016-11-28T19:38:14.644Z · LW(p) · GW(p)
Do you have any sense of how big a change that is?
I haven't looked at the code that generates the subreddit pages, so not really. It seems like it'd likely be a one-line change in an eligibility function somewhere, but finding that line seems rough.
comment by SatvikBeri · 2016-11-27T06:07:50.056Z · LW(p) · GW(p)
I think this is completely correct, and have been thinking along similar lines lately.
The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth-tracking actually is the norm.
The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)
I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.
Replies from: AnnaSalamon, AnnaSalamon, Evan_Gaensbauer↑ comment by AnnaSalamon · 2016-11-27T06:25:03.600Z · LW(p) · GW(p)
Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)
This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be. (E.g., perhaps there should be a norm of posting all "arguments you want the community to be aware of" to Less Wrong or another central place, and of keeping up with all highly upvoted / promoted / otherwise "single point of coordination-marked" posts to LW.)
I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.
Replies from: John_Maxwell_IV, SatvikBeri, SatvikBeri↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-27T13:02:16.833Z · LW(p) · GW(p)
I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.
Participating in online discussions tends to reduce one's attention span. There's the variable reinforcement factor. There's also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)
These effects are so strong that if I stay away from the internet for a few days ("internet fast"), my attention span increases dramatically. And if I've posted comments online yesterday, it's hard for me to focus today--there's always something in the back of my mind that wants to check & see if anyone's responded. I need to refrain from making new comments for several days before I can really focus.
Lots of people have noticed that online discussions sap their productivity this way. And due to the affect heuristic, they downgrade the importance & usefulness of online discussions in general. I think this inspired Patri's Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting... so if video games are a distracting waste of time, Less Wrong must also be, right?
Except that doesn't follow. Online content can be really valuable to read. Bloggers don't have an incentive to pad their ideas the way book authors do. And they write simply instead of unnecessarily obfuscating like academics. (Some related discussion.)
Participating in discussions online is often high leverage. The ratio of readers to participants in online discussions can be quite high. Some numbers from the LW-sphere that back this up:
- In 2010, Kevin created a thread where he asked lurkers to say hi. The thread generated 617 comments.
- 77% of respondents to the Less Wrong survey have never posted a comment. (And this is a population of readers who were sufficiently engaged to take the survey!)
- Here's a relatively obscure comment of mine that was voted to +2. But it was read by at least 135 logged-in users. Since 54+% of the LW readership has never registered an account, this obscure comment was likely read by 270+ people. A similar case study: a deeply threaded comment posted 4 days after a top-level post, read by at least 22 logged-in users.
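The arithmetic behind that last estimate, spelled out (reader counts taken from the figures above):

```python
logged_in_readers = 135      # logged-in users known to have read the comment
unregistered_share = 0.54    # "54+% of the LW readership has never registered"

# logged-in readers make up at most (1 - 0.54) of the audience,
# so scale the known count up to estimate the total
total_readers = logged_in_readers / (1 - unregistered_share)
# ~293 readers, consistent with the "270+ people" figure above
```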
Based on this line of reasoning, I'm currently working on the problem of preserving focus while participating in online discussions. I've got some ideas, but I'd love to hear thoughts from anyone who wants to spend a minute brainstorming.
Replies from: adamzerner, Evan_Gaensbauer, gworley, John_Maxwell_IV↑ comment by Adam Zerner (adamzerner) · 2016-11-29T07:53:55.498Z · LW(p) · GW(p)
Regarding the idea that online discussion hurts attention span and productivity, I agree for the reasons you say. The book Deep Work (my review) talks more about it. I'm not too familiar with the actual research, but my mind seems to recall that the research supports this idea. Time Well Spent is a movement that deals with this topic and has some good content/resources.
I think it's important to separate internet time from non-internet time. The author talks about this in Deep Work. He recommends that internet time be scheduled in advance, that way you're not internetting mindlessly out of impulse. If willpower is an issue, try Self Control, or going somewhere without internet. I sometimes find it useful to lock my phone in the mailbox downstairs.
I'm no expert, but suspect that LW could do a better job designing for Time Well Spent.
- Remove things on the sidebar like "Recent Posts" and "Recent Comments" (first item on Time Well Spent checklist). They tempt you to click around and stay on longer. If you want to see new posts or comments, you could deliberately choose to click on a link that takes you to a new webpage that shows you those things, rather than always having them shoved in your face.
- Give users the option of "only be able to see things in your inbox once per day". That way, you're not tempted to constantly be checking it. (second item on checklist; letting users disconnect)
- I think it'd be cool to let people display their productivity goals on their profile. Eg. "I check LW Tuesday and Thursday nights, and Sunday mornings. I intend to be working during these hours." That way perhaps you won't feel obligated to respond to people when you should be working. Furthermore, there's the social reward/punishment aspect of it - "Hey! You posted this comment at 4:30 on a Wednesday - weren't you supposed to be working then?"
These are just some initial thoughts. I know that we can come up with much more.
Tangential comment: a big thought of mine has always been that LW (and online forums in general) lead to the same conversation threads being repeated. I.e. the topic of "how to reduce internet distractions" has surely been discussed here before. It'd be cool if there were a central place for that discussion, organized well into some type of community wiki. I envision there being much less "duplication" this way. I also envision a lot more time being spent on "organizing current thoughts" as opposed to "thinking new thoughts". (These thoughts are very rough and not well composed.)
↑ comment by Evan_Gaensbauer · 2016-11-28T05:23:53.548Z · LW(p) · GW(p)
I think this inspired Patri's Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting... so if video games are a distracting waste of time, Less Wrong must also be, right?
I've been thinking about Patri's post for a long time, because I've found the question puzzling. The friends of mine who feel as Patri did then are ones who look to rationality for effective egoism/self-care, entrepreneurship insights, and lifehacks. They're focused on individual rationality, and improved heuristics for improving things in their own lives fast. Doing things by yourself allows for quicker decision-making and tighter feedback loops. It's easier to tell sooner whether what you're doing works.
That's often referred to as instrumental rationality, and it's said that the Sequences tended to focus more on epistemic rationality. But I think a lot of what Eliezer wrote about how to create a rational community which can go on to form project teams and build intellectual movements was instrumental rationality. It's just taken longer to tell whether that's succeeded.
Patri's post was written in 2010. A lot has changed since then. The Future of Life Institute (FLI) is, along with Superintelligence, responsible for boosting AI safety into the mainstream. FLI was founded by community members who met through LessWrong, so that's value added to advancing AI safety that wouldn't have existed if LW had never started. CFAR didn't exist in 2010. Effective altruism (EA) has blown up, and I think LW doesn't get enough credit for generating the meme pool which spawned it. Whatever one thinks of EA, it has achieved measurable progress on its own goals, like how much money is moved, not only through GiveWell but by a foundation with an endowment over $9 billion.
What I've read is the LW community aspiring to do better than science is currently done, to apply rationality in new ways and to new domains, and to make headway on its goals. Impressive progress has been made on many community goals.
↑ comment by Gordon Seidoh Worley (gworley) · 2016-11-27T21:39:10.292Z · LW(p) · GW(p)
I tend to find discussions in comments unhelpful, but enjoy discussions spread out over responding posts. If someone takes the time to write something of length and quality sufficient that they are willing to publish it as a top-level post to their blog/etc., then it's more often worth reading to me. My time is valuable and comments are cheap, so I'd rather read things the author invested thought in writing.
(I recognize the irony that I'm participating in this discussion right now, but this particular discussion seems an unusually good chance to spread my thinking on this topic.)
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-27T14:42:35.234Z · LW(p) · GW(p)
If anyone wants to collaborate in tackling the focus problem, send me a personal message with info on how to contact you. Maybe we can get some kind of randomized trial going.
↑ comment by SatvikBeri · 2016-11-27T10:09:13.089Z · LW(p) · GW(p)
This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be.
I agree that there should be much more active enforcement of good norms than heavy-handed moderation (banning etc.), but I have a cached thought that lack of such moderation was a significant part of why I lost interest in lesswrong.com, though I don't remember specific examples.
In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.
Completely agree. One particularly important mechanism, IMO, is that brains tend to pay substantially more attention to things they perceive other humans caring about. I know I write substantially better code when someone I respect will be reviewing it in detail, and that I have trouble rousing the same motivation without that.
↑ comment by SatvikBeri · 2016-11-27T16:33:57.282Z · LW(p) · GW(p)
Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen.
I suspect one of the reasons people have moved discussions to their own blogs or walls is because they feel like they actually can affect the norms there. Unofficial status works (cf. Eliezer, Yvain) but is not very scalable–it requires people willing to spend a lot of time writing content as well as thinking about, discussing, and advocating for community norms. I think you, Ben, Sarah etc. committing to posting here makes a lesswrong revival more likely to succeed, and would place even higher odds if 1 or more people committed to spending a significant amount of time on work such as:
- Clarifying what type of content is encouraged on less wrong, and what belongs in discussion vs. main
- Writing up a set of discussion norms that people can link to when saying "please do X"
- Talking to people and observing the state of the community in order to improve the norms
- Regularly reaching out to other writers/cross-posting relevant content, along with the seeds of a discussion
- Actually banning trolls
- Managing some ongoing development to improve site features
↑ comment by Vaniver · 2016-11-27T17:50:12.555Z · LW(p) · GW(p)
Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen.
One idea that I had, that I still think is good, is essentially something like the Sunshine Regiment. The minimal elements are:
A bat-signal where you can flag a comment for attention by someone in the Sunshine Regiment.
That shows up in an inbox of everyone in the SR until one of them clicks an "I've got this" button.
The person who took on the post writes an explanation of how they could have written the post better / more in line with community norms.
The basic idea here is that lots of people have the ability to stage these interventions / do these corrections, but (a) it's draining and not the sort of thing that a lot of people want to do more than X times a month, and (b) not the sort of thing low-status but norm-acclimated members of the community feel comfortable doing unless they're given a badge.
A similar system is something like Stack Overflow's review queue, which gives users the ability to review more complicated things as their karma gets higher, and thus offloads basic administrative duties to users in a way that scales fairly well. But while SO is mostly concerned with making sure edits aren't vandalizing the post and garbage gets cleaned up, I think LW benefits from taking a more transformative approach towards posters. (If we have a lot of material that identifies errors of thought and can correct those, then let's use it!)
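The bat-signal / inbox / "I've got this" flow described above can be sketched as a toy queue. All class and method names here are invented for illustration; nothing like this exists in the LW codebase:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    comment_id: str
    claimed_by: Optional[str] = None
    writeup: Optional[str] = None  # the "how this could have been written better" note

class SunshineQueue:
    def __init__(self, members: List[str]):
        self.members = set(members)
        self.flags: List[Flag] = []

    def flag(self, comment_id: str) -> Flag:
        # The bat-signal: anyone can flag a comment for attention.
        f = Flag(comment_id)
        self.flags.append(f)
        return f

    def inbox(self, member: str) -> List[Flag]:
        # Unclaimed flags show up in every member's inbox.
        return [f for f in self.flags if f.claimed_by is None]

    def claim(self, member: str, f: Flag) -> bool:
        # "I've got this": claiming removes the flag from everyone else's inbox.
        if member in self.members and f.claimed_by is None:
            f.claimed_by = member
            return True
        return False

    def respond(self, member: str, f: Flag, writeup: str) -> None:
        # The claimer writes the norm-explaining response.
        assert f.claimed_by == member
        f.writeup = writeup
```

The point of the single `claimed_by` field is exactly the load-balancing property (a): each flag drains at most one volunteer's energy, instead of everyone's.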
↑ comment by sarahconstantin · 2016-11-27T18:34:29.820Z · LW(p) · GW(p)
Happy to join Sunshine Regiment if you can set it up.
↑ comment by SatvikBeri · 2016-11-27T19:06:55.192Z · LW(p) · GW(p)
Also happy to join. And I'm happy to commit to a significant amount of moderation (e.g. 10 hours a week for the next 3 months) if you think it's useful.
↑ comment by AnnaSalamon · 2016-11-27T08:39:16.809Z · LW(p) · GW(p)
good intellectual content
Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability among more folks to add good content.
Probably too meta / too clever an idea, but may be worth some individual brainstorms?
↑ comment by Evan_Gaensbauer · 2016-12-14T11:56:19.892Z · LW(p) · GW(p)
I've been using the Effective Altruism Forum more frequently than LessWrong for at least the past year. I've noticed it's not particularly heavily moderated. For one thing, effective altruism is mediated primarily through in-person communities and social media. So, most of the drama occurring in EA occurs there, and works itself out before it gets to the EA Forum.
Still, though, the EA Forum seems to have a high level of quality content, but without as much active moderation being necessary. The site doesn't get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition and transhumanism, all that and every other cause in EA is fair game for the EA Forum[1]. From my perspective, though, it's far and away host to the highest-quality content in the EA community. So, if anyone else here also finds that to be the case: what makes EA unlike LW, such that its forum doesn't need as many moderators?
(Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need to be moderated are filtered out before they get to the EA Forum, and that the academic tone set in EA encourages people to post more detailed writing.)
[1] I abbreviate "Effective Altruism Forum" as "EA Forum", rather than "EAF", as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don't want people to get confused between the two.
↑ comment by steven0461 · 2016-12-15T16:37:04.561Z · LW(p) · GW(p)
Some guesses:
- The EA forum has less of a reputation, so knowing about it selects better for various virtues
- Interest in altruism probably correlates with pro-social behavior in general, e.g. netiquette
- The EA forum doesn't have the "this site is about rationality, I have opinions and I agree with them, so they're rational, so I should post about them here" problem
comment by RobinHanson · 2016-11-27T18:31:46.662Z · LW(p) · GW(p)
I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.
↑ comment by sarahconstantin · 2016-11-27T18:53:31.550Z · LW(p) · GW(p)
I don't think that a reboot/revival of LW necessarily has to consist entirely of the people who were in the community before. If we produce good stuff, we can attract new people. A totally new site with new branding might get rid of some of the negative baggage of the past, but is also less likely to get off the ground in the first place. Making use of what already exists is the conservative choice.
I hear you as saying that people here should focus on learning rather than leadership. I think both are valuable, but that there's a lack of leadership online, and my intuition is to trust "forward momentum", carrying something forward even if I do not think I am optimally qualified. He who hesitates is lost, etc.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T05:46:45.962Z · LW(p) · GW(p)
I see Anna making the same complaint that you yourself have made a few times: namely, that most online discussions are structured in a way that makes the accumulation of knowledge difficult. (My explanation: no one has an incentive to fix this.)
Is the fact that economists mostly cite each other evidence of "cultish in-group favoring biases"? Probably to some degree. But this hasn't fatally wounded economics.
↑ comment by Venryx · 2017-08-13T01:51:53.445Z · LW(p) · GW(p)
"most online discussions are structured in a way that makes the accumulation of knowledge difficult."
It's a different kind of conversation, but I've been trying to improve on this problem by developing a "debate mapping" website, where conversation is structured in tree form: claims, with arguments underneath that support or oppose each claim, recursively.
This is the website if you're interested: https://debatemap.live
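The recursive claim/argument structure can be sketched as a small tree type. This is a guess at the shape of the data model from the description above, not the site's actual code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One claim in a debate map, with supporting and opposing
    arguments that are themselves claims, recursively."""
    text: str
    supporting: List["Node"] = field(default_factory=list)
    opposing: List["Node"] = field(default_factory=list)

def render(node: Node, depth: int = 0, sign: str = " ") -> List[str]:
    # Flatten the tree into indented lines, marking support (+) and opposition (-).
    lines = [f"{'  ' * depth}[{sign}] {node.text}"]
    for child in node.supporting:
        lines += render(child, depth + 1, "+")
    for child in node.opposing:
        lines += render(child, depth + 1, "-")
    return lines

claim = Node("LW needs a single conversational locus",
             supporting=[Node("Shared jargon lowers inferential distance")],
             opposing=[Node("A mesh network already spreads good posts")])
print("\n".join(render(claim)))
```

Because each argument is itself a `Node`, counter-arguments nest to any depth, which is the "recursively" part of the description.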
↑ comment by John_Maxwell (John_Maxwell_IV) · 2017-08-30T00:51:03.841Z · LW(p) · GW(p)
Glad to see you're working on this, it looks pretty nice!
I think the bottleneck for efforts like this is typically marketing, not code. (Analogy: If you want to found a city, the first step is not to go off alone into the wilderness and build a bunch of houses.) I think I've seen other argument mapping sites, and it seems like every few months someone announces a new & improved discussion website on SlateStarCodex (then it proceeds to not get traction). I suspect the solution is to form a committee/"human kickstarter" of some kind so that everyone who's interested in this problem can coordinate to populate the same site simultaneously. For a project like yours that already has code, the best approach might be to try to join forces with a blogger who already has traffic, or a discussion site that already has a demand for a debate map, or something like that.
↑ comment by TheAncientGeek · 2017-08-30T11:54:48.417Z · LW(p) · GW(p)
Seconded.
↑ comment by TheAncientGeek · 2016-11-28T14:24:13.141Z · LW(p) · GW(p)
Is the fact that economists mostly cite each other evidence of "cultish in-group favoring biases"?
The behaviour of the Austrian School certainly is.
↑ comment by scarcegreengrass · 2016-11-28T19:26:51.050Z · LW(p) · GW(p)
I have similar uncertainty about the large-scale benefits of lesswrong.com, but on smaller scales I do think the site was very valuable. I've never seen a discussion forum as polite, detailed, charitable, & rigorous as the old Less Wrong.
↑ comment by namespace (ingres) · 2016-11-28T20:38:10.548Z · LW(p) · GW(p)
Spot on in my opinion, and one of the many points I was trying to get at with the 2016 LW Survey. For example, this community seems to have basically ignored Tetlock's latest research, relegating it to the status of a "good book" that SSC reviewed. I wish I'd included a 'never heard of it' button on the communities question because I suspect the vast majority of LessWrongers have never heard of the Good Judgement Project.
I've long felt that Eliezer Yudkowsky's sequences could use somebody going over them with a highlighter and filling in the citations for all the books and papers he borrowed from.
comment by Elo · 2016-11-27T22:19:37.600Z · LW(p) · GW(p)
"It is dangerous to be half a rationalist."
It is dangerous to half-arse this and every other attempt at recovering lesswrong (again).
I take into account the comments before mine which accurately mention several reasons for the problems on lw.
The codebase is not that bad. I know how many people have looked at it; and it's reasonably easy to fix it. I even know how to fix it; but I am personally without the coding skill to implement the specific changes. We are without volunteers willing to make changes; and without funds to pay someone to do them. Trust me. I collated all comments on all of the several times we have tried to collate ideas. We are unfortunately busy people. Working on other goals and other projects.
I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy; it makes its way around the network and is added to our history.
A year from now; crony beliefs may not be easy to find on lesswrong because it was never explicitly posted here in text, but it will still be in the minds of anyone active in the diaspora.
Having said all that; I am more than willing to talk to anyone who wants to work on changes or progress via skype. PM me to make a time. @Anna that includes you.
↑ comment by AnnaSalamon · 2016-11-27T23:24:35.055Z · LW(p) · GW(p)
I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy; it makes its way around the network and is added to our history.
So: this is subtle. But to my mind, the main issue isn't that ideas won't mostly-percolate. (Yes, lots of folks seem to be referring to Crony Beliefs. Yes, Moloch. Yes, etc.) It's rather that there isn't a process for: creating common knowledge that an idea has percolated; having people feel empowered to author a reply to an idea (e.g., pointing out an apparent error in its arguments) while having faith that if their argument is clear and correct, others will force the original author to eventually reply; creating a common core of people who have a common core of arguments/analysis/evidence they can take for granted (as with Eliezer's Sequences), etc.
I'm not sure how to fully explicitly model it. But it's not mostly about the odds that a given post will spread (call that probability "p"). It's more about a bunch of second-order effects that require something like p^4 to be large: you need to have read the post I want to reference (p); I need to know you've read it (~p^2); that needs to hold for a large enough fraction of my audience that I don't have to painfully write my post to avoid being misunderstood by the people who haven't read that one post (maybe ~p^3 or something, depending on the threshold proportion); and so on for each of the "that one posts" I want to reference (again, a slightly higher conjunctive requirement, with the probability correspondingly going down)...
I wish I knew how to model this more coherently.
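One toy way to make the conjunctive effect above concrete: if each reader has independently read a given post with probability p, the chance that a reference to k posts lands with most of the audience falls off sharply in both p and k. The audience size, threshold, and independence assumptions below are invented for illustration, not taken from the comment:

```python
import math

def usable_reference_prob(p: float, k: int = 1, n: int = 50, t: float = 0.8) -> float:
    """Probability that at least a fraction t of an n-person audience has
    read all k referenced posts, if each reader independently has read
    each post with probability p."""
    q = p ** k  # chance one reader has read all k referenced posts
    need = math.ceil(t * n)
    # Binomial tail: P(X >= need) where X ~ Binomial(n, q)
    return sum(math.comb(n, x) * q**x * (1 - q)**(n - x)
               for x in range(need, n + 1))

for p in (0.5, 0.7, 0.9):
    print(f"p={p}: 1 ref -> {usable_reference_prob(p, k=1):.3f}, "
          f"3 refs -> {usable_reference_prob(p, k=3):.3f}")
```

Even generous per-post read probabilities collapse once a post leans on several prior posts at once, which is one reading of why a shared must-read canon matters.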
↑ comment by Viliam · 2016-11-28T13:06:14.909Z · LW(p) · GW(p)
I think I understand what you mean. On one hand it is great to have this fluid network of rationalist websites where everyone chooses the content they prefer to read. We don't have a single point of failure. We can try different writing styles, different moderation styles, etc. The rationalist community can survive and generate new interesting content even when LW is dying and infested by downvoting sockpuppets, and Eliezer keeps posting kitten videos on Facebook (just kidding).
On the other hand, it is also great to have a shared vocabulary; a list of words I can use freely without having to explain them. Because inferential distance is a thing. (For example, LW allows me to type "inferential distance" without having to explain. Maybe I could just use a hyperlink to the origin of the term. But doing it outside of LW includes a risk of people starting to debate the concept of the "inferential distance" itself, derailing the discussion.) The opposite of public knowledge is the Eternal September.
Maybe "Moloch" is an example that meaningful terms will spread across rationalist websites. (Natural selection of rationalist memes?) Maybe hyperlinking the original source is all it takes; linking to SSC is not more difficult than linking to LW Sequences, or Wikipedia. That is, assuming that the concept is clearly explained in one self-contained article. Which is not always the case.
Consider "motte and bailey". I consider it a critical rationalist concept, almost as important as "a map is not the territory". (Technically speaking, it is a narrower version of "a map is not the territory".) I believe it helps me to see more clearly through most political debates, but it can also be applied outside of politics. And what is the canonical link? Oh, this. So, imagine that I am talking with people who are not regular SSC readers, and we are debating something either unrelated to politics, or at least unrelated to the part of politics that the SSC article talks about, but somehow there appears to be a confusion, which could be easily solved by pointing out that this is yet another instance of the "motte and bailey" fallacy, so I just use these words in a sentence, and provide a hyperlink-explanation to the SSC article. What could possibly go wrong? How could it possibly derail the whole debate?
Okay, maybe the situation with "motte and bailey" could be solved by writing a more neutral article (containing a link to the original article) afterwards, and referring to the neutral article. More generally, maybe we could just maintain a separate Dictionary of Terms Generally Considered Useful by the Rationalist Community. Or maybe the dictionary would suffer the same fate as the Sequences; it would exist, but most new people would completely ignore it, simply because it isn't standing in the middle of the traffic.
So I guess there needs to be a community which has a community norm of "you must read this information, or else you are not a valid member of this community". Sounds ugly, when I put it like this, but the opposite is the information just being somewhere without people being able to use it freely in a debate.
↑ comment by TheAncientGeek · 2016-11-28T13:44:18.196Z · LW(p) · GW(p)
And what is the canonical link? Oh, this.
No, this:
↑ comment by entirelyuseless · 2016-11-28T16:08:50.865Z · LW(p) · GW(p)
My problem with the "shared vocabulary" is that as you note yourself here, it implies that something has already been thought through, and it assumes that you have understood the thing properly. So for example if you reject an argument because "that's an example of a motte and bailey fallacy", then this only works if it is in fact correct to reject arguments for that reason.
And I don't think it is correct. One reason why people use a motte and bailey is that they are looking for some common ground with their interlocutor. Take one of Scott's examples, with this motte and bailey:
- God is just the order and love in the universe
- God is an extremely powerful supernatural being who punishes my enemies
When the person asserts #1, it is not because they do not believe #2. It is because they are looking for some partial expression of their belief that the other person might accept. In their understanding, the two statements do not contradict one another, even though obviously the second claims a good deal more than the first.
Now Scott says that #1 is "useless," namely that even if he could theoretically accept the word "God" as applying to this, there is no reason for him to do so, because there is nowhere to go from there. And this might be true. But the fact that #2 is false does not prove that #1 is useless. Most likely, if you work hard, you can find some #3, stronger than #1 but weaker than #2, which will also be defensible.
And it would be right to tell them to do the work that is needed. But it would be wrong to simply say, "Oh, that's a motte and bailey" and walk away.
This is not merely a criticism of this bit of shared vocabulary, so that it would just be a question of getting the right shared vocabulary. A similar criticism will apply to virtually any possible piece of shared vocabulary -- you are always assuming things just by using the vocabulary, and you might be wrong in those assumptions.
↑ comment by SatvikBeri · 2016-11-28T16:28:05.639Z · LW(p) · GW(p)
Making shared vocabulary common and explicit usually makes it faster to iterate. For example, the EA community converged on the idea of replaceability as an important heuristic for career decisions for a while, and then realized that they'd been putting too much emphasis there and explicitly toned it down. But the general concept had been floating around in discussion space already, giving it a name just made it easier to explicitly think about.
↑ comment by entirelyuseless · 2016-11-29T03:17:26.906Z · LW(p) · GW(p)
I think I agree with this in one sense and disagree in another. In particular, in regard to "giving it a name just made it easier to explicitly think about" :
I agree that this makes it easier to reason about, and therefore you might come to conclusions faster and so on, even correctly.
I don't agree that we really made it easier to think about. What we actually did is make it less necessary to think about it at all, in order to come to conclusions. You can see how this works in mathematics, for example. One of the main purposes of the symbols is to abbreviate complicated concepts so that you don't have to think through them every time they come up.
I think the second point here is also related to my objection in the previous comment. However, overall, the first point might be overall more important, so that the benefit outweighs the costs, especially in terms of benefit to a community.
↑ comment by Elo · 2016-11-28T02:39:52.738Z · LW(p) · GW(p)
percolate
What are you using this word to mean? At a guess it sounds like, "ideas will float to the surface" but also it does not always mean that, as used in "has percolated". Percolate relates to filtering of a substance like coffee, to get the good bits from the bad. Can you repeat the above without using this word?
Are we looking to separate and elevate good ideas from the general noise on the interwebs, or are we looking to ensure ideas filter through the diaspora to every little sub group that exists? Or are we looking to filter something else? I am not sure which you are trying to describe.
If you want to reference an earlier post that is well known and well spread, it should be enough to give the name of the concept, i.e. crony beliefs. If you want to reference a less well-known concept, it should be enough to name the author and link to their post, as if I wanted to refer to the list of common human goals and talk about things that relate to it.
I don't see the gravity of the problem you are trying to describe with your concerns.
↑ comment by Paul Crowley (ciphergoth) · 2016-11-27T23:56:45.180Z · LW(p) · GW(p)
I don't think you can say both
The codebase is not that bad.
and
I am personally without the coding skill [...]
If I don't have the skills to fix a codebase, I'm pretty handicapped in assessing it. I might still manage to spot some bad things, but I'm in no shape to pronounce it good, or "not that bad".
↑ comment by Elo · 2016-11-28T02:28:42.510Z · LW(p) · GW(p)
personally without the coding skill
Clarification: I am not a coder any more. I had skill in a few languages, but I can't code any more; mostly I Frankenstein my own Arduino projects out of other people's projects. This means I can now read code and understand it, but not write it. It's not that bad because I read every line of the codebase to get my head around how it works. It's not that bad because when I was trying to explain a fix I could come up with the code for it:
https://github.com/tricycle/lesswrong/issues/574
I just can't check my work or create a pull request.
It's not that bad in that it still definitely works, does not crash very often, doesn't have security holes despite having an open codebase, and is readable to someone with very little coding skill.
↑ comment by Viliam · 2016-12-12T12:43:10.569Z · LW(p) · GW(p)
For a person familiar with Python, reading most of the code, and even suggesting changes is relatively easy. It's just running the whole code on their own computer that is almost impossible.
But that means that when you write the code, you can't see it in action, which means you can't test it, which means that if you made a trivial error, you cannot find it and fix it. You can't debug your code, you can't print the intermediate values; you get zero feedback for what you did. Which means that the contribution is practically useless... unless someone else who can run the whole code on their computer will look at your code and finish it. If you need multiple iterations of this, then a work that would be otherwise done in an afternoon may take weeks. That's inconvenience far beyond trivial.
↑ comment by Kaj_Sotala · 2016-11-28T10:29:02.075Z · LW(p) · GW(p)
It's true that articles pass around the rationalist network, and if you happen to be in it, you're likely to see some such articles. But if you have something that you'd specifically want the rationalist community to see, and you're not already in the network, it's very hard.
Some time back, I had a friend ask me how to promote their book which they thought might be of interest to the rationalist community. My answer was basically "you could start out by posting about it on LW, but not that many people read LW anymore so after that I can help you out by leveraging my position in the community". If they didn't know me, or another insider, they'd have a lot harder time even figuring out what they needed to do.
"The rationalist network" is composed of a large number of people and sites, scattered over Tumblr blogs, Facebook groups and profiles, various individual blogs, and so on. If you want to speak to the whole network, you can't just make a post on LW anymore. Instead you need to spend time to figure out who the right people are, get to know them, and hope that you either get into the inner circle, or that enough insiders agree with your message and take up spreading it.
Heck, even though I count myself as "an insider", I've also frequently wanted a way to specifically address the "rationalist community" about various topics, and then not knowing how. I mean, a lot of people in the community read my Facebook posts so I could just post something on Facebook, but that's not quite the same thing.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T04:17:33.221Z · LW(p) · GW(p)
I'm disappointed that Elo's comment hasn't gotten more upvotes. He put a lot of work into fixing LW, and it seems to me that we should be very eager to listen & learn from him.
(I'm also disappointed that rayalez's comments are being ignored. His previous comment about his project was at -1 until I upvoted it. Seeing this kind of thing makes me cynical. Sometimes it seems like status in the LW community is more about who you know than what you've accomplished or what you're doing for the community.)
Arbital seems like the least half-arsed effort at fixing LW thus far. Maybe we should converge around advising Alexei & team?
↑ comment by Vaniver · 2016-11-27T22:38:04.834Z · LW(p) · GW(p)
A year from now; crony beliefs may not be easy to find on lesswrong because it was never explicitly posted here in text, but it will still be in the minds of anyone active in the diaspora.
Hmm, in that if you forget the name but remember an example from the post, you won't be able to search for it, because the LW page only has the title and comments, as opposed to the full text?
comment by Error · 2016-11-27T16:12:37.266Z · LW(p) · GW(p)
Sarah Constantin, Ben Hoffman, Valentine Smith, and various others have recently mentioned planning to do the same.
Prediction: If they do, we will see a substantial pickup in discussion here. If they don't, we won't.
People go where the content is. The diaspora left LW a ghost town not because nobody liked LW but because all the best content -- which is ever and always created by a relatively small number of people -- went elsewhere. I read SSC, and post on SSC, not because it is better than LW (it's not, its interface makes me want to hit babies with concrete blocks) but because that's where Yvain writes. LW's train wreck of a technical state is not as much of a handicap as it seems.
I like LW-ish content, so I approve of this effort -- but it will only work to the extent that the Royals return.
comment by RyanCarey · 2016-11-27T06:43:11.022Z · LW(p) · GW(p)
Thanks for addressing what I think is one of the central issues for the future of the rationalist community.
I agree that we would be in a much better situation if rationalist discussion were centralized, and that we are instead in a tragedy of the commons - more people would post here if they knew that others would. However, I contend that we're further from that desired equilibrium than you acknowledge. Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:
- Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.
- Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. This is mostly because of (1). (Although posts on LW do reliably get at least a handful of comments and views)
- Comments on LessWrong are more critical and less polite than comments on other sites.
- Posts on LessWrong are held in lower regard in academic communities like ML and policy than posts elsewhere, including on Medium.
The incentive that pushes in our favor is that writers can correctly perceive that by writing here, they are participating in a community that develops very well-informed and considered opinions on academic and future-oriented topics. But that is not enough.
To put this more precisely, it seems to me that the incentive gradient is currently pointing far too steeply away from LessWrong for 'I [and several friends] will try and post and comment here more often...' to be anything like a viable solution.
However, I would not go as far as to say that the whole project is necessarily doomed. I would give the following counterproposals:
- i) Wait for Arbital to build something that serves this purpose, thereby fixing (1)-(4)
- ii) Build a long list of bloggers who will move back (for some reasonable definition) to LessWrong, or some other such site, if >n other bloggers do. It's the "free state project" type approach where once >n people commit, you "trigger the move", thereby fixing the tragedy of the commons dynamic. Maybe one can independently patch (3) in this context by using this as a Schelling point to improve on community norms.
- iii) Raise funds for a couple of competent developers to make a new LessWrong in order to fix (1) and (2).
I think (i) or (ii) would have some reasonable hope of working. Maybe we should wait to figure out whether (i) will occur, and if not, then proceed with (ii) with or without (iii)?
Replies from: AnnaSalamon, AnnaSalamon, TheAltar, casebash
↑ comment by AnnaSalamon · 2016-11-27T08:17:29.936Z · LW(p) · GW(p)
Thoughts on RyanCarey's problems list, point by point:
Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:
Not sure all of them are "problems", exactly. I agree that incentive gradients matter, though.
Comments on the specific "problems":
1 Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.
Insofar as 1 is true, it seems like a genuine and simple bug that is probably worth fixing. Matt Graves is I believe the person to talk to if one has ideas or $ to contribute to this. (Or the Arbital crew, insofar as they're taking suggestions.)
2 Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. [snip]
The extent to which this is a bug depends on the extent to which posts are aimed at "going viral" / getting shared. If our aim is intellectual generativity, then we do want to attract the best minds of the internet to come think with us, and that does require sometimes having posts go viral. But it doesn't require optimizing the average post for that; it in fact almost benefits from having most posts exist in the relative quiet of a stable community, a community (ideally) with deep intellectual context with which to digest that particular post, such that one can often speak to that community without worrying about whether one's points will be intelligible or palatable to newcomers.
Insofar as writers expect on a visceral level that "number of shares" is the useful thing... people will be pulling against an incentive gradient when choosing LW over Facebook. Insofar as writers come to expect on a visceral level that “adding to this centralized conversational project” tracks value, and that number of shares (from parties who don’t then join the conversation, and who don’t carry on their own good intellectual work elsewhere) is mostly a distraction or blinking light… the incentive may actually come to feel different.
People do sometimes do what is hard when they perceive it to be useful.
3 Comments on LessWrong are more critical and less polite than comments on other sites.
I feel there’s an avoidable part of this, which we should avoid; and then an actually useful part of this, which we should keep (and should endeavor to develop positive affect around — when one accurately perceives the usefulness of a thing, it can sometimes come to feel better). See Sarah’s recent post: On Trying Not To Be Wrong
4 Posts on LessWrong are held in lower regard in academic communities like ML and policy than posts elsewhere, including on Medium.
This seems like a bad sign, though I am not sure what to do about it. I don’t think it’s worth compromising the integrity of our conversation for the sake of outside palatability; cross-posting seems plausible; I’d also like to understand it more.
Replies from: Vaniver
↑ comment by AnnaSalamon · 2016-11-27T08:30:43.098Z · LW(p) · GW(p)
(ii) seems good, and worth adding more hands and voices to; it seems to me we can do it in a distributed fashion, and just start adding to LW and going for momentum, though.
sarahconstantin and some others have in fact been doing something like (ii), and this was, I suspect, a partial cause of e.g. this post of mine, and of:
By paulchristiano:
By Benquo:
By sarahconstantin:
Efforts to add to (ii) would I think be extremely welcome; it is a good idea, and I may do more of it as well.
If anyone reading has a desire to revitalize LW, reading some of these or other posts and adding a substantive (or appreciative) comment is another way to encourage thoughtful posting.
Replies from: sarahconstantin, RyanCarey
↑ comment by sarahconstantin · 2016-11-27T10:19:12.604Z · LW(p) · GW(p)
I also support (ii) and have been trying to recruit more good bloggers.
I'll note that good writers tend to be low on "civic virtue" -- creative work tends to cut against that as a motivation. I'm still trying to think of good ways to smooth the incentive gradient for writers.
One possibility is to get some people to spend a weekend together -- rent a place in Big Sur or something -- and brainstorm/hype up some LW-specific ideas together, which will be posted in real time.
Replies from: Vaniver
↑ comment by RyanCarey · 2016-11-27T09:05:12.348Z · LW(p) · GW(p)
I agree that this is great.
I meant to propose something even more specific. Using for example a Google Form, you collect a list of people who agree to post on LW if and only if that list surpasses 200 names.
Once it gets to 200, you email everybody and tell them LW is relaunching.
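The scheme described above is essentially an assurance contract: nobody is committed until enough others are. A minimal sketch in Python, where the `ConditionalCommitment` class and its notification stub are invented for illustration (only the 200-name threshold comes from the comment):

```python
# Hypothetical sketch of the "trigger the move" mechanism: collect
# conditional pledges and fire only once the threshold is crossed.

THRESHOLD = 200  # the number proposed in the comment above


class ConditionalCommitment:
    def __init__(self, threshold=THRESHOLD):
        self.threshold = threshold
        self.signers = set()       # deduplicated pledges
        self.triggered = False

    def sign(self, email):
        """Record a pledge; return True if this signature triggers the move."""
        self.signers.add(email)
        if not self.triggered and len(self.signers) >= self.threshold:
            self.triggered = True
            self.notify_all()
            return True
        return False

    def notify_all(self):
        # Stand-in for actually emailing everyone that LW is relaunching.
        for email in sorted(self.signers):
            print(f"notify {email}: LW is relaunching")
```

In practice a Google Form plus a mail merge would do the same job; the point is just that no individual bears the cost of moving first.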
Do I think it'd work? Maybe.
↑ comment by TheAltar · 2016-11-27T08:06:52.364Z · LW(p) · GW(p)
A separate action that could be taken by bloggers who are interested in it (especially people just starting new blogs) is to continue posting where they do, but disable comments on their posts and link people to the corresponding LW link post to comment on. This is far less ideal, but allows them to post elsewhere and to have the comments content appear here on LW.
Replies from: sarahconstantin
↑ comment by sarahconstantin · 2016-11-27T10:21:45.637Z · LW(p) · GW(p)
This is a nontrivial cost. I'm considering it myself, and am noticing that I'm a bit put off, given that some of my (loyal and reflective) readers/commenters are people who don't like LW, and it feels premature to drag them here until I can promise them a better environment. Plus, it adds an extra barrier (creating an account) to commenting, which might frequently lead to no outside comments at all.
A lighter-weight version of this (for now), might be just linking to discussion on LW, without disabling blog comments.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2016-11-29T12:28:47.062Z · LW(p) · GW(p)
Would you use the LW comments section if it was embeddable, like Disqus is?
comment by Daniel_Burfoot · 2016-11-28T01:09:44.978Z · LW(p) · GW(p)
There are lots of diverse opinions here, but you are not going to get anywhere just by talking. I recommend you do the following:
- Get together a small "LW 2.0" committee that has the authority to make serious changes
- Have committee members debate possible changes and hash out a plan. General community members should have a place to voice their feedback, but shouldn't get a vote per se.
- Once the plan is decided, implement it. Then reconvene the committee every 3 or 6 months to review the status and make incremental fixes.
To say it in a different way: success or failure depends much more on building and empowering a small group of dedicated individuals, than on getting buy-in from a large diffuse group of participants.
Replies from: sarahconstantin, John_Maxwell_IV
↑ comment by sarahconstantin · 2016-11-28T15:17:28.880Z · LW(p) · GW(p)
This is being done.
Replies from: casebash
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T06:16:07.173Z · LW(p) · GW(p)
This is how most companies work: there are employees of the company working full-time on making users as happy as possible. (In this case, I'd guess the users to focus on are users who have a history of making valuable contributions.)
comment by JonahS (JonahSinick) · 2016-11-27T20:20:23.854Z · LW(p) · GW(p)
Brian Tomasik's article Why I Prefer Public Conversations is relevant to this:
I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed). Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort. (At least if we can build up toward in fact having a single locus.)
comment by VipulNaik · 2016-11-29T02:23:05.490Z · LW(p) · GW(p)
I might have missed it, but reading through the comment thread here I don't see prominent links to past discussions. There's LessWrong 2.0 by Vaniver last year, and, more recently, there is LessWrong use, successorship, and diaspora. Quoting from the section on rejoin conditions in the latter:
Replies from: VipulNaik, Loiathal
A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write ins for conditions to rejoin, what did people say they'd need to rejoin the site?
(links to rejoin condition write-ins)
Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for somebody trying to revitalize LessWrong is how you deal with this.
↑ comment by VipulNaik · 2016-11-29T02:43:19.405Z · LW(p) · GW(p)
The impression I form based on this is that the main blocker to LessWrong revitalization is people writing sufficiently attractive posts. This seems to mostly agree with the emerging consensus in the comments, but the empirical backing from the survey is nice. Also, it's good to know that software or interface improvements aren't a big blocker.
As for what's blocking content creators from contributing to LessWrong, here are a few hypotheses that don't seem to have been given as much attention as I'd like:
- Contributing novel content becomes harder as people's knowledge base and expectations grow: Shooting off a speculative missive no longer works in 2016 the way it might have worked in 2011 -- people have already seen a lot of the basic speculation, and need something more substantive to catch their attention. But the flip side is that something that's truly substantive is going to require a lot of work to research and write, and then even more work to simplify and explain elegantly. This problem is stronger on LessWrong because of the asymmetric nature of rewards. On Facebook, you can still shoot off a speculative missive -- it's your own Facebook post -- and you won't get blasted for being unoriginal or boring. A lot of people will like, comment, and share your status if you're famous enough or witty enough. On LessWrong, you'll be blasted more.
- Negative reception and/or lack of reception is more obvious on LessWrong: Due to the karma system of LessWrong, it's brutally obvious when your posts aren't liked enough by people, and/or don't get enough comments. On personal blogs, this is a little harder for outsiders to make out (unless the blogger explicitly makes the signals obvious) and even then, harder to compare with other people's posts. This means that when people are posting things they have heavy personal investment in (e.g., they've spent months working on the stuff) they may feel reluctant to post it on LW and find it upvoted less than a random post that fits more closely in LW norms. The effects are mediated purely through psychological impact on the author. For most starting authors, the audience one reaches through LW, and the diversity of feedback one gets, is still way larger than that one would get on one's own blog (though social media circulation has lessened the gap). But the psychological sense of having gotten "only" three net upvotes compared to the 66 of the top-voted post, can make people hesitant. I remember a discussion with somebody who was disheartened about the lack of positive response but I pointed out that in absolute terms it was still more than a personal blog.
- Commenters' confidence often exceeds their competence, but the commenters still sound prima facie reasonable: On newspaper and magazine blogs, the comments are terrible, but they're usually obviously terrible. Readers can see them and shrug them off. On LessWrong, star power commenters often make confident comments that seem prima facie reasonable yet misunderstand the post. This is particularly the case as we move beyond LW's strong areas and into related domains, which any forum dedicated to applying rationality to the real world should be able to do. The blame here isn't solely on the commenters who make the mistaken assertions but also on the original post for not being clear enough, and on upvoters for not evaluating things carefully enough. Still, this does add to the burden of the original poster, who now has to deal with potential misconceptions and misguided but confident putdowns that aren't prima facie wrong. Hacker News has a similar problem though the comments on HN are more obviously bad (obviously ill-informed uncharitable criticism) so it might be less of a problem there.
- Commitment to topics beyond pet rationality topics isn't strong and clear enough: LessWrong is fairly unique as a forum with the potential for reasonably high quality discussion of just about any topic (except maybe politics and porn and sex stuff). But people posting on non-pet topics aren't totally sure how much their post belongs on LessWrong. A more clear embrace of "all topics under the sun" -- along with more cooperative help from commenters to people who post on non-conventional topics -- can help.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-29T11:25:04.584Z · LW(p) · GW(p)
I compiled some previous discussion here, but the troll downvoted it below visibility (he's been very active in this thread).
Crazy idea to address point #2: What if posts were made anonymously by default, and only became nonymous once they were upvoted past a certain threshold? This lets you take credit if your post is well-received while lessening the punishment if your post is poorly received.
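The idea above amounts to gating the author field on a karma threshold. A toy sketch, with an invented `display_author` helper and an arbitrary reveal threshold (nothing here reflects actual LW internals):

```python
# Hypothetical "anonymous until upvoted" display logic: the author's name
# is shown only once the post's karma clears a reveal threshold.

REVEAL_THRESHOLD = 10  # arbitrary; any community-chosen cutoff would do


def display_author(post):
    """Show the real author only once karma clears the reveal threshold."""
    if post["karma"] >= REVEAL_THRESHOLD:
        return post["author"]
    return "Anonymous"
```

This keeps the upside (credit for well-received posts) while capping the downside (a poorly received post stays anonymous), which is exactly the asymmetry the proposal targets.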
Replies from: VipulNaik
↑ comment by Lumifer · 2016-11-29T17:59:41.537Z · LW(p) · GW(p)
Commenters' confidence often exceeds their competence
Sometimes that's deliberate. It is well known that the best way to get teh internets to explain things to you is not to ask for an explanation, but to make a confident though erroneous claim.
Replies from: John_Maxwell_IV, VipulNaik
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-30T12:45:14.862Z · LW(p) · GW(p)
It is well known that the best way to get teh internets to explain things to you is not to ask for an explanation, but to make a confident though erroneous claim.
I've noticed you using this strategy in the past. It makes me frustrated with you, but I want to uphold LW's norms of politeness in conversation, so I grit my teeth through the frustration and politely explain why you're wrong. This drains my energy and makes me significantly less enthusiastic about using LW.
Please stop.
Replies from: Lumifer
↑ comment by Lumifer · 2016-11-30T15:35:46.315Z · LW(p) · GW(p)
I don't make deliberately erroneous claims (unless I'm trolling which happens very rarely on LW and is obvious). I sometimes make claims without describing my confidence in them which, I think, is pretty normal. Offering an observation or an assertion up for debate so that it may be confirmed or debunked is one of the standard ways of how conversations work.
I am not sure what you want me to do. My comments are already peppered with "I think...", and "seems to me...", and other expressions like that. Would you like me to make no errors? I would gladly oblige if only you show me how.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-12-01T09:25:49.609Z · LW(p) · GW(p)
I'll try to give you more specific feedback if I get frustrated by your comments again in the future.
↑ comment by VipulNaik · 2016-11-29T18:38:22.840Z · LW(p) · GW(p)
It could also be a good way for the Internets to give up on trying to talk in a forum where you are around.
Replies from: btrettel, Lumifer
↑ comment by btrettel · 2016-11-29T21:13:15.162Z · LW(p) · GW(p)
According to 538's survey more people reported that they comment to fix errors than anything else.
This doesn't mean that you're wrong, though, because it doesn't seem 538 asked why people stop commenting (based on my skim of the article; feel free to correct me).
↑ comment by Lumifer · 2016-11-29T20:53:23.379Z · LW(p) · GW(p)
Why would teh internets be scared by the presence of lil' ol' me? I am very ignorable and have no desire to sealion. Not wanting to talk to me is perfectly fine.
Replies from: Jacobian, gjm
↑ comment by Jacob Falkovich (Jacobian) · 2016-11-30T18:51:07.096Z · LW(p) · GW(p)
Because we're talking about the quality of discussion on LW and how to encourage people to post more good stuff. Whether or not you're OK with people ignoring your trollishness, trollishness lowers the quality of discussion and discourages people from posting. If you persist at it, you are choosing personal gain (whether provocation or learning stuff) over communally beneficial norms. And you're not "lil' ol' me" when you're in the top 5 of commenters month in and month out.
"Feel free to ignore me" IS sealioning, because when people react to you in a way you didn't want (for example, they get angry or frustrated) you accept no blame or responsibility for it. The first comment I got to a post about empathy and altruism was you telling me that my recommendation leads to kulaks, ghettos and witch burning (I'm being uncharitable, but not untruthful). If I am then discouraged from posting new stuff, will you say that it's entirely my fault for being too sensitive and not ignoring you?
Replies from: g_pepper, Lumifer
↑ comment by g_pepper · 2016-11-30T20:08:28.244Z · LW(p) · GW(p)
you are choosing personal gain ... over communally beneficial norms. ... And you're not "lil' ol' me" when you're in the top 5 of commenters month in and month out.
By the same token, doesn't being in the top 5 of commentators regularly suggest that a person is not really too far outside of community norms?
IMO there is a difference between trolling and blunt but rational commentary, and the example you linked to above (involving kulaks and the like) is blunt but rational commentary (and frankly, it was not excessively blunt); there is a good case to be made for emotional human empathy acting as a check on utilitarianism running awry. The 20th century provides several examples of utopian projects ending badly, and it seems to me useful to ask if removing emotional empathy from the moral calculation is a good idea.
If I am then discouraged from posting new stuff, will you say that it's entirely my fault for being too sensitive and not ignoring you?
IMO, that is a false dichotomy - (being discouraged from posting new stuff vs. ignoring disagreeing posts). A third option is to read the disagreeing post, think about it, respond to it if you deem doing so worthwhile, and move on, while recognizing that divergent viewpoints exist.
My fear is that if comments like Lumifer's Kulak comment are discouraged for fear of discouraging future postings, LW is at risk of becoming an echo chamber.
Replies from: Jacobian
↑ comment by Jacob Falkovich (Jacobian) · 2016-11-30T23:05:12.148Z · LW(p) · GW(p)
As you've noticed in that thread, I didn't cry that Lumifer offended me. I replied to his comment and we ended up having a semi-productive discussion on empathy, coercion and unintended consequences. If bringing that specific example up reads as concern trolling on my part, I apologize.
I wanted to make a more general point: I do recognize that there's a trade off to be made between criticism and niceness, both of which are needed for a good discussion. I'm also OK if you think LW is too nice and the comments should be harsher. The directness of criticism is one of my favorite things about LW, along with overall commitment to free speech. But I also care about practical outcomes on discussion quality, not abstract ideology.
I think that there's an important distinction between the following two positions:
- "I made a blunt comment because I judged that criticism is more important than niceness in this specific case."
- "I made a blunt comment and niceness is not my concern at all, because other people are free to ignore me."
I think that an environment where people hold #1 produces better discussion. And unless I'm corrected, it seems like Lumifer espouses #2.
Replies from: g_pepper
↑ comment by g_pepper · 2016-12-01T01:33:34.883Z · LW(p) · GW(p)
As you've noticed in that thread, I didn't cry that Lumifer offended me. I replied to his comment and we ended up having a semi-productive discussion on empathy, coercion and unintended consequences.
Yes I did notice. That is why that particular exchange was a great example of how one need neither ignore nor be discouraged by a comment like Lumifer's kulak comment; instead, allow the comment to engender a useful dialog.
I'm also OK if you think LW is too nice and the comments should be harsher.
No, I don't think that. I really like the quality of the comments on LW, that is why I come here. However, I think that Lumifer's comments are within the range of LW community norms. One thing I like about LW is that there exists a diversity of commenting styles just as it has a diversity of viewpoints on various subjects. An example of another high-karma commentator with a style (and opinions) that are quite different from Lumifer's is gjm. IMO both commentators make thoughtful, valuable contributions to LW, albeit their styles are quite different; I think that LW benefits from both commentators' styles and opinions, and the distinct styles and opinions of many others as well. Note that I am in favor of community norms, but I feel that Lumifer's comments are within those norms.
I think that there's an important distinction between the following two positions... And unless I'm corrected, it seems like Lumifer espouses #2
IMO, Lumifer is not in category 2. Using the kulak comment again as an illustrative example, it seems to me that the comment was in no way a personal attack on you or anyone else and was not what I would classify as "not nice". It seems to me that the specific examples he chose did bring clarity to the discussion in a way that voicing an abstract objection or a less extreme example would not have. IMO Stalin's dekulakization (which is I suppose what Lumifer was referring to) really is the sort of thing that can happen more easily when an idealized (albeit flawed) utilitarian goal is pursued in the absence of emotional empathy. In short, I suspect that the examples were selected because they effectively made the point that Lumifer intended to make rather than because Lumifer was trying to offend or troll.
↑ comment by Lumifer · 2016-11-30T20:06:23.958Z · LW(p) · GW(p)
trollishness
I don't accept that I'm trollish. Trolling is basically pushing buttons to get an emotional reaction, mostly for the lulz. I'm not interested in triggering emotional reactions in people two screens away from me and LW isn't a fertile ground for the lulz, anyway.
I will confess to the propensity to make my arguments forcefully. I count it as a feature and not a bug. I will also confess to liking extreme and exaggerated examples -- the reason is that in some situations I want to trade off nuance against clarity and obviousness.
As to discouraging people from posting, I do want to discourage people from posting low-quality stuff. I see nothing wrong with that.
when people react to you in a way you didn't want (for example, they get angry or frustrated) you accept no blame or responsibility for it
Generally speaking, yes. I am not your mother, your nanny, or your mentor and making sure you're emotionally comfortable is not one of my duties. I also reject the popular political correctness / social justice notion that it's sufficient for the listener to claim offense (or some other variety of victimhood) to put all the responsibility/blame on the speaker.
will you say that it's entirely my fault for being too sensitive
I wouldn't put it in terms of "fault" and I don't know about you personally, but yes, I think that some chunk of the LW population is too thin-skinned and would greatly benefit from a dose of HTFU.
Note, though, that I don't consider it my obligation to go out of the way to provide that dose (see above re being a nanny). I just don't think that being particularly thin-skinned gives you any special rights.
Replies from: Jacobian
↑ comment by Jacob Falkovich (Jacobian) · 2016-11-30T23:25:25.945Z · LW(p) · GW(p)
I also reject the popular political correctness / social justice notion that it's sufficient for the listener to claim offense (or some other variety of victimhood) to put all the responsibility/blame on the speaker.
I'm pretty sure I didn't write anything to suggest that the blame is all the speakers', and yet you seem to have read it this way. Who's responsible for this misunderstanding? I hope we can both agree that the responsibility is shared between speaker and listener, it can't work any other way in a dialogue when both people alternate roles. And when you write something in direct criticism of someone (and not some general statement), you are engaged in dialogue.
Now it also seems to me that "political correctness/SJ culture" is basically a pejorative on LW, but I'll take your word that you're not trying to push buttons by comparing me to them. Instead I'll just remind you that reversed stupidity is not intelligence, and being careless about offending people is not correlated with truth seeking. I support the Buddhist Victorian Sufi standard of SSC, and kindness is 33% of that standard.
Replies from: Lumifer
↑ comment by Lumifer · 2016-12-01T15:38:56.220Z · LW(p) · GW(p)
Who's responsible for this misunderstanding?
Both are responsible for the misunderstanding, but only one of them is responsible for his own anger and frustration.
reversed stupidity is not intelligence, and being careless about offending people is not correlated with truth seeking
I agree. But note that "not correlated" is different from "negatively correlated". As in "being very careful to not offend people is negatively correlated with truth-seeking" :-P
I like the SSC standard, too, but notice that it's very flexible and can be bent into many different shapes :-/ And, of course, once in a while Yvain declares a reign of terror.
↑ comment by gjm · 2016-11-30T11:46:31.832Z · LW(p) · GW(p)
Who said anything about scared? Or for that matter about you?
Someone in the habit of making confident erroneous claims may start to get ignored for being a blowhard even if no one is scared of them.
Replies from: Lumifer
↑ comment by Lumifer · 2016-11-30T15:28:56.501Z · LW(p) · GW(p)
Or for that matter about you?
Here: :-)
I've noticed you using this strategy in the past
And, as I mentioned, I'm perfectly fine with being ignored.
Replies from: gjm
↑ comment by gjm · 2016-11-30T17:34:57.346Z · LW(p) · GW(p)
Here: :-)
Ah.
I'm perfectly fine with being ignored.
Fair enough, but some other people contemplating using the same technique might be less so.
Replies from: Lumifer
↑ comment by Lumifer · 2016-11-30T17:48:04.332Z · LW(p) · GW(p)
some other people contemplating using the same technique might be less so
Feel free to point out to those some other people their shortcomings, then. I hope you don't think I'm a role model, do you now? X-)
Replies from: gjm
↑ comment by gjm · 2016-11-30T17:58:59.897Z · LW(p) · GW(p)
I don't really believe in role models. Anyway, I wasn't intending to point out any person's shortcomings; I was agreeing with VipulNaik's misgivings about the technique.
(To be more concrete, "doing X may get you ignored as a blowhard" is a criticism of doing-X, not a criticism of someone who either does X or contemplates doing X.)
Replies from: Lumifer
↑ comment by Lumifer · 2016-11-30T18:07:31.024Z · LW(p) · GW(p)
Sure, one might come across as a blowhard. But one might also come across as someone who can be persuaded by evidence to change his mind without a lot of kicking and screaming.
This is really about reputation management in an online community, a complicated topic.
comment by casebash · 2016-11-27T14:39:46.652Z · LW(p) · GW(p)
I know that there have been several attempts at reviving Less Wrong in the past, but these haven't succeeded, because a site needs content to thrive and generating high-quality content is both extremely hard and extremely time-intensive.
I agree with Alexandros that Eliezer's ghost is holding this site back - you need to talk to Eliezer and ask if he would be willing to transfer control of this site to CFAR. What we need at the moment is clear leadership, a vision and resources to rebuild the site.
If you produced a compelling vision of what Less Wrong should become, I believe that there would be people willing to chip in to make this happen.
EDIT: The fact that this got promoted to main seems to indicate that there is a higher probability of this working than previous attempts at starting this discussion.
comment by steven0461 · 2016-11-27T22:17:38.873Z · LW(p) · GW(p)
I agree with your comments on small intellectually generative circles and wonder if the optimal size there might not be substantially smaller than LW. It's my sense that LW has been good for dissemination, but most of the generation of thoughts has been done in smaller IRL circles. A set of people more selected for the ability and will to focus on the problem you describe in 1-3, if gathered in some internet space outside LW, might be able to be a lot more effective.
comment by Danny_Hintze · 2016-11-27T19:00:42.932Z · LW(p) · GW(p)
I think we need to put our money and investment where our mouths are on this. Either Less Wrong (or another centralized discussion platform) is very valuable and worth tens of thousands of dollars in investment and moderation, or it is not that important and not worth it. It seems that every time we have a conversation about Less Wrong and the importance of it, the problem is that we expect everyone to do things on a volunteer basis and things will just magically get going again. It seems like Less Wrong was going great back when there was active and constant investment in it by MIRI and CFAR, and once that investment stopped things collapsed.
Otherwise we are just in a situation like that of Jaguar with the cupholders, where everyone is posting on forums for 10 years about how we need cupholders, but there is no one whose actual, paid job is to get cupholders in the cars.
Replies from: RyanCarey↑ comment by RyanCarey · 2016-11-27T21:48:12.676Z · LW(p) · GW(p)
The list of plausibly worthwhile changes that would help to revitalize LessWrong is long:
- redesigning LW's appearance
- cleaning up the codebase
- forming a new moderation team
- producing a bunch of new content
- removing the main/discussion distinction
- choosing one or more people to take full leadership of the project
- (maybe) recentering the list of topics for discussion to include more about EA, tech or politics
- (maybe) allow more links, rather than just posts
- rebranding
- getting many people to join at once
Effort might be superlinear here - once you commit to a few, you might just want to bite the bullet and build a new damned site.
That's going to cost time and dollars - maybe hundreds of thousands, but if it's what has to be done...
comment by owencb · 2016-11-27T10:16:17.177Z · LW(p) · GW(p)
I think I disagree with your conclusion here, although I'd agree with something in its vicinity.
One of the strengths of a larger community is the potential to explore multiple areas in moderate amounts of depth. We want to be able to have detailed conversations on each of: e.g. good epistemic habits; implications of AI; distributions of cost-effectiveness; personal productivity; technical AI safety; ...
It asks too much for everyone to keep up with each of these conversations, particularly when each of them can spawn many detailed sub-conversations. But if they're all located in the same place, it's hard to browse through to find the parts that you're actually trying to keep up with.
So I think that we want two things:
- Separate conversational loci for each topic
- A way of finding the best material to get up to speed on a given topic
For the first, I find myself thinking back to days of sub-forums on bulletin boards (lack of nested comments obviously a big problem there). That way you could have the different loci gathered together. For the second, I suspect careful curation is actually the right way to identify this content, but I'm not sure what the best way to set up infrastructure for this is.
Replies from: AnnaSalamon, owencb↑ comment by AnnaSalamon · 2016-11-27T21:11:52.834Z · LW(p) · GW(p)
It seems to me that for larger communities, there should be both: (a) a central core that everyone keeps up on, regardless of subtopical interest; and (b) topical centers that build in themselves, and that those contributing to that topical center are expected to be up on, but that members of other topical centers are not necessarily up on. (So that folks contributing to a given subtopical center should be expected to be keeping up with both that subtopic, and the central canon.)
It seems to me that (a) probably should be located on LW or similar, and that, if/as the community grows, the number of posts within (a) can remain capped by some "keep up withable" number, with quality standards rising as needed.
Replies from: owencb↑ comment by owencb · 2016-11-27T22:39:09.284Z · LW(p) · GW(p)
Your (a) / (b) division basically makes sense to me.[*] I think we're already at the point where we need this fracturing.
However, I don't think that the LW format makes sense for (a). I'd probably prefer curated aggregation of good content for (a), with fairly clear lines about what's in or out. It's very unclear what the threshold for keeping up on LW should be.
Also, I quite like the idea of the topical centres being hosted in the same place as the core, so that they're easy to find.
[*] A possible caveat is dealing with new community members nicely; I haven't thought about this enough so I'm just dropping a flag here.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2016-11-28T15:45:42.348Z · LW(p) · GW(p)
I quite like the idea of the topical centres being hosted in the same place as the core, so that they're easy to find.
Also it makes it easy for mods to enforce the distinction. Instead of "I think this post and discussion is not suited for this place, could you delete it and take it elsewhere?" it can just be "This should actually be over in sub-forum X, so I've moved it there."
comment by Fluttershy · 2016-11-27T09:40:27.951Z · LW(p) · GW(p)
It was good of you to write this post out of a sense of civic virtue, Anna. I'd like to share a few thoughts on the incentives of potential content creators.
Most humans, and most of us, appreciate being associated with prestigious groups, and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or about LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of remarks with discourse that affirmed the worth of LessWrong posters would incentivize more collaboration on this site.
I'm not sure if this implies that we should shift to a platform that doesn't have the taint of "LessWrong is dead" associated with it. Maybe we'll be ok if a selection of contributors who are highly regarded in the community begin or resume posting on the site. Or, perhaps this implies that the content creators who come to whatever locus of discussion is chosen should be praised for being virtuous by contributing directly to a central hub of knowledge. I'm sure that you all can think of even better ideas along these lines.
comment by shev · 2016-11-27T20:14:31.234Z · LW(p) · GW(p)
Here's an opinion on this that I haven't seen voiced yet:
I have trouble being excited about the 'rationalist community' because it turns out it's actually the "AI doomsday cult", and never seems to get very far away from that.
As a person who thinks we have far bigger fish to fry than impending existential AI risk - like problems with how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions / collective decision-making process, or how climate change or war might destroy our relatively-peaceful global state before AI even exists - I find that I have little desire to try to contribute here. Being a member of this community seems to require buying into the AI-thing, and I don't, so I don't feel like a member.
(I'm not saying that AI stuff shouldn't be discussed. I'd like it to dominate the discussion a lot less.)
I think this community would have an easier time keeping members, not alienating potential members, and getting more useful discussion done, if the discussions were more located around rationality and effectiveness in general, instead of the esteemed founder's pet obsession.
Replies from: Vaniver, shev↑ comment by Vaniver · 2016-11-27T21:03:23.943Z · LW(p) · GW(p)
Being a member of this community seems to requiring buying into the AI-thing, and I don't so I don't feel like a member.
I don't think it's true that you need to buy into the AI-thing to be a member of the community, and so I think the fact that it seems that way is a problem.
But I think you do need to be able to buy into the non-weirdness of caring about the AI-thing, and that we may need to be somewhat explicit about the difference between those two things.
[This isn't specific to AI; I think this holds for lots of positions. Cryonics is probably an easy one to point at that disproportionately many LWers endorse but is seen as deeply weird by society at large.]
comment by Gordon Seidoh Worley (gworley) · 2016-11-27T21:31:49.795Z · LW(p) · GW(p)
As someone who is actively doing something in this direction at Map and Territory, a couple thoughts.
A single source is weak in several ways. In particular, although it may sound nice and convenient from the inside, no major movement that affects a significant portion of the population has a single source. It may have its seed in a single source, but it is spread and diffuse and made up of thousands of voices saying different things. There's no one place to go for social justice or neoreaction or anything else, but there are lots of voices saying lots of things in lots of places. Some voices are louder and more respected than others, true, but success at spreading ideas means loss of centralization of the conversation.
A single source also restricts you to the choices of that source. Don't like the editorial choices and you don't have anywhere else to go. The only way to include everyone is to be like reddit and federate editorial power.
If I'm totally honest I think most desire to revitalize LW is about a nostalgia for what LW once was. I freely admit I even played on this nostalgia in the announcement of Map and Territory.
http://lesswrong.com/lw/o0u/map_and_territory_a_new_rationalist_group_blog/
I also suspect there is a certain amount of desire for personal glory. Wouldn't it be high status to be the person who was the new center of the rationalist community? So as much as people may not like to admit it, I suspect these kinds of calls for a new, unified thing play at least a little bit on people's status-seeking desires. I have nothing against this if it creates the outcomes you want, but it's worth considering if it's also prohibiting coordination.
What seems to matter is spreading ideas that we/you believe will make the world better (though to be clear I don't personally care about that: I just like when my own thinking is influential on others). To this end having more content on LW is helpful, but only in so far as more content is helpful in general. Visibility for that content is probably even more important than the self-judged quality of the content itself.
I agree with Anna's sentiment, but I'd encourage you not to spin your wheels trying to recreate the LessWrong that once existed. Create new things you want to exist to spread the ideas you want to see others take up.
Replies from: SatvikBeri, Jacobian↑ comment by SatvikBeri · 2016-11-27T21:53:35.847Z · LW(p) · GW(p)
100% centralization is obviously not correct, but 100% decentralization seems to have major flaws as well; for example, it makes discovery, onboarding, and progress in discussion a lot harder.
On the last point: I think the LW community has discovered ways to have better conversations, such as tabooing words. Being able to talk to someone who has the same set of prerequisites allows for much faster, much more interesting conversation, at least on certain topics. The lack of any centralization means that we're not building up a set of prerequisites, so we're stuck at conversation level 2 when we need to achieve level 10.
↑ comment by Jacob Falkovich (Jacobian) · 2016-11-30T19:06:13.553Z · LW(p) · GW(p)
I also suspect there is a certain amount of desire for personal glory. Wouldn't it be high status to be the person who was the new center of the rationalist community? So as much as people may not like to admit it, I suspect these kinds of calls for a new, unified thing play at least a little bit on people's status-seeking desires.
That's a good point, but I also want to offer that I don't personally see this as a huge problem for LW. Maybe it's because I'm a latecomer, but I never really cared or kept track of who was high status on LW. First of all, I imagine that a lot of the status hierarchy is settled in real-life interactions and not by counting karma. We're all in Eliezer's shadow anyway.
I just want LW to be great again. I don't mind donating money to a small group of people who will take responsibility for making it great again. I certainly don't mind letting this small group get glory and status, especially if getting paid in status will get us a discount on the monetary cost :)
comment by Morendil · 2016-11-27T09:49:58.890Z · LW(p) · GW(p)
We have lately ceased to have a "single conversation" in this way.
Can we hope to address this without understanding why it happened?
What are y'all's theories of why it happened?
Replies from: John_Maxwell_IV, sarahconstantin, SatvikBeri↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-27T13:57:53.453Z · LW(p) · GW(p)
There has been lots of discussion of this. This is probably at least the tenth thread on why/how to fix LW.
http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/
http://lesswrong.com/r/discussion/lw/nf2/lesswrong_potential_changes/
http://lesswrong.com/lw/n0l/lesswrong_20/
http://lesswrong.com/lw/n9b/upcoming_lw_changes/
https://wiki.lesswrong.com/index.php?title=Less_Wrong_2016_strategy_proposal
http://lesswrong.com/lw/nkw/2016_lesswrong_diaspora_survey_results/
http://lesswrong.com/lw/mbd/lesswrong_effective_altruism_forum_and_slate_star/
http://lesswrong.com/lw/mcv/effectively_less_altruistically_wrong_codex/
http://lesswrong.com/lw/m7g/open_thread_may_18_may_24_2015/cdfe
http://lesswrong.com/lw/kzf/should_people_be_writing_more_or_fewer_lw_posts/
http://lesswrong.com/lw/not/revitalizing_less_wrong_seems_like_a_lost_purpose/
http://lesswrong.com/lw/np2/revitalising_less_wrong_is_not_a_lost_purpose/
http://lesswrong.com/lw/o7b/downvotes_temporarily_disabled/
http://lesswrong.com/lw/oho/thoughts_on_operation_make_less_wrong_the_single/
(These are just the ones I recall, and they don't include all the posts Eugene generated or the discussion in Slack.)
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2016-11-28T03:39:31.934Z · LW(p) · GW(p)
One thought that occurs to me re: why this discussion tends to fail, and why Less Wrong has trouble getting things done in general, is the forum structure. On lots of forums, contributing to a thread will cause the thread to be "bumped", which gives it additional visibility. This means if a topic is one that many people are interested in, you can have a sustained discussion that does not need to continually be restarted from scratch. Which creates the possibility of planning out and executing a project. (I imagine the linear structure of an old school forum thread is also better for building up knowledge, because you can assume that the person reading your post has already read all the previous posts in the thread.)
A downside of the "bump" mechanic is that controversial threads which attract a lot of comments will receive more attention than they deserve. So perhaps an explicit "sticky" mechanic is better. (Has anyone ever seen a forum where users could vote on what posts to "sticky"?)
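For concreteness, the two ordering mechanics discussed above (activity "bumps" plus a user-voted "sticky" threshold) could be sketched roughly like this. All names, the threshold, and the data model are my own illustrative assumptions, not any real forum's codebase:

```python
from dataclasses import dataclass


@dataclass
class Thread:
    title: str
    last_activity: float   # Unix timestamp of the newest comment (the "bump" time)
    sticky_votes: int = 0  # user votes to pin this thread to the top


def order_threads(threads, sticky_threshold=10):
    """Threads with enough sticky votes come first; within each group,
    the most recently bumped thread floats to the top."""
    stickied = [t for t in threads if t.sticky_votes >= sticky_threshold]
    normal = [t for t in threads if t.sticky_votes < sticky_threshold]
    newest_first = lambda t: -t.last_activity
    return sorted(stickied, key=newest_first) + sorted(normal, key=newest_first)
```

One appeal of the user-voted sticky (versus pure bumping) is visible here: a controversial thread can inflate `last_activity` endlessly, but it only reaches the pinned section if enough users deliberately vote it there.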
↑ comment by sarahconstantin · 2016-11-27T10:27:46.868Z · LW(p) · GW(p)
#1: the general move of the internet away from blogs and forums and towards social media.
In particular, there seems to be a mental move that people make, that I've seen people write about quite frequently, of wanting to avoid the more "official"-seeming forms of online discussion, and towards more informal places. From blogging to FB, from FB to Tumblr and Twitter, and thence to Snapchat and other stuff I'm too old for. Basically, people say that they're intimidated to talk on the more official, public channels. I get a sense of people feeling hassled by unfriendly commenters, and also a sense of something like "kids wanting to hang out where the grownups aren't", except that the "kids" here are often adults themselves. A sense that you'll be judged if you do your honest best to write what you actually believe, in front of people who might critique it, and so that it's safer to do something that leaves you less exposed, like sharing memes.
I think the "hide, go in the darkness, do things that you can't do by daylight" Dionysian kind of impulse is not totally irrational (a lot of people do have judgmental employers or families) but it's really counterproductive to discourse, which is inherently an Apollonian, daylight kind of activity.
Replies from: steven0461, Morendil, gworley↑ comment by steven0461 · 2016-11-27T21:31:15.611Z · LW(p) · GW(p)
To me, the major advantage of social media is they make it easy to choose whose content to read. A version of LW where only my 25 favorite posters were visible would be exciting where the current version is boring. (I don't think that's a feasible change, but maybe it's another data point that helps people understand the problem.)
Replies from: Evan_Gaensbauer, AnnaSalamon↑ comment by Evan_Gaensbauer · 2016-11-28T06:10:03.842Z · LW(p) · GW(p)
You can already do this. If you click on a user's profile, there will be a little box in the top right corner. Click on the button that says "add to friends" there. When you "friend" someone on LessWrong, it just means you follow them. If you go to www.lesswrong.com/r/friends, there's a feed with submissions from only the other users you're following.
Replies from: steven0461↑ comment by steven0461 · 2016-11-28T06:17:38.003Z · LW(p) · GW(p)
Cool, thanks, but it looks like that's posts only, not comments.
↑ comment by AnnaSalamon · 2016-11-27T22:23:15.888Z · LW(p) · GW(p)
Ignoring the feasibility question for a minute, I'm confused about whether it would be desirable (if feasible). There are some obvious advantages to making it easy for people to choose what to read. And as a general heuristic, making it easy for people to do things they want to do seems usually good/cooperative. But there are also strong advantages to having common knowledge of particular content/arguments (a canon; a single thread of assumed "yes that's okay to assume and build on"); and making user displays individual (as e.g. Facebook does) cuts heavily against that.
(I realize you weren't talking about what was all-things-considered desirable, only about what feels exciting/boring.)
Replies from: steven0461↑ comment by steven0461 · 2016-11-27T22:39:38.328Z · LW(p) · GW(p)
That seems like an important set of concerns, but I'm also not sure how much people are letting lack of canonicity bother them in choosing what to cite and reply to; popular content will become canon through mechanisms other than the front page, and the more canon there exists, the harder it will be to take it as common knowledge. User-picked content is to some extent also compatible with canon, e.g. through social pressure to read a general "best of" feed. (Just to be clear, though, I don't think this is probably the way we should go / the best use of resources.)
↑ comment by Morendil · 2016-11-27T10:38:52.785Z · LW(p) · GW(p)
Yes, and this would be a general trend - affecting all community blogs to some extent. I was looking for an explanation for the downfall of LessWrong specifically, but I suppose it's also interesting to consider general trends.
Would you say that LessWrong is particularly prone to this effect, and if so because of what properties?
Replies from: sarahconstantin↑ comment by sarahconstantin · 2016-11-27T10:52:41.271Z · LW(p) · GW(p)
Specifically, I think that LW declined from its peak by losing its top bloggers to new projects. Eliezer went to do AI research full-time at MIRI, Anna started running CFAR, various others started to work on those two organizations or others (I went to work at MetaMed). There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.
One trend I dislike is that highly competent people invariably stop hanging out with the less-high-status, less-accomplished, often younger, members of their group. VIPs have a strong temptation to retreat to a "VIP island" -- which leaves everyone else short of role models and stars, and ultimately kills communities. (I'm genuinely not accusing anybody of nefarious behavior, I'm just noting a normal human pattern.) Like -- obviously it's not fair to reward competence with extra burdens, I'm not that much of a collectivist. But I think that potentially human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.
Replies from: Vaniver, kechpaja, Morendil↑ comment by Vaniver · 2016-11-27T17:31:57.513Z · LW(p) · GW(p)
But I think that potentially human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.
As for why this is a problem for LW specifically, I would probably point at age. The full explanation is too long for this comment, and so may become a post, but the basic idea is that 'career consolidation' is a developmental task that comes before 'generativity', or focusing mostly on shepherding the next generation, which comes before 'guardianship', or focusing mostly on preserving the important pieces of the past.
The community seems to have mostly contracted because people took the correct step of focusing on the next stage of their development, but because there hadn't been enough people who had finished previous stages of their development, we didn't have enough guardians. We may be able to build more directly, but it might only work the long way.
Replies from: Alexei↑ comment by kechpaja · 2016-11-27T11:48:15.586Z · LW(p) · GW(p)
To expand on what sarahconstantin said, there's a lot more this community could be doing to neutralize status differences. I personally find it extremely intimidating and alienating that some community members are elevated to near godlike status (to the point where, at times, I simply cannot read e.g. SSC or anything by Eliezer — I'm very, very celebrity-averse).
I've often fantasized about a LW-like community blog that was entirely anonymous (or nearly so), so that ideas could be considered without being influenced by people's perceptions of their originators (if we could solve the moderation/trolling problem, that is, to prevent it from becoming just another 4chan). A step in the right direction that might be a bit easier to implement would be to revamp the karma system so that the number of points conferred by each up or down vote was inversely proportional to the number of points that the author of the post/comment in question had already accrued.
The thing is, in the absence of something like what I just described, I'm skeptical that it would be possible to prevent the conversation from quickly becoming centered around a few VIPs, with everyone else limited to commenting on those individuals' posts or interacting with their own small circles of friends.
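The diminishing-vote-weight karma rule proposed above might look something like the following sketch. The particular formula and scaling constant are my own assumptions for illustration, not anything LessWrong implements:

```python
def vote_weight(author_karma, scale=100.0):
    """Weight of one up/down vote on an author's content: worth 1.0 for a
    brand-new author, and shrinking toward 0 as their karma accumulates."""
    return scale / (scale + max(author_karma, 0.0))


def apply_vote(author_karma, direction):
    """Apply a single vote; direction is +1 for an upvote, -1 for a downvote."""
    return author_karma + direction * vote_weight(author_karma)
```

Under this rule a vote for a new author moves their karma by a full point, while a vote for someone at 100 karma moves it by only half a point, which is the intended flattening of the status hierarchy.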
↑ comment by Morendil · 2016-11-27T11:11:26.584Z · LW(p) · GW(p)
There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.
I'd be wary of post hoc ergo propter hoc in this context. You might also have expected that by leaving for other projects these posters would create a vacuum for others to fill. It could be worth looking at why that didn't happen.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2016-11-27T17:50:24.722Z · LW(p) · GW(p)
One interesting thing is that at one point post-Eliezer, there were two "rising stars" on LW who were regularly producing lots of fascinating content: lukeprog and So8res. Both stopped regularly posting here some time after they were recruited by MIRI and their priorities shifted.
↑ comment by Gordon Seidoh Worley (gworley) · 2016-11-27T21:49:13.204Z · LW(p) · GW(p)
This is why I very much like Medium. I think of it as Twitter for people who want to write/read long things rather than short things. It's also much nicer than Twitter in my experience.
↑ comment by SatvikBeri · 2016-11-27T09:59:52.829Z · LW(p) · GW(p)
My theory is that the main things that matter are content and enforcement of strong intellectual norms, and both degraded around the time a few major high-status members of the community mostly stopped posting (e.g. Eliezer and Yvain.)
The problem with lack of content is obvious, the problem with lack of enforcement is that most discussions are not very good, and it takes a significant amount of feedback to make them better. But it's hard for people to get away with giving subtle criticism unless they're already a high-status member of a community, and upvotes/downvotes are just not sufficiently granular.
Replies from: Morendil↑ comment by Morendil · 2016-11-27T10:33:22.419Z · LW(p) · GW(p)
This feels like a good start but one that needs significant improvement too.
For instance, I'm wondering how much of the situation Anna laments is a result of LW lacking an explicit editorial policy. I for one never quite felt sure what was or wasn't relevant for LW - what had a shot at being promoted - and the few posts I wrote here had a tentative aspect to them because of this. I can't yet articulate why I stopped posting, but it may have had something to do with my writing a bunch of substantive posts that were never promoted to Main.
If you look at the home page only (recent articles in Main) you could draw the inference that the main topics on LessWrong are MIRI, CFAR, FHI, "the LessWrong community", with a side dish of AI safety and startup founder psychology. This doesn't feel aligned with "refining the art of human rationality", it makes LessWrong feel like more of a corporate blog.
Replies from: SatvikBeri↑ comment by SatvikBeri · 2016-11-27T22:30:42.045Z · LW(p) · GW(p)
Agree that a lot more clarity would help.
Assuming Viliam's comment on the troll is accurate, that's probably sufficient to explain the decline: http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2n
comment by entirelyuseless · 2016-11-27T06:58:57.600Z · LW(p) · GW(p)
I disagree with #1 and #2, and I don't identify as a rationalist (or for that matter, much as a member of any community), but I think it is true that Less Wrong has been abandoned without being replaced by anything equally good, and that is a sad thing. In that sense I would be happy to see attempts to revive it.
I definitely disagree with the comment that SSC has a better layout, however; I think people moved there because there were no upvotes and downvotes. The layout for comments there is awful, and it has a very limited number of levels, which after a few comments prevents you from responding directly to anything.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2016-11-27T12:15:30.994Z · LW(p) · GW(p)
Gonna chip in a +1 regarding SSC's comment system. There are good comments, but this seems in spite of the comment mechanism, not because.
Replies from: Vaniver↑ comment by Vaniver · 2016-11-27T17:37:04.448Z · LW(p) · GW(p)
Eh, one thing I've noticed about SSC is a number of deeply bad comments, which I don't think I've seen on LW. Yes, there are also good comments, but I can imagine someone five years ago looking at the state of SSC commenting now and saying "and this is why we need to ban politics" instead of seeing it as a positive change.
comment by nimim-k-m · 2016-12-07T18:57:34.724Z · LW(p) · GW(p)
SSC linked to this LW post (here http://slatestarcodex.com/2016/12/06/links-1216-site-makes-right/ ). I suspect it might be of some use to you if I explain my reasons why I'm interested in reading and commenting on SSC but not very much on LW.
First of all, the blog interface is confusing, more so than regular blogs or sub-reddits or blog-link-aggregators.
Also, to use LW terminology, I have a pretty negative prior on LW. (Others might say that LW does not have a very good brand.) I'm still not convinced that AI risk is very important (nor that decision theory is going to be useful when it comes to mitigating AI risk (I work in ML)). The sequences and list of top posts on LW are mostly about AI risk, which to me seems quite tangential to the attempt at a modern rekindling of the Western tradition of rational thought (which I do consider a worthy goal). It feels like (mind you, this is my initial impression) this particular rationalist community tries to sell me the idea that there's this very important thing about AI risk, and it's very important that you learn about it and then donate to MIRI (or whatever it's called today). Also, you can learn rationality in workshops, too! It resembles just a bit too much (and not a small bit) either a) the certain religions that have people stopping me on the street or ringing my doorbell and insisting on how it's the most important thing in the world that I listen to them and read their leaflet, or b) the whole big wahoonie that is the self-help industry. On both counts, my instincts tell me: stay clear of it.
And yes, most of the all the important things to have a discussion about involve or at least touch politics.
Finally, I disliked HPMOR. Both as fiction and as presentation of certain arguments. I was disappointed when I found out HPMOR and LW were related.
On the other hand, I still welcome the occasional interesting content that happens to be posted on LW and makes ripples in the wider internet (and who knows, maybe I'll comment now that I bothered to make an account). But I ask you to reconsider whether LW is actually the healthiest part of the rationalist community, or whether the more general cause of "advancement of more rational discourse in public life" would be better served by something else (for example, a number of semi-related communities such as blogs and forums and meat-space communities in academia). Not all rationalism needs to be LW-style rationalism.
edit. explained arguments more
Replies from: Vaniver↑ comment by Vaniver · 2016-12-07T23:35:04.620Z · LW(p) · GW(p)
Thanks for sharing! I appreciate the feedback, but because it's important to distinguish between "the problem is that you are X" and "the problem is that you look like you are X," I think it's worth hashing out whether some points are true.
The sequences and list of top posts on LW are mostly about AI risk
Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding. (His views have since changed, a document I think is worth reading in full.)
And the Sequences themselves are rarely if ever directly about AI risk; they're more often about the precursors to the AI risk arguments. If someone thinks that intelligence and morality are intrinsically linked, instead of telling them "no, they're different" it's easier to talk about what intelligence is in detail and talk about what morality is in detail and then they say "oh yeah, those are different." And if you're just curious about intelligence and morality, then you still end up with a crisper model than you started with!
which to me seems quite tangential to the attempt at modern rekindling of the Western tradition of rational thought
I think one of the reasons I consider the Sequences so successful as a work of philosophy is because it keeps coming back to the question of "do I understand this piece of mental machinery well enough to program it?", which is a live question mostly because one cares about AI. (Otherwise, one might pick other standards for whether or not a debate is settled, or how to judge various approaches to ideas.)
But I ask you to reconsider whether LW is actually the healthiest part of the rationalist community, or whether the more general cause of "advancement of more rational discourse in public life" would be better served by something else (for example, a number of semi-related communities such as blogs, forums, and meat-space communities in academia). Not all rationalism needs to be LW-style rationalism.
I think everyone is agreed about the last bit; woe betide the movement that refuses to have friends and allies, insisting on only adherents.
For the first half, I think considering this involves becoming more precise about 'healthiest'. On the one hand, LW's reputation has a lot of black spots, and those basically can't be washed off, but on the other hand, it doesn't seem like reputation strength is the most important thing to optimize for. That is, having a place where people are expected to have a certain level of intellectual maturity that grows over time (as the number of things that are discovered and brought into the LW consensus grows) seems like the sort of thing that is very difficult to do with a number of semi-related communities.
Replies from: nimim-k-m↑ comment by nimim-k-m · 2016-12-09T07:39:57.993Z · LW(p) · GW(p)
Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding.
I grant that I was speaking from memory; the last time I read the LW material was years ago. The MIRI and CFAR logos up there did not help.
comment by old (remmelt) · 2016-11-30T22:38:35.887Z · LW(p) · GW(p)
I oversee a list of Facebook groups, so if there's any way I can help support this, please let me know, along with your arguments: https://www.facebook.com/EffectiveGroups/
Here's some intuitions I have:
It will be really hard to work against Facebook's network effects and ease of use, but I think its social role should be emphasised instead. Likewise for the EA Forum, though maybe it can take on a specific role, like being more friendly to new people and more of a place to share information and make announcements.
If you position LW as setting the gold standard of conversations on rationality and ethics, rather than giving anyone on the internet the ability to join in most conversations, that will give authors an incentive to cross-post or adapt their Facebook posts for it. Otherwise, there's no clear distinction and no reason to go to one place instead of the other. However, you can still include layers of access to communication.
Taking Anna's first point on the deadly puzzle, I think this should be a place for people of high merit and specialised knowledge to focus on solving it. I wouldn't know what the best meritocracy mechanisms for this would be that don't create bad side-effects. "The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have."
Maybe external blog and Facebook comments can be a first filter for thinking. If the commenters feel that their texts are of high enough quality, they can post a better version of them on the LW article. As an example, this would mean that a blog like Slate Star Codex would have a LW icon link that allows a commenter to also post a cleaner version on the crossposted LW article, if it meets the standards and he or she has the access. Having different usernames on different platforms may make this process less transparent.
To summarise, I would optimise for the quality of the people, not the quantity of them. A small group can make a major difference but gets hindered by the noise of the crowd. There are plenty of other places on the internet for people to pleasantly discuss bias, get into intriguing debates and one-up each other.
I don't mean this comment to sound elitist or arrogant. I probably wouldn't make the cut.
I read lots of good suggestions for improving this website. They risk making the plans too complicated and difficult to execute though. The fact that LW's structure has been stagnant for several years indicates to me that this is a much more difficult problem to solve than an inside view would suggest. I think starting with fundamentals for engaging people like above should be the priority and likely means making some hard decisions.
For the rest, I think I don't have much of use to contribute to this discussion as a newbie. Please mention where you think I'm wrong here.
comment by Morendil · 2016-11-28T18:41:50.398Z · LW(p) · GW(p)
I realize I haven't given a direct answer yet, so here it is: I'm in, if I'm wanted, and if some of the changes discussed here take place. (What it would take to get me onboard is, at the least, an explicit editorial policy and people in charge of enforcing it.)
comment by Bo102010 · 2016-11-28T02:24:26.495Z · LW(p) · GW(p)
Others have made these points, but here are my top comments:
- The site was best when there was a new, high-quality post from a respected community member every day or two.
- The ban on politics means that a lot of interesting discussion migrates elsewhere, e.g. to Scott's blog.
- The site's current structure (posts vs. comments) seems dated. I'd like to try something like Discourse (discourse.org).
comment by Lumifer · 2016-12-07T19:00:25.632Z · LW(p) · GW(p)
An interesting discussion on HN -- not about LW but about Reddit -- which still offers useful commentary about what HN people expect from a "conversational locus".
comment by itaibn0 · 2016-12-04T22:53:08.208Z · LW(p) · GW(p)
Given the community's initial heavy interest in the heuristics & biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.
My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth trying anyway by current prominent rationalists. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious the result won't work the same as the old Less Wrong.
Finally, a reminder on Less Wrong history, which suggests that we lost more than a group of high-quality posters: Less Wrong wasn't always a polyamory hub. It became that way because there was a group of people who seriously believed they could improve the way they think, a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded. Do you think a change of that scale will ever happen in the future of the rationalist community?
Replies from: ChristianKl↑ comment by ChristianKl · 2016-12-05T12:41:39.480Z · LW(p) · GW(p)
It became that way because there was a group of people who seriously believed they could improve the way they think, a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded.
I don't buy that account of the history as being complete. Many people in the rationality community have contact with other communities that also have a higher prevalence of polyamory. The vegan community also has a higher share of polyamorous people.
Replies from: itaibn0↑ comment by itaibn0 · 2016-12-06T00:59:21.851Z · LW(p) · GW(p)
Perhaps I should not have used such sensationalist language. I admit I don't know the whole story, and that more details would likely reveal many nonrational reasons the change occurred. Still, I suspect rational persuasion did play a role, if not a complete one. Anecdotally, the Less Wrong discussion changed my opinion of polyamory from "haven't really thought about it that much" to "sounds plausible but I haven't tried it".
In any case, if your memory of that section of Less Wrong history contributes positively to your nostalgia, it's worth reconsidering the chance that events like that will ever happen again.
comment by whpearson · 2016-11-29T01:05:39.492Z · LW(p) · GW(p)
My 2 cents. We are not at a stage to have a useful singular discussion. We need to collect evidence about how agents can or cannot be implemented before we can start to have a single useful discussion. Each world view needs their own space.
My space is currently my own head and I'll be testing my ideas against the world, rather than other people in discussion. If they hold up I'll come back here.
comment by NatashaRostova · 2016-11-28T21:52:01.267Z · LW(p) · GW(p)
I've known about Less Wrong for about two full years. A few weeks ago I started coming here regularly. A week ago I made an account -- right before this post and others like it.
My own poetic feeling is that there is a change in the winds, and the demand for a good community is growing. SSC has no real community. Facebook is falling apart with fake news and awful political memes. People are losing control of their emotions w.r.t. politics. And calm scientific rationalist approaches are falling apart.
I deactivated my FB, made an account here, and have done my best, despite not (yet) being anyone well known, to keep it going.
I think there is a change that is drawing people here again. But it needs to be encouraged. The rules for rationalist discussion 5 years ago aren't the ones of today. Get rid of the stupid no politics rule, and just enforce good scientific discussion. Scott Alexander's Trump post has received 200k+ views. The demand for a well written post like that was incredible, and the original 'rationalist' heart of the internet wouldn't let it be written. Is that not silly?
comment by rayalez · 2016-11-27T21:59:36.862Z · LW(p) · GW(p)
I am working on a project with a similar purpose, and I think you will find it interesting:
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
If you find it interesting and can offer some feedback - I would really appreciate it!
comment by Robin · 2016-11-30T20:11:14.277Z · LW(p) · GW(p)
I think the Less Wrong website diminished in popularity because of the local meetups. Face to face conversation beats online conversation for most practical purposes. But many Less Wrongers have transitioned to being parents, or have found more professional success so I'm not sure how well the meetups are going now. Plus some of the meetups ban members rather than rationally explaining why they are not welcome in the group. This is a horrible tactic and causes members to limit how they express themselves... which goes against the whole purpose of rationality meetups.