LW 2.0 Strategic Overview

post by habryka (habryka4) · 2017-09-15T03:00:07.826Z · LW · GW · Legacy · 296 comments

Contents

  Why bother with LessWrong 2.0?  
  The Existing Discussion Around LessWrong 2.0
  My models of LessWrong 2.0
  I. 
  II.
  The site structure: 
  The writing experience: 
  Meta
  Featured posts
  Meetups (implementation unclear)
  Shortform (implementation unclear)
  Why? 
  The karma system:
  III.
  How you can help and issues to discuss:

Update: We're in open beta! You can now sign up for a new account or log in with your LW 1.0 account (for the latter, we did not copy over your passwords, so hit "forgot password" to receive a password-reset email).

Hey Everyone!

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and for generally bringing all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am not that great a writer, so writing things like this takes me quite a long time and this ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money.

Now that the logistics are out of the way, let’s get to the meat of this post. What is our plan for LessWrong 2.0, what were our key assumptions in designing the site, what does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post:

  1. Why bother with LessWrong 2.0?
  2. The existing discussion around LessWrong 2.0
  3. My models of LessWrong 2.0
  4. How you can help and issues to discuss

Why bother with LessWrong 2.0?

I feel that, independently of how many things were and are wrong with the site and its culture, over the course of its history it has been one of the few places in the world I know of where a spark of real discussion has happened, and where real intellectual progress was made on actually important problems. So let me begin with a summary of things that I think the old LessWrong got right, that are essential to preserve in any new version of the site:

On LessWrong…

When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post.

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing that is very important to me, so I’ve copied it in here:

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.
2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.
3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.
4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.
5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.
6. We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform and the design of LessWrong 2.0, and I won’t be able to summarize it all here, but I can try my best to summarize the most important points and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:
1. Eliezer's ghost -- He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.
[...]
...I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future... is exactly what I would do if I wanted to maintain the status quo for a few more years.
Any serious attempt at revitalising lesswrong.com should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the baton to a generally-accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.
I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”.

In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t, because nobody with the mandate put significant resources into fixing them. I think the cause of this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time.

Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved.


Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website:

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn't work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just "why am I reading this random bulletin board full of stuff I'm not interested in?"
2. Another of Eliezer's talents was carefully skirting the line between "so mainstream as to be boring" and "so wacky as to be an obvious crackpot". Most people couldn't skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like "we need to be less boring!" or "we need fewer crackpots!", and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.
3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I'm not just talking about AI - I'm talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them - but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren't will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarrassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.
4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like "Let's use Bayes to hack our utility functions to get superfuzzies in a group house!". Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarrassed being seen next to.
5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn't fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.
6. There was a norm against politics, but it wasn't a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn't like the alt-right (ie 98% of the community) and the moderation system wasn't good enough to let anyone do anything about it.
7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn't really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.
8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the "stay off LW" vibe.
I worry that all the existing "resurrect LW" projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles - ie deal with 8 and the second half of 7. I think they're ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott described in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective that I have on those latter points is, I think, best illustrated in an analogy:

When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn’t explicitly taught in class, or when you ask an undergraduate physics student what they think about how to do physics research, or what ideas they have for improving society, you will often get quite naive-sounding answers (I have heard everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency Tesla coils”). I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD.

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members.

And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly-sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work (based on user interviews with a lot of top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently; this strikes me as a fairly reasonable expectation for the future).

And I do think that we have many graduate students and tenured professors of the rationality community who are not Eliezer, who do not sound like crackpots, who can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve: Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there).

But none of this means that what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first-year undergrad essays, but I think the solution can’t be for those essays to not get written at all. Instead, it has to involve some way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who writes something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise, and second, a failure to enforce moderation norms when people did turn out to be crackpots or were just unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to Main, the fact that discussion is by default ordered by recency and not by some kind of scoring system, and the fact that the moderation tools were completely insufficient (but more on the details of that in the next section).


My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems):

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven
  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing
  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations

I.

The first bottleneck for our community, and I think the biggest, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I will have forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts/comments is hard) and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon, they get discussed and forgotten. To put this another way: the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions that maybe some other people remember. We have the Sequences, but there’s no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of successfully building on its previous ideas. The physics community has this system where new ideas get put into journals, and then, if they’re new, important, and true, they eventually get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical physics with no journals, no textbook publishers, and only one textbook that is about a decade old.

This seems to me to be what Anna is talking about: the purpose of the single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or having you contribute to the lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing on the new LessWrong to feel timeless.

To achieve this, we’ve built the following things:

And there are some more features the team is hoping to build in this direction, such as:

II.

The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content needs to be turned into common knowledge.

I think this is a lot of what Scott was pointing at in his summary of the decline of LessWrong. We need a way for people to learn from their mistakes, while not flooding everyone else’s inboxes, and while giving people active feedback on how to improve their writing.

The site structure:

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:

The writing experience:

If you write a post, it first shows up nowhere but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma, you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote your content (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post.
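For concreteness, here is a minimal sketch of that kind of time-decayed scoring in Python, using the commonly cited HN constants; the actual constants and tweaks LW 2.0 will use aren't specified here, so treat the numbers as placeholders:

```python
def frontpage_score(karma: int, hours_since_post: float) -> float:
    # Karma pushes a post up; age pulls it down polynomially.
    # The -1, +2 and 1.8 are the commonly cited HN values, not LW 2.0's.
    return (karma - 1) / (hours_since_post + 2) ** 1.8
```

Under these placeholder constants, rank decays smoothly with age: a 50-karma post from two hours ago (score ~4.0) outranks a 100-karma post from a day ago (score ~0.3).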

If you write a good comment on a post, a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions.

Meta

Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, site features, and general site-policy questions: basically the thing that all StackExchanges have. Karma earned here will not add to your total karma and will not give you more influence over the site.

Featured posts

In addition to the main feed, there is a promoted posts section that you can subscribe to via email and RSS. It will have on average three posts a week, which for now will just be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community.

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out.

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented.

Why?

The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or to the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase with the ability to promote content (i.e. many more people than have the ability to promote content to Main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts.

The goal of all of this is to allow users to see good content at various levels of engagement with the site, while giving some personalization options so that people can follow the people they are particularly interested in, and while also ensuring that this does not sabotage the attempt at building common knowledge, by having the best posts from the whole ecosystem featured and promoted on the frontpage.

The karma system:

Another thing I’ve been working on to fix the signal-to-noise ratio is improving the karma system. It’s important that the people having the most significant insights are able to shape a field more. If you’re someone who regularly produces real insights, you’re better able to notice and bring up other good ideas. To achieve this we’ve built a new karma system in which your upvotes and downvotes carry more weight if you already have a lot of karma. So far the weighting is a very simple heuristic: your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will include those in that post.
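As a rough illustration, the log-base-5 heuristic could look like this in Python (the flooring and the minimum weight of 1 are my assumptions for the sketch, not details given above):

```python
import math

def vote_weight(total_karma: int) -> int:
    # A voter's up/downvotes count for log base 5 of their total karma.
    # Flooring and the minimum weight of 1 are assumptions, not spec.
    if total_karma <= 1:
        return 1
    return max(1, math.floor(math.log(total_karma, 5)))
```

Under this sketch, a user with 25 karma casts votes of weight 2, while a user with 3,000 karma casts votes of weight 4.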

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
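To make the circularity concrete: this is essentially the PageRank fixed point, which can be computed by power iteration. Here is a minimal sketch under assumed details (the damping factor and normalization are standard PageRank choices, not anything decided for LW 2.0):

```python
import numpy as np

def eigenkarma(votes: np.ndarray, damping: float = 0.85,
               iterations: int = 100) -> np.ndarray:
    """votes[i, j] = how much user i has upvoted user j's content.
    Returns one trust score per user: being upvoted by trusted users
    makes you trusted, and the circular definition is resolved by
    iterating toward the dominant eigenvector, as in PageRank."""
    n = votes.shape[0]
    # Each voter distributes one unit of trust across everyone they upvoted.
    row_sums = votes.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    transition = votes / row_sums
    trust = np.full(n, 1.0 / n)
    for _ in range(iterations):
        trust = (1 - damping) / n + damping * (trust @ transition)
    return trust / trust.sum()
```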

I am also interested in having some form of two-tiered voting, similar to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined.

III.

The third and last bottleneck is an actually working moderation system that is fun for moderators to use, while also giving people whose content was moderated a sense of why it was moderated and how they can improve.

The most common and basic complaint on LessWrong currently pertains to trolls and sockpuppet accounts, which the Reddit fork's mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools that include the ability for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to see the IP address that an account comes from (this will be ready by the open beta).

Besides that, we are currently working on cultivating a moderation group we are calling the “Sunshine Regiment.” Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.


How you can help and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to have a vote open to those who had a thousand karma or more on 9/13 to determine whether we should move forward with the new site design, which would move to the lesswrong.com url from its temporary beta location, or leave LessWrong as it is now. (As this would represent the failure of the plan to revive LW, this would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where discussion would be particularly fruitful:

The closed beta can be found at www.lesserwrong.com.

Ben, Vaniver, and I will be in the comments!

296 comments

Comments sorted by top scores.

comment by JenniferRM · 2017-09-17T23:22:27.797Z · LW(p) · GW(p)

I'm super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long term outcomes.

Fundamentally, I believe that a major component of LW's decline isn't in the primary article and isn't being addressed. Basically, a lot of the people drifted away over time who were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.

These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happen here on a regular basis. Their loss seems to have occurred in parallel with a general decrease in public acceptance of agonism in the English-speaking political world, and with a widespread cultural retreat from substantive longform internet debates, the latter being specifically relevant to LW 2.0.

My impression is that part of people drifting away was because ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer's writing.

The fundamental issue seems to be existential risks to the human species from exceptionally high-quality thinking with no predictably benevolent goals, augmented by recursively improving computers (i.e. the singularity as originally defined by Vernor Vinge in his 1993 article). This original vision covers (and has always covered) Artificial Intelligence and Intelligence Amplification.

Now, I have no illusions that an unincorporated community of people can retain stability of culture or goals over periods of time longer than about 3 years.

Also, even most incorporated communities drift quite a bit or fall apart within mere decades. Sometimes the drift is worthwhile. Initially the thing now called MIRI was a non-profit called "The Singularity Institute for Artificial Intelligence". Then they started worrying that AI would turn out bad by default, and dropped the "...for Artificial Intelligence" part. Then a late-arriving brand-taker-over ("Singularity University") bought their name for a large undisclosed amount of money, and the real research started happening under the new name "Machine Intelligence Research Institute".

Drift is the default! As Hanson writes: Coordination Is Hard.

So basically my hope for "grit with respect to species-level survival in the face of the singularity" rests in gritty individual humans whose commitment and skills arise from a process we don't understand, can't necessarily replicate, and often can't even reliably teach newbies to identify.

Then I hope for these individuals to be able to find each other and have meaningful 1:1 conversations and coordinate at a smaller and more tractable scale to accomplish good things without too much interference from larger scale poorly coordinated social structures.

If these literal 1-on-1 conversations happen in a public forum, then that public forum is a place where "important conversations happen" and the conversation might be enshrined or not... but this enshrining is often not the point.

The real point is that the two gritty people had a substantive give and take conversation and will do things differently with their highly strategic lives afterwards.

Oftentimes a good conversation between deeply but differently knowledgeable people looks like an exchange of jokes, punctuated every so often by a sharing of citations (basically links to non-crap content) when a mutual gap in knowledge is identified. Dennett's theory of humor is relevant here.

This can look, to the ignorant, almost like trolling. It can look like joking about megadeath or worse. And this appearance can become more vivid if third and fourth parties intervene in the conversation, and are brusquely or jokingly directed away.

The false inference of bad faith communication becomes especially pernicious if important knowledge is being transmitted outside of the publicly visible forums (perhaps because some of the shared or unshared knowledge verges on being an infohazard).

The practical upshot of much of this is that I think that a lot of the very best content on Lesswrong in the past happened in the comment section, and was in the form of conversations between individuals, often one of whom regularly posted comments with a net negative score.

I offer you Tim Tyler as an example of a very old commenter who (1) reliably got net negative votes on some of his comments while (2) writing from a reliably coherent and evidence based (but weird and maybe socially insensitive) perspective. He hasn't been around since 2014 that I'm aware of.

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, while probably being unusually high (maybe the highest user) on a second or third such vector. He seems to me like the kind of person you might actually be trying to drive away, while at the same time being something of a canary for the tolerance of people genuinely focused on something other than winning at a silly social media game.

Upvotes don't matter except to the degree that they conduce to surviving and thriving. Getting a lot of upvotes and enshrining a bunch of ideas into the canon of our community and then going extinct as a species is LOSING.

Basically, if I had the ability to, for the purposes of learning new things, I would just filter out all the people who are high on the first eigendemocracy vector.

Yes, I want those "traditionally good" people to exist and I respect their work... but I don't expect novel ideas to arise among them at nearly as high a rate, to even be available for propagation and eventual retention in a canon.

Also, the traditionally good people's content and conversations are probably going to be objectively improved if people high in the second and third and fourth such vectors also have a place, and that place allows them the ability to object in a fairly high-profile way when someone high in the first eigendemocracy vector component proposes a stupid idea.

One of the stupidest ideas, that cuts pretty close to the heart of such issues, is the possible proposal that people and content whose first eigendemocracy vector are low should be purged, banned, deleted, censored, and otherwise made totally invisible and hard to find by any means.

I fear this would be the opposite of finding yourself a worthy opponent and another step in the direction of active damage to the community in the name of moderation and troll fighting, and it seems like it might be part of the mission, which makes me worried.

Replies from: ESRogs
comment by ESRogs · 2017-10-10T15:08:46.878Z · LW(p) · GW(p)

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, while probably being unusually high (maybe the highest user) on a second or third such vector.

Is there a natural interpretation of what the first vector means vs what the second or third mean? My lin alg is rusty.

Replies from: Lukas_duplicate0.9564425385039328
comment by Lukas_duplicate0.9564425385039328 · 2017-11-12T13:37:09.912Z · LW(p) · GW(p)

I wondered the same thing. The explanation I've come up with is the following:

See https://en.wikipedia.org/wiki/Linear_dynamical_system for the relevant math.

Assuming the interaction matrix is diagonalizable, the system state can be represented as a linear combination of the eigenvectors. The eigenvector with the largest positive eigenvalue grows the fastest under the system dynamics. Therefore, the respective component of the system state will become the dominating component, much larger than the others. (The growth of the components is exponential.) Ultimately, the normalized system state will be approximately equal to the fastest-growing eigenvector, unless there are other equally fast-growing eigenvectors.

If we assume the eigenvalues are non-degenerate and thus sortable by size, one can identify the fastest-growing eigenvector, the second-fastest-growing eigenvector, etc. I think this is what JenniferRM means by 'first' and 'second' eigenvector.
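A toy numerical illustration of that ordering (the matrix entries are made up; this just shows how one would sort eigenvectors by growth rate):

```python
import numpy as np

# Made-up interaction matrix: entry [i, j] is how strongly
# component j feeds into component i under the system dynamics.
A = np.array([[0.9, 0.3, 0.0],
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.7]])

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(-eigvals.real)   # sort by growth rate, descending
first = eigvecs[:, order[0]].real   # dominant ("first") eigenvector
second = eigvecs[:, order[1]].real  # second-fastest-growing eigenvector
```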

comment by Paul Crowley (ciphergoth) · 2017-09-15T03:53:27.322Z · LW(p) · GW(p)

Thank you all so much for doing this!

Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all - I'm all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.

I am not sure whether votes on comments should be treated as votes on people. I think that some people might make good comments who would be bad moderators, while I'd vote up the weight of Carl Shulman's votes even if he never commented.

The feature map link seems to be absent.

Replies from: DragonGod, Habryka
comment by Habryka · 2017-09-15T04:25:56.489Z · LW(p) · GW(p)

Feature roadmap link fixed!

comment by richardbatty · 2017-09-16T12:07:41.305Z · LW(p) · GW(p)

Have you done user interviews and testing with people who it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.

You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:

  • My main concern is that lesswrong 2.0 will come across as (or will actually be) a bizarre subculture, rather than a quality intellectual community. The rationality community is off-putting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
    • Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don't think calling the moderation group "sunshine regiment" is a good idea for this reason.
    • Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
    • Encourage writers to do literature reviews to connect to existing work in relevant fields.
  • It could also help to:
    • Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don't want to force a particular methodology, it would be good to nudge people in an empirical direction.
    • Encourage content that's directly relevant to people doing important work, rather than mainly being abstract stuff.
Replies from: Habryka, NancyLebovitz, Nisan, scarcegreengrass
comment by Habryka · 2017-09-16T23:50:40.738Z · LW(p) · GW(p)

I feel that this comment deserves a whole post in response, but I probably won't get around to that for a while, so here is a short summary:

  • I generally think people have confused models about what forms of weirdness are actually costly. The much more common error mode for online communities is being boring and uninteresting. The vast majority of the most popular online forums are really weird and have a really strong, distinct culture. The same is true for religions. There are forms of weirdness that prevent you from growing, but I feel that implementing the suggestions in this comment in a straightforward way would mostly result in the forum becoming boring and actually stunting its meaningful growth.

  • LessWrong is more than just weird in a general sense. A lot of the things that make LessWrong weird are actually the result of people having thought about how to have discourse, and then actually implementing those norms. That doesn't mean that they got it right, but if you want to build a successful intellectual community you have to experiment with norms around discourse, and avoiding weirdness puts a halt to that.

  • I actually think that one of the biggest problems with Effective Altruism is the degree to which large parts of it are weirdness-averse, which I see as one of the major reasons why EA hasn't really produced any particularly interesting insights or updates in the past few years. CEA at least seems to agree with me (probably partially because I used to work there and shaped the culture a bit, so this isn't independent), and tried to counteract this by making the explicit theme of this year's EA Global in SF "accepting the weird parts of EA". As such, I am not very interested in appeasing current EAs' need for normalcy and properness, and instead hope that this will move EA towards becoming more accepting of weird things.

I would love to give more detailed reasoning for all of the above, but time is short, so I will leave it at this. I hope this gave people at least a vague sense of my position on this.

Replies from: richardbatty
comment by richardbatty · 2017-09-17T18:55:47.073Z · LW(p) · GW(p)

You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community rather than thinking about what kinds of people you want to be contributing to it and learning from it and building it for them. So it seems important to do some user interviews with people outside of the community who you'd like to join it.

On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that less wrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This then gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep - ideas such as existential risk fit in here. Weirdness that comes more out of the cultural community seems unnecessary - such as references to HPMOR.

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I'd like to see lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

Replies from: John_Maxwell_IV, NancyLebovitz
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-18T05:05:16.749Z · LW(p) · GW(p)

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds.

I'm not persuaded that this is substantially more true of scientists than people in the LW community.

Notably, the range of different kinds of expertise that one finds on LW is much broader than that of a typical academic department (see "Profession" section here).

They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture.

I don't think people usually become scientists unless they like the culture of academic science.

I'd like to see lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

I think "intellectual communities" are just a high-status kind of subculture. "Be more high status" is usually not useful advice.

I think it might make sense to see academic science as a culture that's optimized for receiving grant money. Insofar as it is bland and respectable, that could be why.

If you feel that receiving grant money and accumulating prestige is the most important thing, then you probably also don't endorse spending a lot of time on internet fora. Internet fora have basically never been a good way to do either of those things.

Replies from: richardbatty
comment by richardbatty · 2017-09-18T09:19:57.407Z · LW(p) · GW(p)

The core of my argument is: try to select as much as possible on what you care about (ability and desire to contribute and learn from lesswrong 2.0) and as little as possible on stuff that's not so important (e.g. do they get references to hpmor). And do testing to work out how best to achieve this.

By intellectual community I wasn't meaning 'high status subculture'; I was trying to get across the idea of a community that selects on people's ability to make intellectual contributions, rather than their fit with a subculture. Science is somewhat like this, although as you say there is a culture of academic science which makes it more subculture-like. stackoverflow might be a better example.

I'm not hoping that lesswrong 2.0 will accumulate money and prestige, I'm hoping that it will make intellectual progress needed for solving the world's most important problems. But I think this aim would be better served if it attracted a wide range of people who are both capable and aligned with its aims.

comment by NancyLebovitz · 2017-09-17T19:27:56.097Z · LW(p) · GW(p)

My impression is that you don't understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there's a venue that suits them-- the venue is necessary, but stays empty unless the desire comes into play.

" I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned."

Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?

Replies from: richardbatty
comment by richardbatty · 2017-09-17T20:19:47.411Z · LW(p) · GW(p)

"I think communities form because people discover they share a desire"

I agree with this, but would add that it's possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don't like.

"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"

That's something I'd like to know. But I think it's important for the rationality community to attempt to serve these kinds of people both because these people are important for the goals of the rationality community and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it's going to be difficult for it to solve some of the world's most important problems.

Perhaps we have different goals in mind for lesswrong 2.0. I'm thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, but targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind then you'd care less about appealing to audiences outside of the community.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2017-09-17T20:50:14.407Z · LW(p) · GW(p)

I'm fond of LW (or at least its descendants). I'm somewhat weird myself, and more tolerant of weirdness than many.

It has taken me years and some effort to get a no doubt incomplete understanding of people who are repulsed by weirdness.

From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.

The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?

I'm hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.

Replies from: richardbatty
comment by richardbatty · 2017-09-18T09:05:51.298Z · LW(p) · GW(p)

"From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen."

I think a good argument against my position is that projects need to focus quite narrowly, and it makes sense to focus on the existing community given that it's also already produced good stuff.

Hopefully that's the justification that the project leaders have in mind, rather than them focusing on the current rationality community because they think that there aren't many people outside of it who could make valuable contributions.

comment by NancyLebovitz · 2017-09-16T14:37:09.102Z · LW(p) · GW(p)

It seems to me that you want to squeeze a lot of the fun out of the site.

I'm not sure how far it would be consistent with having a single focus for rationality online, but perhaps there should be a section or a nearby site for more dignified discussion.

I think the people you want to attract are likely to be busy, and not necessarily interested in interviews and testing for a rather hypothetical project, but I could be wrong.

comment by Nisan · 2017-09-19T04:28:16.597Z · LW(p) · GW(p)

Regarding a couple of your concrete suggestions: I like the idea of using existing academic jargon where it exists. That way, reading LW would teach me search terms I could use elsewhere or to communicate with non-LW users. (Sometimes, though, it's better to come up with a new term; I like "trigger-action plans" way better than "implementation intentions".)

It would be nice if users did literature reviews occasionally, but I don't think they'll have time to do that often at all.

comment by scarcegreengrass · 2017-09-19T16:05:08.527Z · LW(p) · GW(p)

This is a real dynamic that is worth attention. I particularly agree with removing HPMoR from the top of the front page.

Counterpoint: The serious/academic niche can also be filled by external sites, like https://agentfoundations.org/ and http://effective-altruism.com/.

comment by moridinamael · 2017-09-15T21:48:48.829Z · LW(p) · GW(p)

I've heard that in some cases, humans regard money to be an incentive.

Integrating Patreon, Paypal or some existing micropayments system could allow users to not only upvote but financially reward high-value community members.

If Less Wrong had a little "support this user on Patreon" icon next to every poster's username, I would certainly have thrown some dollars at more than a handful of Less Wrong posters. Put more explicitly - maybe Yvain and Eliezer would be encouraged to post certain content on LW2.0 rather than SSC/Facebook if they reliably got a little cash from the community at large every time they did it.

Speaking of the uses of money, I'm fond of communities that are free to read but require a small registration fee in order to post. Such fees are a practically insurmountable barrier to trolls. Eugine Nier could not have done what he did if registering an account cost $10, or even $1.

Replies from: John_Maxwell_IV, DragonGod, casebash
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-16T07:37:18.073Z · LW(p) · GW(p)

Does anyone know the literature on intrinsic motivation well enough to comment on whether paying users to post is liable to undermine other sources of motivation?

The registration fee idea is interesting, but exacerbates the chicken and egg problem inherent in online communities. I also have a hunch that registration fees tend to make people excessively concerned with preserving their account's reputation (so they can avoid getting banned and losing something they paid money for), in a way that's cumulatively harmful to discourse, but I can't prove this.

Replies from: None, Elo
comment by [deleted] · 2017-09-16T15:14:17.536Z · LW(p) · GW(p)

Yep!

See here and here

As one might expect, money is often a deterrent for actual habituation.

EDIT: Additional clarification:

The first link shows that monetary payment is only effective as a short-term motivator.

The second link is a massive study involving almost 2,000 people which tried to pay people to go to the gym. It found that after the payment period ended, gym attendance fell back to roughly pre-payment levels.

comment by Elo · 2017-09-16T07:50:13.857Z · LW(p) · GW(p)

Yes, it will probably cause people to devalue the site. If you pay a dollar, it will tend to "feel like" the entire endeavour is worth a dollar.

Replies from: John_Maxwell_IV, NancyLebovitz, moridinamael
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-17T02:06:48.241Z · LW(p) · GW(p)

I was talking about paying people to contribute. Not having people pay for membership.

comment by NancyLebovitz · 2017-09-16T14:29:35.378Z · LW(p) · GW(p)

Metafilter has continued to be a pretty good site even though it requires a small fee to join. There's also a requirement to post a few comments (you can comment for free but need to be a member to do top level posts) and wait a week after sending in money. And it's actively moderated.

http://www.metafilter.com/about.mefi

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-17T02:06:32.320Z · LW(p) · GW(p)

I was talking about paying people to contribute. Not having people pay for membership.

comment by moridinamael · 2017-09-16T14:27:18.855Z · LW(p) · GW(p)

So charge $50 =)

comment by DragonGod · 2017-09-16T17:22:44.330Z · LW(p) · GW(p)

What about a currency, say tokens, that you get with upvotes and posts? 10 upvotes gives 1 token. You could add token payment for posts and/or comments to incentivise activity, though I'm not sure this would be an all-round good idea: adding payment incentives may lead to a greater quantity of activity, but lower quality. So perhaps token payments only on activity that garners a certain number of upvotes?

Tokens could be given to other users, granting them karma as well (if 10 karma = 1 token, then transferring a token might give the recipient an increase of 5-9 karma). Tokens would be a method of costly signalling that you enjoyed particular content--sort of a non-money analogue of reddit gold.
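
For concreteness, here is a minimal sketch of the proposed token mechanics. Every name, type, and number below is a hypothetical illustration of the idea above, not anything from the actual LW 2.0 codebase:

    interface User {
      karma: number;
      tokens: number;
    }

    const KARMA_PER_TOKEN = 10; // hypothetical exchange rate from the proposal

    // Credit karma from upvotes and mint one token per 10 karma gained
    // (ignores carry-over of remainders across awards, for simplicity).
    function awardKarma(user: User, karmaGained: number): void {
      user.karma += karmaGained;
      user.tokens += Math.floor(karmaGained / KARMA_PER_TOKEN);
    }

    // Transfer a token as a costly signal: the recipient gains somewhat
    // less karma (5-9) than the 10 karma the token represents, so repeated
    // transfers cannot mint karma out of thin air.
    function giveToken(from: User, to: User, karmaValue = 7): void {
      if (from.tokens < 1) throw new Error("no tokens to give");
      if (karmaValue < 5 || karmaValue > 9) throw new Error("karma value out of range");
      from.tokens -= 1;
      to.karma += karmaValue;
    }

The deliberate loss on each transfer is what would make the signal costly rather than free.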

comment by casebash · 2017-09-15T22:40:52.934Z · LW(p) · GW(p)

Are there many communities that do that apart from Metafilter?

Replies from: moridinamael
comment by moridinamael · 2017-09-15T22:50:38.033Z · LW(p) · GW(p)

You mean communities that require a fee? I'm specifically thinking of SomethingAwful, which has a bad reputation but is actually an excellent resource if you visit only the subforums and avoid the general discussion and politics sections of the site.

comment by richard_reitz · 2017-09-15T04:47:53.310Z · LW(p) · GW(p)

if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people navigate how much of the content of the page you are familiar with.

Idea: give sequence-writers the option to include quizzes, because this (1) demonstrates that a badgeholder actually understands what the badge indicates they understand (or, at least, makes that more likely) and (2) leverages the testing effect.

I await the open beta eagerly.

Replies from: ciphergoth, Raemon
comment by Paul Crowley (ciphergoth) · 2017-09-15T21:24:35.930Z · LW(p) · GW(p)

Also I have already read them all more than once and don't plan to do so again just to get the badge :)

comment by Raemon · 2017-09-15T06:47:51.153Z · LW(p) · GW(p)

leverages the which?

In any case, I like the idea, although it may be in the backlog for awhile.

Replies from: Raemon, richard_reitz
comment by Raemon · 2017-09-15T06:48:42.433Z · LW(p) · GW(p)

Although, it occurs to me that the benefit of an open source codebase that actually is reasonable to learn is that anyone that wants something like this to happen can just make it happen.

comment by richard_reitz · 2017-09-15T12:33:16.260Z · LW(p) · GW(p)

Testing effect.

(At this point, I should really know better than to trust myself to write anything at 1 in the morning.)

comment by ozymandias · 2017-09-15T15:55:59.500Z · LW(p) · GW(p)

Thank you for making this website! It looks really good and like someplace I might want to crosspost to.

If I may make two suggestions:

(1) It doesn't seem clear whether Less Wrong 2.0 will also have a "no politics" norm, but if it doesn't I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc. I think that culture war stuff is salacious enough that people love discussing it in spite of its obvious unimportance, and it would be good to have a way to dissuade that. Personally, I've tended to avoid online rationalist spaces where I can't block people who annoy me, because culture war stuff keeps coming up and when interacting with certain people I get defensive and upset and not in a good frame for discussion at all.

(2) Some inconspicuous way of putting in assorted metadata (content warnings, epistemic statuses, that sort of thing) so that interested people can look at them but they are not taking up the first 500 words of the post.

Replies from: Bakkot, Vaniver, philh, Regex, Viliam
comment by Bakkot · 2017-09-15T17:22:13.168Z · LW(p) · GW(p)

I would strongly support just banning culture war stuff from LW 2.0. Those conversations can be fun, but they require disproportionately large amounts of work to keep the light / heat ratio decent (or indeed > 0), and they tend to dominate any larger conversation they enter. Besides, there's enough places for discussion of those topics already.

(For context: I moderate /r/SlateStarCodex, which gets several thousand posts in its weekly culture war thread every single week. Those discussions are a lot less bad than culture war discussions on the greater internet, I think, and we do a pretty good job keeping discussion to that thread only, but maintaining both of these requires a lot of active moderation, and the thread absolutely affects the tone of the rest of the subreddit even so.)

Replies from: ozymandias, Jiro
comment by ozymandias · 2017-09-15T18:55:16.343Z · LW(p) · GW(p)

I'm not sure if I agree with banning it entirely. There are culture-war-y discussions that seem relevant to LW 2.0: for instance, people might want to talk about sexism in the rationality community, free speech norms, particular flawed studies that touch on some culture-war issue, dating advice, whether EAs should endorse politically controversial causes, nuclear war as existential risk, etc.

OTOH a policy that people should post this sort of content on their own private blogs seems sensible. There are definite merits in favor of banning culture war things. In addition to what you mention, it's hard to create a consensus about what a "good" culture war discussion is. To pick a fairly neutral example, my blog Thing of Things bans neoreactionaries on sight while Slate Star Codex bans the word in the hopes of limiting the amount they take over discussion; the average neoreactionary, of course, would strongly object to this discriminatory policy.

Replies from: Bakkot
comment by Bakkot · 2017-09-15T21:21:37.038Z · LW(p) · GW(p)

I think - I hope - we could discuss most of those without getting into the more culture war-y parts, if there were sufficiently strong norms against culture war discussions in general.

Maybe just opt-in rather than opt-out would be sufficient, though. That is, you could explicitly choose to allow CW discussions on your post, but they'd be prohibited by default.

comment by Jiro · 2017-09-19T18:04:08.033Z · LW(p) · GW(p)

Please, no.

The SSC subreddit culture war thread is basically run under the principle of "make the culture war thread low quality so people will go away". All that gets you is a culture war thread that is low quality.

comment by Vaniver · 2017-09-15T21:22:40.293Z · LW(p) · GW(p)

I expect the norm to be "no culture war" and "no politics" but there to be some flexibility. I don't want to end up with a LW where, say, this SSC post would be banned, and banning discussions of the rationality community that might get uncomfortable seems bad, and so on, but also I don't want to end up with a LW that puts other epistemic standards in front of rationality ones. (One policy we joked about was "no politics, unless you're Scott," and something like allowing people to put it on their personal page but basically never promoting it accomplishes roughly the same thing.)

Replies from: ozymandias
comment by ozymandias · 2017-09-16T01:25:38.166Z · LW(p) · GW(p)

Sorry, this might not be clear from the comment, but as a prospective writer I was primarily thinking about the comments on my posts. Even if I avoid culture war stuff in my posts, the comment section might go off on a tangent. (This is particularly a concern for me because of course my social-justice writing is the most well-known, so people might be primed to bring it up.) On my own blog, I tend to ban people who make me feel scared and defensive; if I don't have this capability and people insist on talking about culture-war stuff in the comments of my posts anyway, being on LW 2.0 will probably be unpleasant and aversive enough that I won't want to do it. Of course, I'm just one person and it doesn't make sense to set policy based on luring me in specific; however, I suspect this preference is common enough across political ideologies that having a way to accommodate it would attract more writers.

Replies from: Vaniver
comment by Vaniver · 2017-09-16T01:50:37.287Z · LW(p) · GW(p)

Got it; I expect the comments to have basically the same rules as the posts, and for you to be able to respond in some low-effort fashion to people derailing posts with culture war (by, say, just flagging a post and then the Sunshine Regiment doing something about it).

Replies from: Habryka
comment by Habryka · 2017-09-16T22:54:03.235Z · LW(p) · GW(p)

Yeah, that's roughly what I've been envisioning as well.

comment by philh · 2017-09-15T17:25:16.245Z · LW(p) · GW(p)

I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc.

To clarify: you want people to be able to apply this tag to their own posts, and in posts with it applied, culture war discussion is forbidden?

I approve of this.

I also wonder if it would be worth exploring a more general approach, where submitters have some limited mod powers on their own posts.

Replies from: ozymandias, Jiro
comment by ozymandias · 2017-09-15T18:42:00.348Z · LW(p) · GW(p)

Yes, that was my intent.

I believe the plan is to eventually allow some trusted submitters to e.g. ban people from commenting on their posts, but I would hope the "no culture war" tag could be applied even by people whom the mod team doesn't trust with broader moderation powers.

comment by Jiro · 2017-09-19T18:06:47.527Z · LW(p) · GW(p)

What do you do to people who

1) include culture war material in their own posts, and use this to prevent anyone from criticizing them, or

2) include things in their own posts that are not culture war, but to which a cultural war reference is genuinely relevant (sometimes to the point where they are saying something that can't be properly refuted without one)?

Replies from: philh
comment by philh · 2017-09-20T11:24:43.860Z · LW(p) · GW(p)

Play it by ear, but my instinctive reaction is to downvote (1). Options for (2) include "downvote", "ignore", and "try to tactfully suggest that you think they've banned discussion that would be useful, and between you try to work out a solution to this problem". Maybe they'd allow someone to create a CW-allowed discussion thread for that post and then summarise its contents, so they don't actually have to read it.

It partly depends whether their posts are attracting attention or not.

comment by Regex · 2017-09-15T16:22:08.732Z · LW(p) · GW(p)

The way culture war stuff is dealt with on the various Discord servers is to have a place to dump it all. This is often hidden to begin with and opt-in only, so people only become aware of it when they start trying to discuss such topics.

Replies from: Habryka
comment by Habryka · 2017-09-16T22:55:28.931Z · LW(p) · GW(p)

I've also been thinking quite a bit about certain tags on posts requiring a minimum karma for commenters. The minimum karma wouldn't have to be too high (e.g. 10-20 karma might be enough), but it would keep out people who only sign up to discuss highly political topics.
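
A rough sketch of what such a gate might look like; the shapes, tag name, and threshold here are made-up illustrations of the idea, not actual LW 2.0 code:

    interface Commenter { karma: number; }
    interface Post { tags: string[]; minKarmaToComment?: number; }

    // Posts carrying a politically charged tag get a small karma floor
    // (10-20 was the range floated above) before commenting is allowed.
    function canComment(user: Commenter, post: Post): boolean {
      const gated = post.tags.includes("culture-war"); // hypothetical tag name
      const threshold = post.minKarmaToComment ?? (gated ? 20 : 0);
      return user.karma >= threshold;
    }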

comment by Viliam · 2017-09-19T22:26:39.381Z · LW(p) · GW(p)

A big problem with culture wars is that they usually derail debates on other topics. At least my reaction to seeing them is often like: "if you want to debate a different topic, make your own damned thread!"

For example, I would be okay with having a debate about [some political topic X], as long as it happens in a thread called "[political topic X]". If someone is not interested, they can ignore the thread. People can upvote or downvote the thread to signal how they feel about the importance of debating the topic on LW.

But when such debates start in a different topic... well, sometimes it seems like there should be no problem with having some extra comments in a thread (the comment space is unlimited, you can just collapse the whole subthread), but the fact is that it still disrupts attention of people who would otherwise debate about the original topic.

There are also other aspects, like people becoming less polite, becoming obsessed with making their faction win, etc.

And there is the fact that having political debates on a website sometimes attracts people who come here only for the political debates. I don't usually have a problem with LW regulars discussing X, but I have a problem with fans of X coming to LW to support their faction.

Not sure what to conclude, though. Banning political debates completely feels like going too far. I would prefer having the political debates separately from other topics. But separate political debates is probably what would most attract the fans of X. (One quick idea is to make it so that positive karma gained in explicitly political threads is not counted towards the user total, but the negative one is. Probably a bad idea anyway, just based on prior probabilities. Or perhaps to prevent users younger than 3 months from participating, i.e. both commenting and voting in the political threads.)

comment by Alicorn · 2017-09-15T20:23:41.072Z · LW(p) · GW(p)

I feel more optimistic about this project after reading this! I like the idea of curation being a separate action and user-created sequence collections that can be voted on. I'm... surprised to learn that we had view tracking that can figure out how much Sequence I have read? I didn't know about that at all. The thing that pushed me from "I hope this works out for them" to "I will bother with this myself" is the Medium-style individual blog page; that strikes a balance between desiderata in a good place for me, and I occasionally idly wish for a place for thoughts of the kind I would tweet and the size I would tumbl but wrongly themed for my tumblr.

I don't like the font. Serifs on a screen are bad. I can probably fix this client side or get used to it but it stood out to me a surprising amount. But I'm excited overall.

Replies from: SaidAchmiz, quanticle, DragonGod, Alicorn, Kaj_Sotala
comment by Said Achmiz (SaidAchmiz) · 2017-09-16T16:59:23.426Z · LW(p) · GW(p)

I don't like the font. … I can probably fix this client side or get used to it but it stood out to me a surprising amount.

My other comment aside, this is (apart from the general claim) a reasonable user concern. I would recommend (to the LW 2.0 folks) the following simple solution:

  • Have several pre-designed themes (one with a serif font, one with a well-chosen sans font, and then "dark theme" versions of both, at least)
  • Let users select between those themes via their Profile screen

This should satisfy most people, and would still preserve the site's aesthetics.
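
As a rough illustration of how lightweight such theming can be, here is a sketch using CSS custom properties; the theme names, fonts, and colors are placeholders I made up, not a proposal for the actual palette:

    const themes: Record<string, Record<string, string>> = {
      "serif-light": { "--body-font": "Georgia, serif",        "--bg": "#ffffff", "--fg": "#222222" },
      "sans-light":  { "--body-font": "Helvetica, sans-serif", "--bg": "#ffffff", "--fg": "#222222" },
      "serif-dark":  { "--body-font": "Georgia, serif",        "--bg": "#1b1b1b", "--fg": "#dddddd" },
      "sans-dark":   { "--body-font": "Helvetica, sans-serif", "--bg": "#1b1b1b", "--fg": "#dddddd" },
    };

    // Apply a theme by setting CSS variables on the document root, and
    // remember the user's choice for future visits.
    function applyTheme(name: string): void {
      const theme = themes[name];
      if (!theme) return;
      for (const [prop, value] of Object.entries(theme)) {
        document.documentElement.style.setProperty(prop, value);
      }
      localStorage.setItem("preferredTheme", name);
    }

Because the page styles would reference only the variables, switching themes never touches the markup, which keeps the maintenance burden low.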

Replies from: Habryka
comment by Habryka · 2017-09-16T23:20:43.912Z · LW(p) · GW(p)

I am slightly hesitant to force authors to think about how their posts will look in different fonts and different styles. While I don't expect this to be a problem most of the time, there are posts that I write where the font choice would matter for how the content comes across.

Medium allows the writer to choose between a sans-serif and a serif font, which I like a bit more, but which I expect would not really satisfy Alicorn's preferences.

Maintaining multiple themes also adds a lot of design constraints and complexity to updating various parts of the page. The width of a button might change with different fonts, and depending on the implementation, you might end up needing to add special cases for each theme choice, which I would really prefer to avoid.

Overall, my hypothesis is that Alicorn might not dislike serif fonts in general, but might be unhappy about our specific choice of serif fonts, which is indeed very serify. I would be curious whether she also has a similar reaction to the default Medium font, for example as displayed in this post: https://medium.com/@pshrmn/a-simple-react-router-v4-tutorial-7f23ff27adf

Replies from: SaidAchmiz, Alicorn, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-16T23:43:06.127Z · LW(p) · GW(p)

As with many such things, there are standard, canonical solutions to your concerns.

In this case, the answer is "select pairs/sets of fonts that are specifically designed to have the same width in both the serif and the sans variants". There are many such "font superfamilies". If you'd like, I can draw up a list of recommendations. (It would be helpful if you could let me know your constraints w.r.t. licensing and budget.)

Theme variants do not have to be comprehensive redesigns. It is eminently possible to design a set of themes that will not lead to the content being perceived very differently depending on the active theme.

P.S.:

Overall, my hypothesis is that Alicorn might not dislike serif fonts in general, but might be unhappy about our specific choice of serif fonts, which is indeed very serify.

I suspect the distinction you're looking for, here, is between transitional serifs (of which Charter, the Medium font, is one, although it's also got slab-serif elements) and the quite different old-style serifs (of which ET Book, the current LW 2.0 font, is one). (There are also other differences, orthogonal to that distinction—such as ET Book's considerably smaller x-height—which also affect readability.)

Alicorn, if you're reading this, I wonder what your reaction is to the font used on this website:

https://www.readthesequences.com

P.P.S.: It is also possible that the off-black text color is negatively impacting readability! (Especially since it can interact in a somewhat unfortunate manner with certain text rendering engines.)

Alicorn, what OS and browser are you viewing the LW 2.0 site on?

Replies from: Alicorn
comment by Alicorn · 2017-09-17T01:19:57.280Z · LW(p) · GW(p)

I do not like the readthesequences font. It feels like I'm back in grad school and also reading is suddenly harder.

I'm on a Mac 'fox.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-17T02:37:50.031Z · LW(p) · GW(p)

Ok, thanks!

FYI, your assessment is in the extreme minority; most people who have seen that site have responded very positively to the font choice (and the typography in general). This suggests that your preferences are unusual, in this sphere.

I say this, not to suggest that your preference / reaction is somehow "wrong" (that would be silly!), but a) to point out the danger in generalizing from one's own example (typical mind blah blah), and b) to underscore the importance of user choice and customization options!

(The rest of this response is not specifically for Alicorn, but is re: this whole comment thread.)

This is still a gold standard of UX design: sane defaults plus good[1] customizability.

[1] "Good" here means:

  • comprehensive
  • intuitive
  • non-overwhelming (i.e. layered)

Note, these are ideals, not basic requirements; every step we take toward the ideal is a good step. So by no means should you (the designer/developer) ever feel like "comprehensive customizability is an unreachable goal; there's no reason to bother, since Doing It Right™ is too much effort"! So in this case, just offering a couple of themes, which are basic variations on each other (different-but-matching font choices, a different color scheme), is already a great thing and will greatly improve the user experience.

comment by Alicorn · 2017-09-17T01:18:39.157Z · LW(p) · GW(p)

The Medium font is much less bad but still not great.

comment by Said Achmiz (SaidAchmiz) · 2017-09-18T08:19:39.962Z · LW(p) · GW(p)

Update: I have a recommendation for you!

Take a look at this page: https://wiki.obormot.net/Reference/MerriweatherFontsDemo

The Merriweather and Merriweather Sans fonts (available for free via Google Fonts) are, as you can see, designed to be identical in width, line spacing, etc. They are quite interchangeable, in body text, UI, etc. Both are quite readable, and aesthetically pleasing.

(As a bonus, active on that page is a tiny bit of JavaScript trickery that sets different body text font weights depending on whether the client is running on a Mac, Windows, or Linux platform, to ensure that everyone sees basically the same thing, and enjoys equally good text readability, despite differences in text rendering engines. Take a look at the page source to see how it's done!)
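
For those curious, the trick presumably looks something like the sketch below. This is a guessed reconstruction with made-up weights; the detection logic and values on that page will differ, so view its source for the real thing:

    // Detect the client OS and nudge the body font weight so text looks
    // similar despite differences in platform text rendering engines.
    function adjustFontWeightForPlatform(): void {
      const ua = navigator.userAgent;
      let weight = "400"; // default, e.g. Linux
      if (ua.includes("Mac OS X")) {
        weight = "300"; // macOS tends to render text heavier
      } else if (ua.includes("Windows")) {
        weight = "400"; // Windows/ClearType tends to render text lighter
      }
      document.body.style.fontWeight = weight;
    }

    adjustFontWeightForPlatform();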

UPDATE 2: A couple of mockups (linking to individual images because Imgur's zoom sucks otherwise). Be sure to zoom in on each image (i.e. view at full magnification):

LW 2.0 with Merriweather:

LW 2.0 with Merriweather Sans:

Replies from: Viliam
comment by Viliam · 2017-09-19T22:39:33.502Z · LW(p) · GW(p)

When I compare the two examples, the second one feels "clear", while the first one feels "smudgy". I have to focus more to read the first one.

EDIT: Windows 10, Firefox 55.0.3, monitor 1920x1080 px

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-19T23:00:54.433Z · LW(p) · GW(p)

1. OS (and version), browser (and version), device/display, etc.?

(General note: folks, please, please include this information whenever you say anything to a web designer/developer/etc. about how a website looks or works for you!!)

2. Great! If one of them feels clear, then this goes to show exactly what I was saying: user choice is good.

comment by quanticle · 2017-09-28T21:31:38.332Z · LW(p) · GW(p)

In the age of the Internet and in the company of nonconformists, it does get a little tiring reading the 451st public email from someone saying that the Common Project isn't worth their resources until the website has a sans-serif font.

Eliezer Yudkowsky

Replies from: gjm
comment by gjm · 2017-09-29T13:54:07.194Z · LW(p) · GW(p)

It may be worth saying explicitly that this is from 2009 and therefore can't be talking about responses to "LW 2.0".

comment by DragonGod · 2017-09-16T15:45:53.557Z · LW(p) · GW(p)

Agreed; generally, it seems that sans serif is for screens, and serif is for print.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-16T16:53:35.911Z · LW(p) · GW(p)

This is old "received wisdom", and hasn't been the case for quite a while.

Folks, this is what people mean when they talk about LessWrong ignoring the knowledge of experts. Here's a piece of "knowledge" about typography and web design, that is repeated unreflectively, without any consideration of whether there exists some relevant body of domain knowledge (and people with that domain knowledge).

What do the domain experts have to say? Let's look:

But this domain knowledge has not, apparently, reached LessWrong; here, "Serifs on a screen are bad" and "sans serif are for screens, and serif is for print" is still true.

And now we have two people agreeing with each other about it. So, what? Does that make it more true? What if 20 people upvoted those comments, and five more other LessWrongers posted in agreement? Would that make it more true? What amount of karma and local agreement does it take to get to the truth?

Replies from: gjm, DragonGod
comment by gjm · 2017-09-19T11:53:04.624Z · LW(p) · GW(p)

Here's what I think is the conventional wisdom about serif/sans-serif; I don't think it is in any way contradicted by the material you've linked to.

Text that is small when measured in display pixels is generally harder to read fluently when set in a typeface with serifs.

Only interested in readers with lovely high-DPI screens? Go ahead, use serifs everywhere; it'll probably be fine. Writing a headline, or a splash screen with like 20 words on it? Use serifs if they create the effect you want; the text won't be small enough, nor will there be enough of it in a block, for there to be a problem.

But if you are choosing a typeface for substantial chunks of text that might be read on a not-so-great screen, you will likely get better results with a sans-serif typeface.

So, what about those domain experts? Jakob Nielsen is only addressing how things look on "decent computer screens with pixel densities of 220 PPI or more". Design Shack article 1 says that a blanket prohibition on serifed typefaces on screens is silly, which it is. But look at the two screenshots offered as counterexamples to "Only use serifs in print". One has a total of seven words in it. The other has a headline in a typeface with serifs ... followed by a paragraph of sans-serif text. Design Shack article 2 says that sans-serif typefaces are better "for low-resolution displays", though it's not perfectly clear what they count as low-resolution. The Quora question has a bunch of answers saying different things, mostly not from "domain experts" in any strong sense.

I like seriffed typefaces. In a book, sans-serif is generally hideous and offputting to me. On my phone or my laptop, both of which have nice high-resolution displays, Lesser Wrong content with serifs looks just fine. (Better than if it were set sans-serif? Dunno.) On the desktop machine I'm using right now, though, it's ugly and it feels more effortful to read than the corresponding thing on, say, Less Wrong. For me, that is.

now we have two people agreeing [...] Does that make it more true?

Yes. More precisely: the proposition we should actually care about here is not some broad generality about serif versus sans-serif typefaces, but something like "Users of Lesser Wrong will, on the whole, find it a bit easier on the eyes if content is generally set in sans-serif typefaces". Consider the limiting case where every LW user looks at the site and says "ugh, don't like that font, the serifs make it harder for me to read". Even if all those users are shockingly ignorant of typography, this is a case where if no one likes it, then it is ipso facto bad.

Of course we don't have (anything like) the entire LW community saying in chorus how much they dislike those serifs. But yes, when what matters is the experience of a particular group of people, each individual person who finds a thing bad does contribute to its badness, and each individual person who says it's bad does provide evidence for its badness.

What amount of karma and local agreement does it take to get to the truth?

Karma is relevant here only as a proxy for participation. A crude answer to this question is: enough to constitute a majority of users, weighted by frequency of use.

In case I haven't made it clear enough yet, I am not arguing that LW people are always right, or that high-karma LW people are always right. I am arguing that when the thing at issue is the experience of LW people, the experiences of LW people should not be dismissed. And I am arguing that on the more general question (are typefaces with serifs a bad idea on the web?) the simple answer "no; that's an outdated bit of bogus conventional wisdom" is in fact just as wrong as the simple answer "yes; everyone knows that".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-19T17:32:13.746Z · LW(p) · GW(p)

And I am arguing that on the more general question (are typefaces with serifs a bad idea on the web?) the simple answer "no; that's an outdated bit of bogus conventional wisdom" is in fact just as wrong as the simple answer "yes; everyone knows that".

Disagree. (Keep reading for details.)

But if you are choosing a typeface for substantial chunks of text that might be read on a not-so-great screen, you will likely get better results with a sans-serif typeface.

This is still incorrect, because serif readability is superior to that of sans-serif, and see below for the matter of "not-so-great screens".

Screen DPI

Given the pixel resolution per character you need to make serifs work, they are inferior on the screen… if you have a 72ppi (or less) display.

Now, such displays exist; here's one. They are quite rare, though, and designed for entertainment, not work. The idea that any appreciable percentage of LW users have such hardware seems implausible.

On a ~96ppi display (such as this nearly decade-old cheap flat-panel I'm using right now, or indeed any display made in the past 15+ years), the apparent (angular, a.k.a. "CSS reference pixel") font size that you need to bring out the superiority of serif typefaces is no larger than the minimum size called for by other accessibility guidelines.

“The LW 2.0 font is less readable”

On the desktop machine I'm using right now, though, it's ugly and it feels more effortful to read than the corresponding thing on, say, Less Wrong. For me, that is.

1. What OS is this on? If the answer is "Linux" or "Windows", then part of the answer is "text rendering works very differently on those operating systems, and you have to a) test your site on those systems, b) make typographic choices that compensate, and c) take specific actions to ensure that the user experience is adjusted for each client platform". I of course can't speak to (a), but (b) and (c) are not in evidence here.

2. The body text font size on LW 2.0 is too small (especially for that font), period. Again I refer you to https://www.readthesequences.com/Biases-An-Introduction; the body text is at 21px there. I consider that to be a minimum (adjusted for the particular font); whereas LW 2.0 (with a similar type of font) is at 16px. Yes, it looks tiny and hard to read. (But have you tried zooming in? What happens then?)

3. Other issues, like color (#444, in this case) affecting text rendering. I speak of this in my other comments.

“Consensus matters locally”

Consider the limiting case where every LW user looks at the site and says "ugh, don't like that font, the serifs make it harder for me to read". Even if all those users are shockingly ignorant of typography, this is a case where if no one likes it, then it is ipso facto bad.

If every LW user looks at the site and says that, then we still can't conclude anything about serifs from that, because if all of those users have not the smallest ounce of typography or design expertise, then they don't know what the heck they like or dislike, serif-wise.

Let me be clear: I'm not saying that people can't tell whether they like or dislike a particular thing. I am saying that without domain knowledge, people can't generalize their preferences. Ok, so some text on their screen is hard for them to read. What's making it so? The fact that a font has serifs? Or maybe just that it's a particular kind of serif font? Or the font weight? Or the weight grade? Or the shape of the letterforms (how open the curves are, for instance, or the weight variability, perhaps due to which "optical size" is being used)? Or the color? Or the subpixel rendering settings? Or the kerning? Or the line spacing? Or the line length? Or the text-rendering CSS property setting? If you (the hypothetical-user you) don't know what most or all of those things are, then sure your preferences are real, but your opinion (generalized from those preferences) is worth jack squat.

In other words: "if no one likes it, then it is ipso facto bad"—yes, but what, exactly, is "it"? You're equivocating between two meanings, in that sentence! So, this is true:

“If no one likes [this particular thing], then [this particular thing] is bad.”

Yes. Granted. But you seem to want to say something like:

“If no one likes [this particular thing], then [all things in some class it belongs to] are bad.”

But any particular thing belongs to many different classes, which intersect at the point defined by that thing! Obviously not all those classes are ipso facto bad, so which one(s) are we talking about?? We have no idea!

I am arguing that when the thing at issue is the experience of LW people, the experiences of LW people should not be dismissed.

Dismissed? No. Taken at anything even remotely resembling face value? Also no.

Come on, folks. This is just a rehash of the "people don't have direct access to their mental experience" debate. You know all of this already. Why suddenly forget it when it comes up in a new domain?

Replies from: gjm
comment by gjm · 2017-09-19T22:22:23.788Z · LW(p) · GW(p)

serif readability is superior to that of sans-serif

Do you have actual solid evidence for that? I'm guessing that if you did you'd have given it already in your earlier comments, and you haven't; but who knows? (One of the answers to that Quora question mentions a study that found a small advantage for serifs. It also remarks that the difference was not statistically significant, and calls into question the choice of typefaces used, and says it's not a very solid study. So I hope you have something better than that.)

On a ~96ppi display [...] the apparent [...] font size that you need to bring out the superiority of serif typefaces is no larger than the minimum size called for by other accessibility guidelines.

Again, I would be interested in more information about what evidence you have about the font size required "to bring out the superiority of serif typefaces". For the avoidance of doubt, that isn't a coded way of saying "I bet you're wrong"; I would just like to know what's known about this and how solidly. I do not have the impression that these issues are as settled as you are making them sound; but I may just be unaware of the relevant work.

What OS is this on?

One instance is Firefox on Windows; the other is Firefox on FreeBSD (which I expect is largely indistinguishable in this context from Firefox on Linux). I concur with your guess that the people responsible for LesserWrong have not done thorough testing of their site on a wide variety of platforms, though I would be surprised if no one involved uses either Windows or Linux.

Yes, it looks tiny and hard to read.

LesserWrong has what looks to me like a weird multiplicity of different text sizes. Some of the text is clearly too small (personally I like small text, but I am aware that my taste is not universally shared). However -- and I must stress again that here I am merely describing my own experience of the site -- if I go to, say, this post on the Unix box at my desk right now then (1) the size of the type at my typical viewing distance is about the same as that of a decently typeset paperback book at its typical viewing distance, and (2) I find the text ugly and harder to read than it should be because various features of the typeface (not only the serifs) are poorly represented -- for me, on that monitor, after rendering by my particular machine -- at the available resolution. (The text is very similar in size and appearance to that on readthesequences.com; LW2.0 appears to be using -- for me, etc., etc. -- ETBembo Roman LF at 19.2px actual size, whereas RTS is using GaramondPrmrPro at 21px actual size. ETBembo has a bigger x-height relative to its nominal size and most lowercase letters are almost exactly the same size in each.)

Other issues, like color

Yup, agreed. But I would say the same about readthesequences.com even though its body text is black.

If every LW user looks at the site and says that, then we can't still conclude anything about serifs from that,

I agree. (Though it would, despite their hypothetical ignorance, be evidence. Someone who says "this text is hard to read because of the serifs" may be wrong, but I claim they are more likely to say it in the face of text that's hard to read because of its serifs than of text that's hard to read for some other reason.)

Perhaps I left too much implicit in my earlier comment, so let me try to remedy that. I firmly agree that the mere fact that some LW users believe some proposition about serifs in webpage text is perfectly compatible with the falsehood of that proposition. Even if it's quite a lot of LW users. Even if they have a lot of karma.

But the thing that actually matters here is not the general proposition about serifs, but a more specific question about the type used on LesserWrong. I wasn't equivocating between this and the general claim about serifs, nor was I unaware of the difference; I was deliberately attempting to redirect discussion to the more relevant point.

(Not that the general question isn't interesting; it is.)

[EDITED to add:] Of course much of what I wrote before was about the general proposition. Whether I agree with you about that depends on exactly what version of the general proposition we're discussing -- I take it you would agree with me that many are possible, and some might be true while others are false. In particular, I am somewhat willing to defend the claim that there are otherwise reasonable choices of text size for which typical seriffed typefaces make for a worse reading experience than typical sans-serif typefaces for people using 100ish-ppi displays, and that while this can be mitigated somewhat by very careful choice of serif typefaces and careful working around the quirks of the different text rendering users on different platforms will experience, selecting sans-serif typefaces instead may well be the better option. I am also willing to be convinced to stop defending that claim, if there is really good evidence against it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-19T23:28:26.641Z · LW(p) · GW(p)

Do you have actual solid evidence for that?

Not close at hand. You may reasonably consider my claim to be undefended for now. When I have the time, I'll try to put together a bit of a lit survey on this topic.

LesserWrong has what looks to me like a weird multiplicity of different text sizes. [...] LW2.0 appears to be using -- for me, etc., etc. -- ETBembo Roman LF at 19.2px actual size, whereas RTS is using GaramondPrmrPro at 21px actual size.

Right you are. The 16px size is what I saw on the front page.

Even on my machines, ET Book (source) does not seem to render as well as Garamond Premier Pro (in a browser).

Though it would, despite their hypothetical ignorance, be evidence. Someone who says "this text is hard to read because of the serifs" may be wrong, but I claim they are more likely to say it in the face of text that's hard to read because of its serifs than of text that's hard to read for some other reason.

I think this is literally true but relevantly false; specifically, I think this is false once you condition on the cause of the text's unreadability not being some gross and obvious circumstance (like, it's neon purple on a fuchsia background, or it's set at 2px size, etc.)

I think that someone who is ignorant of typography is no more likely to blame serifs in the case of the serifs being to blame than in the case of the text rendering or line length being to blame.

But the thing that actually matters here is not the general proposition about serifs, but a more specific question about the type used on LesserWrong. I wasn't equivocating between this and the general claim about serifs, nor was I unaware of the difference; I was deliberately attempting to redirect discussion to the more relevant point.

Noted. I was responding to the general claim.

As to the specific question, the matter of serifs is moot, because, as with all specific design decisions, it should be comprehensively user-tested and environment-tested, and as much user choice should be offered as possible.

Of course much of what I wrote before was about the general proposition. Whether I agree with you about that depends on exactly what version of the general proposition we're discussing -- I take it you would agree with me that many are possible, and some might be true while others are false.

Indeed.

In particular, I am somewhat willing to defend the claim that there are otherwise reasonable choices of text size for which typical seriffed typefaces make for a worse reading experience than typical sans-serif typefaces for people using 100ish-ppi displays … I am also willing to be convinced to stop defending that claim, if there is really good evidence against it.

Nope, the claim is reasonable. Websites where information density is more important than long-form readability, or where text comes in small chunks and a user is expected not to read straight through but to extract those chunks, may be like this. For that use case, a smaller point size of "body" text may be called for, and a well-chosen sans font may be a better fit.

LessWrong is not such a website, though a hypothetical LessWrong community wiki may be (or it may not be; it depends on what sort of content it mostly contains).

(Aside: I somewhat object to speaking of "typical" serif typefaces, because that's hard to resolve nowadays. I suspect that you know that, and I know that, but in a public discussion it pays to be careful with language like this.)

However:

very careful choice of […] typefaces and careful working around the quirks of the different text rendering users on different platforms will experience

… is always advisable, regardless of typographic or other design choices.

comment by DragonGod · 2017-09-16T17:28:45.236Z · LW(p) · GW(p)

I have no knowledge of typography, but was taught in university that sans serif fonts should be used for screens, and serif for print; it is very possible that my lecturers were wrong.

Would that make it more true?

No.

What amount of karma and local agreement does it take to get to the truth?

None. The truth is orthogonal to the level of local agreement. That said, local agreement is Bayesian evidence for the veracity of a proposition.

Replies from: quanticle, SaidAchmiz, SaidAchmiz
comment by quanticle · 2017-09-17T15:56:50.402Z · LW(p) · GW(p)

Mathematically, if the truth is orthogonal to the level of local agreement, local agreement cannot constitute Bayesian evidence for the veracity of the proposition. If we're taking local agreement as Bayesian evidence for the veracity of the proposition, we're assuming the veracity of the proposition and local agreement are correlated, which would contradict orthogonality.
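
To state that precisely (standard probability; write $A$ for "local agreement" and $T$ for "the proposition is true"): if $A$ and $T$ were genuinely independent, then

    \[
    P(T \mid A) = \frac{P(A \mid T)\,P(T)}{P(A)} = \frac{P(A)\,P(T)}{P(A)} = P(T),
    \]

so observing agreement would leave the probability of truth unchanged; genuine independence rules out evidential value.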

Replies from: DragonGod
comment by DragonGod · 2017-09-17T16:36:00.363Z · LW(p) · GW(p)

Either I don't know what Bayesian evidence is, or you don't.

My understanding is:

An outcome is Bayesian evidence for a proposition if the outcome is more likely to occur when the proposition is true than when it is false.
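
In symbols (standard usage, assuming $0 < P(H) < 1$): an observation $E$ is Bayesian evidence for a hypothesis $H$ exactly when

    \[
    P(E \mid H) > P(E \mid \neg H)
    \quad\Longleftrightarrow\quad
    P(H \mid E) > P(H).
    \]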

Based on that understanding of Bayesian evidence, I argue that Lesswrong consensus on a proposition is Bayesian evidence for that proposition. Lesswrongers have better than average epistemic hygiene and pursue true beliefs. You expect the average Lesswronger to have a higher percentage of true beliefs than a lay person. Furthermore, if a belief is consensus among the Lesswrong community, then it is more likely to be true. A single Lesswronger may have some false beliefs, but the set of false beliefs that would be shared by the overwhelming majority of Lesswrongers would be very small.

Replies from: quanticle
comment by quanticle · 2017-09-17T16:54:44.617Z · LW(p) · GW(p)

An outcome is Bayesian evidence for a proposition, if the outcome is more likely to occur if the proposition is true, than vice versa.

That assumes that there is a statistical correlation between the two, no? If the two are orthogonal to each other, they're statistically uncorrelated, by definition.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T18:17:06.834Z · LW(p) · GW(p)
  1. http://lesswrong.com/lw/nz/arguing_by_definition/
  2. The local agreement (on Lesswrong) on a proposition is not independent of the veracity of the proposition. To claim otherwise is to claim that Lesswrongers form their beliefs through a process that is no better than random guessing. That's a very strong claim to make, and extraordinary claims require extraordinary evidence.
Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-17T19:07:16.320Z · LW(p) · GW(p)

"The local agreement (on Lesswrong) on a proposition is not independent of the veracity of the proposition."

Sure, and that is equally true of indefinitely many other populations in the world and the whole population as well. It would take an argument to establish that LW local agreement is better than any particular one of those populations.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T19:58:15.777Z · LW(p) · GW(p)

Sure,

Then we are in agreement.

It would take an argument to establish that LW local agreement is better than any particular one of those populations.

As for Lesswrong vs the general population, I point to the difference in epistemic hygiene between the two groups.

comment by Said Achmiz (SaidAchmiz) · 2017-09-16T18:48:57.937Z · LW(p) · GW(p)

… it is very possible that my lecturers were wrong.

They were lecturers in what subject? Design / typography / etc.? Or, some unrelated subject?

Replies from: DragonGod
comment by DragonGod · 2017-09-16T20:23:33.713Z · LW(p) · GW(p)

Unrelated subjects (insofar as webdesign is classified as unrelated).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-16T23:26:27.435Z · LW(p) · GW(p)

Well, in that case, what I conjecture is simply that either this (your university classes) took place a while ago, or your lecturers formed their opinions a while ago and didn't keep up with developments, or both.

"Use sans-serif fonts for screen" made sense. Once. When most people had 72ppi displays (if not lower), and no anti-aliasing, or subpixel rendering.

None of that has been true for many, many years.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T01:00:46.844Z · LW(p) · GW(p)

I am currently in my fourth year.

or your lecturers formed their opinions a while ago and didn't keep up with developments

I have expressed this sentiment myself, so it is plausible.

comment by Said Achmiz (SaidAchmiz) · 2017-09-16T18:33:40.882Z · LW(p) · GW(p)

… local agreement is Bayesian evidence for the veracity of a proposition.

Why? Are people around here more likely to agree with true propositions than false ones? This might be true in general, but is it true in domains where there exists non-trivial expertise? That's not obvious to me at all. What makes you think so?

Replies from: DragonGod
comment by DragonGod · 2017-09-16T20:26:12.807Z · LW(p) · GW(p)

Are people around here more likely to agree with true propositions than false ones? This might be true in general,

I was generalising from the above. I expect the epistemic hygiene on LW to be significantly higher than the norm.

For any belief b, let Pr(b) be the probability that b is true. For all b that are consensus on Lesswrong (i.e. more than some k% of Lesswrongers believe b), I hold that Pr(b) > 0.50.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-16T23:28:43.343Z · LW(p) · GW(p)

But this is an entirely unwarranted generalization!

Broad concepts like "the epistemic hygiene on LW [is] significantly higher than the norm" simply don't suffice to conclude that LessWrongers are likely to have a finger on the pulse of arbitrary domains of knowledge/expertise, nor that LessWrongers have any kind of healthy respect for expertise—especially since, in the latter case, we know that they in fact do not.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T01:03:26.523Z · LW(p) · GW(p)

simply don't suffice to conclude that LessWrongers are likely to have a finger on the pulse of arbitrary domains of knowledge/expertise

Do you suggest that the consensus on Lesswrong about arbitrary domains is likely to be true with P <= 0.5?
As long as Pr(b | Lesswrong consensus) > 0.5, Lesswrong consensus remains Bayesian evidence for truth.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-17T01:14:31.586Z · LW(p) · GW(p)

Do you suggest that the consensus on Lesswrong about arbitrary domains is likely to be true with P <= 0.5?

For some domains, sure. For others, not.

We have no real reason to expect any particular likelihood ratio here, so should probably default to P = 0.5.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T01:37:12.385Z · LW(p) · GW(p)

I expect that for most domains (possibly all), Lesswrong consensus is more likely to be right than wrong. I haven't yet seen reason to believe otherwise; (it seems you have?).

Replies from: entirelyuseless, ingres, quanticle
comment by entirelyuseless · 2017-09-17T19:30:58.507Z · LW(p) · GW(p)

Again, there is nothing special about this. Given that I believe something, even without any consensus at all, I think my belief is more likely to be true than false. I expect this to apply to all domains, even ones that I have not studied. If I thought it did not apply to some domains, then I should reverse all of my beliefs about that domain, and then I would expect it to apply.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T20:00:56.336Z · LW(p) · GW(p)

I never suggested that there was anything extraordinary about my claim (au contraire, it was quite intuitive). I do not think we disagree.

comment by namespace (ingres) · 2017-09-17T15:42:37.774Z · LW(p) · GW(p)

Just so we're clear here:

Profession (Results from 2016 LessWrong Survey):

    Profession                                       Δ        N    Share
    Art                                           +0.800%    51    2.300%
    Biology                                       +0.300%    49    2.200%
    Business                                      -0.800%    72    3.200%
    Computers (AI)                                +0.700%    79    3.500%
    Computers (other academic, computer science)  -0.100%   156    7.000%
    Computers (practical)                         -1.200%   681   30.500%
    Engineering                                   +0.600%   150    6.700%
    Finance / Economics                           +0.500%   116    5.200%
    Law                                           -0.300%    50    2.200%
    Mathematics                                   -1.500%   147    6.600%
    Medicine                                      +0.100%    49    2.200%
    Neuroscience                                  +0.100%    28    1.300%
    Philosophy                                     0.000%    54    2.400%
    Physics                                       -0.200%    91    4.100%
    Psychology                                     0.000%    48    2.100%
    Other                                         +2.199%   277   12.399%
    Other "hard science"                          -0.500%    26    1.200%
    Other "social science"                        -0.200%    48    2.100%

The LessWrong consensus is massively overweighted in one particular field of expertise (computing) with some marginal commentators who happen to do other things.

As for evidence to believe otherwise, how about all of recorded human history? When has there ever been a group whose consensus was more likely to be right than wrong in all domains of human endeavor? What ludicrous hubris; the sheer arrogance on display in this comment cowed me, and I briefly considered whether I'm hanging out in the right place by posting here.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T16:41:32.948Z · LW(p) · GW(p)

Let B be the set of beliefs that are consensus among the LW community. Let b be any arbitrary belief. Let Pr(b) be the probability that b is true. Let (b|B) denote the event that b is a member of B.

I argue that Pr(b|B) (Probability that b is true given that b is a member of B) is greater than 0.5; how is that hubris?

If Lesswrongers are ignorant of a particular field, then I don't expect a consensus to form. Sure, we may have some wrong beliefs that are consensus, but the fraction of consensus beliefs that are right is greater than 1/2.

comment by quanticle · 2017-09-17T15:38:20.150Z · LW(p) · GW(p)

This entire thread is reason to believe otherwise. We have the LessWrong consensus (sans-serif fonts are easier to read than serif fonts). We have a domain expert posting evidence to the contrary. And we have LessWrong continuing with its priors, because consensus trumps expertise.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T16:45:05.569Z · LW(p) · GW(p)

I'm not continuing with my priors for one (where do you get that Lesswrong is continuing with its priors?).

It is not clear to me that "sans-serif fonts are easier to read than serif fonts" was ever, or is, a consensus here. As far as I can tell, fewer than ten people expressed that opinion (and 10 is a very small sample).

One example (if this even was one) wouldn't detract from my point, though. My point is that Lesswrong consensus is better than random guessing.

comment by Alicorn · 2017-09-21T22:42:53.065Z · LW(p) · GW(p)

Now that I'm looking at it more closely: Quoted text in comments does not seem sufficiently set off. It's slightly bigger and indented but it would be easy on casual inspection to mistake it for part of the same comment.

comment by Kaj_Sotala · 2017-09-21T15:42:09.982Z · LW(p) · GW(p)

I think the font feels okay (though not great) when it's "normal" writing, but text in italics gets hard to read.

comment by cousin_it · 2017-09-15T10:30:09.363Z · LW(p) · GW(p)

What will happen with existing LW posts and comments? I feel strongly that they should all stay accessible at their old URLs (though perhaps with new design).

Replies from: Habryka
comment by Habryka · 2017-09-15T11:52:19.674Z · LW(p) · GW(p)

All old links will continue working. I've put quite a bit of effort into that, and this was one of the basic design requirements we built the site around.

Replies from: Vaniver
comment by Vaniver · 2017-09-15T21:11:38.206Z · LW(p) · GW(p)

"Basic design requirements" seems like it's underselling it a bit; this was Rule 0 that would instantly torpedo any plan where it wasn't possible.

It's also worth pointing out that we've already done one DB import (lesserwrong.com has all the old posts/comments/etc. as of May of this year) and will do another DB import of everything that's happened on LW since then, so that LW moving forward will have everything from the main site and the beta branch.

Replies from: Jiro
comment by Jiro · 2017-09-19T18:21:53.955Z · LW(p) · GW(p)

I just tried lesserwrong.com. Neither IE nor Firefox would do anything when I clicked "login". I had to use Chrome. Even using Chrome, I tried to sign in and had no feedback when I used a bad user and password, making it unclear whether the values were even submitted to the server.

Replies from: ChristianKl, Elo
comment by ChristianKl · 2017-09-20T16:22:57.283Z · LW(p) · GW(p)

That sounds like it just isn't a development priority to give feedback when there's a bad user/password.

comment by Elo · 2017-09-19T19:16:01.977Z · LW(p) · GW(p)

I believe that is on purpose. Login is not open yet.

Replies from: Jiro
comment by Jiro · 2017-09-19T20:34:04.314Z · LW(p) · GW(p)

People are clearly posting things there that postdate the DB import, so they must be logging in. Also, that doesn't explain it working better on Chrome than on other browsers.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-19T20:59:28.446Z · LW(p) · GW(p)

A private beta has been ongoing, clearly.

Replies from: Jiro
comment by Jiro · 2017-09-20T16:02:41.247Z · LW(p) · GW(p)

That can't explain it, unless the private beta is accessed by going somewhere other than lesserwrong.com. The site isn't going to know that someone is a participant in the private beta until they've logged in. And the problems I described happen prior to logging in.

Replies from: Raemon, SaidAchmiz
comment by Raemon · 2017-09-20T19:00:56.894Z · LW(p) · GW(p)

Not 100% sure I understand your description, but currently the expected behavior when you attempt to log in (if you're not already part of the beta) is that nothing happens when you click "submit" (although there'll be an error message in the browser console).

This is simply because we haven't gotten to that yet, but it should be something we make sure to fix before the open-beta launch later today so people have a clear sense of whether it's working.

Replies from: Jiro
comment by Jiro · 2017-09-20T19:35:05.618Z · LW(p) · GW(p)

And the expected behavior when using IE or Firefox is that you can't even get to the login screen? I find that unlikely.

Replies from: Benito
comment by Ben Pace (Benito) · 2017-09-20T19:55:24.098Z · LW(p) · GW(p)

Hi! I just checked on Firefox, and the login dialog box opened for me. If you still have this issue, next time you try to log in (open beta will happen by 4pm today) please ping us in the intercom (bottom right-hand corner of the lesserwrong page), and let us know what browser version you're using.

If your intercom doesn't work, let me know here.

Replies from: Jiro
comment by Jiro · 2017-09-21T17:14:08.403Z · LW(p) · GW(p)

It seems to have been a cookie problem so I got it working.

However, I ended up with two logins here. One I never used much, and the other is this one. Naturally, lesserwrong decided that the one that it was going to associate with my email address is the one that I never used much.

I'd like to get "Jiro" on lesserwrong, but I can't, since changing password is a per-email thing and it changes the password of the other login. Could you please fix this?

Replies from: Habryka
comment by Habryka · 2017-09-23T00:01:47.301Z · LW(p) · GW(p)

Sure, happy to change the email address associated with your account!

Just send me a pm with the email you want it changed to, and I will make the modification.

comment by Said Achmiz (SaidAchmiz) · 2017-09-20T17:58:38.826Z · LW(p) · GW(p)

Good point!

In that case, I'm not sure what the problem is (though I, too, see a similar problem to yours, now that I've tried it in a different browser (Firefox 55.0.3, Mac OS 10.9) than my usual one (Chrome)). I suspect, as another commenter said, that login just isn't fully developed yet.

comment by John_Maxwell (John_Maxwell_IV) · 2017-09-15T09:00:11.389Z · LW(p) · GW(p)

Sounds great!

Is there anything important I missed

This analysis found that LW's most important issue is lack of content. I think there are two models that are most important here.

There's the incentives model: making it so good writers have a positive hedonic expectation for creating content. There's a sense in which an intellectual community online is much more fragile than an intellectual community in academia: academic communities can offer funding, PhDs, etc. whereas internet discussion is largely motivated by pleasure that's intrinsic to the activity. As a concrete example, the way Facebook lets you see the name of each person who liked your post is good, because then you can take pleasure in each specific person who liked it, instead of just knowing that X strangers somewhere on the internet liked it. Contrast with academia, which plods on despite frequently being hellish.

And then there's the chicken-and-egg model. Writers go where the readers are and readers go where the writers are. Interestingly, sometimes just 1 great writer can solve this problem and bootstrap a community: both Eliezer and Yvain managed to create communities around their writing single-handedly.

The models are intertwined, because having readers is a powerful incentive for writers.

My sense is that LW currently performs poorly according to both models, and although there's a lot of great stuff here, it's not clear to me that any of the proposed actions are going to attack either of these issues head on.

Replies from: Habryka, Viliam
comment by Habryka · 2017-09-16T23:07:09.368Z · LW(p) · GW(p)

Thanks! :)

I agree with the content issue, and ultimately having good content on the page is one of the primary goals that guided all the modeling in the post. Good content is downstream from having a functioning platform and an active community that attracts interesting people and has some pointers on how to solve interesting problems.

I like your two models. Let me think about both of them...

The hedonic incentive model is one that I tend to use quite often, especially when it comes to the design of the page, but I didn't go into it much in this post because talking about it would inevitably involve a much larger amount of detail. I've mentioned "making sure things are fun" a few times, but going into the details of how I am planning to achieve this would require talking about the design of buttons, animations, and notification systems, on each of which I could write a whole separate 8000-word post filled with my own thoughts. That said, it is also a ton of fun for me, and if anyone ever wants to discuss the details of any design decision on the page, I am super happy to do that.

I do feel that there is still a higher level of abstraction in the hedonic incentives model that in game design would be referred to as "the core game loop" or "the core reward loop". What is the basic sequence of actions that a user executes when they come to your page that reliably results in positive hedonic reward? (On Facebook there are a few of those, but the most dominant one is "go to frontpage, see you have new notifications, click notifications, see that X people have liked your content".) And I don't think I currently have a super clear answer to this. I do feel like I have an answer on a System 1 level, but it isn't something I have spent enough time thinking about or clarified much, and this comment made me realize that this is a thing I want to pay more attention to.

We hope to bootstrap the chicken-and-egg model by allowing people to practically just move their existing blogs to the LessWrong platform, either via RSS imports or by directly using their user-profile as a blog. My current sense is that in the larger rationality diaspora we have a really large amount of content, and so far almost everyone I've talked to seemed very open to having their content mirrored on LessWrong, which makes me optimistic about solving that aspect.
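As a very rough sketch, the import side of that could be as simple as periodically polling each blog's feed for entries we haven't seen yet (a toy illustration, not the actual LW 2.0 importer; feedparser is a real Python library, but the output dictionary shape here is invented):

```python
# Hedged sketch of an RSS crosspost poller; not the real importer.
import feedparser  # third-party library: pip install feedparser

def fetch_new_posts(feed_url, seen_ids):
    """Yield entries from feed_url that haven't been imported yet."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        guid = entry.get("id", entry.get("link"))
        if guid not in seen_ids:
            # Fields below are standard RSS/Atom; a real importer would also
            # sanitize the HTML and map the author to a LessWrong account.
            yield {
                "guid": guid,
                "title": entry.get("title", ""),
                "html": entry.get("summary", ""),
                "source": entry.get("link"),
            }
```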

comment by Viliam · 2017-09-19T22:32:22.571Z · LW(p) · GW(p)

The lack of content is related to the other issues. For example, it greatly reduces my willingness to write a post for LW when I remember that Eugine can single-handedly dominate the whole discussion with his sockpuppets, if he decides to. And I imagine that if Yvain posted his political articles here, he wouldn't like the resulting debate.

comment by NancyLebovitz · 2017-09-15T16:19:49.342Z · LW(p) · GW(p)

Thank you for developing this.

I'm reminded of an annoying feature of LW 1.0. The search function was pretty awful. The results weren't even in reverse chronological order.

I'm not sure how important better search is, but considering your very reasonable emphasis on continuity of discussion, it might matter a lot.

Requiring tags while offering a list of standard tags might also help.

Replies from: Benito, ingres
comment by Ben Pace (Benito) · 2017-09-15T21:38:30.233Z · LW(p) · GW(p)

We all thought search was very important, and so tried to make it very efficient and effective. Try out the search bar on the new site.

Added: I realise that comment links are currently broken - oops! We'll fix that before open beta.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2017-09-16T00:24:28.935Z · LW(p) · GW(p)

I've tried it and it's very fast. I haven't come up with good ideas for testing it yet.

comment by namespace (ingres) · 2017-09-15T21:33:51.163Z · LW(p) · GW(p)

Better search is paramount in my opinion. Part of how academic institutions maintain a shared discussion is through a norm of checking for previous work in a space before embarking on new adventures. Combined with strong indexing, this norm means that things which would otherwise end up as so many forgotten Facebook discussions get many chances to be seen and read by members of the academic community.

http://www.overcomingbias.com/2007/07/blogging-doubts.html

Replies from: Habryka
comment by Habryka · 2017-09-16T23:10:16.738Z · LW(p) · GW(p)

Yeah, we do now have much better word-based search, but also still feel that we want a way to archive content on the site into more hierarchical or tag-based structures. I am very open to suggestions of existing websites that do this well, or maybe even physical library systems that work here.

I've been reading some information architecture textbooks (http://shop.oreilly.com/product/0636920034674.do) on this, but still haven't found a great solution or design pattern that doesn't feel incredibly cumbersome and adds a whole other dimension to the page that users need to navigate.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-17T00:19:42.989Z · LW(p) · GW(p)

… [we] still feel that we want a way to archive content on the site into more hierarchical or tag-based structures. I am very open to suggestions of existing websites that do this well…

This is a slightly odd comment, if only because "hierarchical or tag-based structures" describes almost all extant websites that aggregate / archive / collect content in any way! You would, I think, be somewhat hard-pressed to find a site that does not use either a hierarchical or a tag-based structure (or, indeed, both!).

But here are some concrete examples of sites that both do this well, and where it plays a critical role:

  • Wikipedia. MediaWiki Categories incorporate both tag-based and hierarchical elements (subcategories).
  • Other Wikis. TVTropes, which uses a modified version of the PmWiki engine, is organized primarily by placing all pages into one or more indexes, along many (often orthogonal) categories. The standard version of PmWiki offers several forms of hierarchical (groups, SiteMapper) and tag-based (Categories, pagelists in general) structures and navigation schemes.
  • Blogs, such as Wordpress. Tags are a useful way to find all posts on a subject.
  • Tumblr. I have much beef with Tumblr, but tags are a sensible feature.
  • Pinboard. Tags, including the ability to list intersections of tag-based bookmark sets, is key to Pinboard's functionality.
  • Forums, such as ENWorld. The organization is hierarchical (forum groups contain forums contain subforums contain threads contain posts) and tag-based (threads are prefixed with a topic tag). You can search by hierarchical location or by tag(s) or by text or by any combination of those.
Replies from: Habryka, 9eB1
comment by Habryka · 2017-09-17T02:51:10.447Z · LW(p) · GW(p)

Thanks for the recommendations!

"This is a slightly odd comment, if only because "hierarchical or tag-based structures" describes almost all extant websites that aggregate / archive / collect content in any way!"

Well, the emphasis here was on the "more". I.e., there are more feed-based architectures, and there are more taxonomy/tagging-based architectures. There is a spectrum: reddit very much leans towards the feed end, which is where LessWrong has historically been, and wikis very much lean towards the taxonomy end. I feel we want to be somewhere in between, but I don't know where yet.

Replies from: SaidAchmiz, morganism
comment by Said Achmiz (SaidAchmiz) · 2017-09-17T04:08:01.180Z · LW(p) · GW(p)

Certainly there is variation, but I actually don't think that viewing that variation as a unidimensional spectrum is correct. Consider:

I have a blog. It functions just like a regular (wordpress) blog—it's sequential, it even has the usual RSS feed, etc. But it runs on pmwiki. So every page is a wikipage (and thus pages are organized into groups; they have tags and are viewable by group, by tag, by custom pagelist, etc.)

So what is that? Feed-based, or tag-based, or hierarchical, or... what? I think these things are much more orthogonal than you give them credit for. Tag-based structure can overlay hierarchical structure without affecting it; custom pagelist/index structure, ditto; and you can serve anything you like as a feed by simply applying an ordering (by timestamp is the obvious and common one, but there are many other possibilities), and you can have multiple feeds, custom feeds, dynamic feeds, etc.; you can subset (filter) in various ways…

(Graph-theoretic interpretations of this are probably obvious, but if anyone wants me to comment on that aspect of it, I will)

P.S.: I think reddit is a terrible model, quite honestly. The evolution of reddit, into what it is today, makes it fairly obvious (to me, anyway) that it's not to be emulated.

Edit: To be clear, the scenario above isn't hypothetical—that is how my actual blog works.

Edit2: Consider also https://readthesequences.com. (It, too, runs on pmwiki.) There's a linear structure (it's a book; the linear navigation UI takes you through the content in order), but it would obviously be trivial to apply tags to pages, and the book/sequence structure is hierarchical already.

comment by morganism · 2017-09-18T23:19:53.820Z · LW(p) · GW(p)

How about a circular hierarchy, with different color highlights for posts, comments, articles, wiki, tags, and links?

http://yed.yworks.com/support/manual/layout_circular.html

You could have upvotes contribute to weighting, and just show a tag-cloud-like connection diagram.

comment by 9eB1 · 2017-09-17T03:13:06.707Z · LW(p) · GW(p)

That is very interesting. An exception might be "Google search pages." Not only is there no hierarchical structure, there is also no explicit tag structure and the main user engagement model is search-only. Internet Archive is similar but with their own stored content.

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-17T04:11:51.862Z · LW(p) · GW(p)

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

I have encountered a truly shocking degree of variation in how people use TVTropes, to the extent that I've witnessed several people talking to each other about this, each in utter disbelief (to the point of anger) that the other person's usage pattern is a real thing.

Generalizations about TVTropes usage patterns are extremely fraught.

Replies from: 9eB1
comment by 9eB1 · 2017-09-17T14:47:31.887Z · LW(p) · GW(p)

Sure.

Since then I've thought of a couple more sites that are neither hierarchical nor tag-based: Facebook and eHow-style sites.

There is another pattern that is neither hierarchical, tag-based nor search-based, which is the "invitation-only" pattern of a site like pastebin. You can only find content by referral.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2017-09-17T17:56:55.615Z · LW(p) · GW(p)

It is therefore not a coincidence that Facebook is utterly terrible as a content repository. (I am unfamiliar with eHow.)

comment by NancyLebovitz · 2017-09-15T17:51:07.305Z · LW(p) · GW(p)

I'm hoping there will be something like the feature at ssc to choose the time when the site considers comments to be new. It's frustrating to not be able to recover the pink borders on new comments on posts at LW.

Replies from: Benito
comment by Ben Pace (Benito) · 2017-09-15T21:25:17.525Z · LW(p) · GW(p)

I agree - and we've built this feature! It's currently live on the beta site.

comment by casebash · 2017-09-15T09:54:04.426Z · LW(p) · GW(p)

Firstly, well done on all your hard work! I'm very excited to see how this will work out.

Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.

I'm sure that if you set up a Kickstarter or similar, that people would donate to it, now that you've proven your ability to deliver.

I also believe that, given how many programmers we have here, many people will want to make contributions to the codebase. My understanding was that this wasn't really happening before: a) because the old codebase was messy and extremely difficult to get up and running, and b) because it wasn't clear who to talk to if you wanted to know whether your changes were likely to be approved if you made them.

It looks like a) has been solved; if you also improve b), then I expect a bunch of people will want to contribute.

Replies from: ingres
comment by namespace (ingres) · 2017-09-15T22:40:04.644Z · LW(p) · GW(p)

I'm going to write a top level post at some point (hopefully soon) but in the meantime I'd like to suggest the content in the original post and comments be combined into a wiki. There's a lot of information here about LW 2.0 which I wasn't previously aware of and significantly boosted my confidence in the project.

Replies from: Habryka
comment by Habryka · 2017-09-16T23:26:35.525Z · LW(p) · GW(p)

A wiki feels like too high a barrier to entry to me, though maybe there is some cool new wiki software that's better than what I remember.

For now I feel like having an about page on LessWrong that links to all the posts and tries to summarize the state of discussion is the better choice, until we reach the stage where LW gets a lot more open-source engagement and is owned more by a large community again.

Replies from: ingres, SaidAchmiz
comment by namespace (ingres) · 2017-09-17T00:32:21.214Z · LW(p) · GW(p)

Seconding SaidAchmiz on pmwiki, it's what we use for our research project on effective online organizing and it works wonders. It's also how I plan to host and edit the 2017 survey results.

As far as the high barrier to entry goes, I'll repeat here my previous offer to set up a high quality instance of pmwiki and populate it with a reasonable set of initial content - for free. I believe this is sufficiently important that if the issue is you just don't have the capacity to get things started I'm fully willing to help on that front.

comment by Said Achmiz (SaidAchmiz) · 2017-09-16T23:51:22.858Z · LW(p) · GW(p)

http://www.pmwiki.org/ is a cool new wiki software that is better than most things

comment by Manfred · 2017-09-15T19:47:53.513Z · LW(p) · GW(p)

I also agree that HPMOR might need to go somewhere other than the front page. From a strategic perspective, I somehow want to get the benefits of HPMOR existing (publicity, new people finding the community) without the drawbacks (it being too convenient to judge our ideas by association).

Replies from: Habryka
comment by Habryka · 2017-09-16T23:13:55.082Z · LW(p) · GW(p)

I am somewhat conflicted about this. HPMOR has been really successful at recruiting people to this community (HPMOR is the path by which I ended up here), and according to last year's survey about 25% of people who took the survey found out about LessWrong via HPMOR. I am hesitant to hide our best recruitment tool behind trivial inconveniences.

One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

Replies from: gjm, SaidAchmiz, DragonGod
comment by gjm · 2017-09-19T11:55:32.913Z · LW(p) · GW(p)

I am hesitant to hide our best recruitment tool behind trivial inconveniences.

HPMOR is an effective tool for getting people to find out about Less Wrong. But someone who is at the front page of the site has already found Less Wrong.

a separate section of the page filled with rationalist art and fiction

A separate section of the site, I suggest. It doesn't need to be on the front page.

comment by Said Achmiz (SaidAchmiz) · 2017-09-16T23:32:52.283Z · LW(p) · GW(p)

Are you sure that the set of people that are being recruited to the community via HPMOR, and the set of people whom we most want to recruit into the community, have a lot of overlap? Or are these, perhaps, largely disjoint sets? What about the set of people whom we most want to recruit, and the set of people who are repelled by HPMOR? Might there not be quite a bit of overlap there?

Numbers aren't everything!

I agree with the idea of having a separate rationalist fiction page. (Perhaps we might even make it so separate that it's actually a whole other site! A page / site section of "links to rationality-themed fiction" wouldn't be out of place, however.)

Replies from: Habryka, DragonGod
comment by Habryka · 2017-09-17T02:35:44.671Z · LW(p) · GW(p)

"Are you sure that the set of people that are being recruited to the community via HPMOR, and the set of people whom we most want to recruit into the community, have a lot of overlap?"

I agree that this is a concern definitely worth thinking about, though in this case I feel like I have pretty solid evidence that there is indeed a large amount of overlap. A lot of the best people that I've seen show up over the last few years seem to have been attracted by HPMOR (I would say more than 25%). It would be great to have better-formatted data on this, and for a long time I've wanted someone to just create a spreadsheet for a large set of people in the rationalist community and codify their origin stories. But until we have something like that, based on the data I have from various surveys, personal experience, and being in a key position to observe where people are coming from (working with CFAR and CEA for the last few years), I am pretty sure that there is significant overlap.

comment by DragonGod · 2017-09-17T01:25:30.695Z · LW(p) · GW(p)

Perhaps we might even make it so separate that it's actually a whole other site!

I think this is counterproductive.

comment by DragonGod · 2017-09-17T09:35:43.276Z · LW(p) · GW(p)

One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

I think this is a great solution.

comment by arundelo · 2017-09-15T18:51:45.654Z · LW(p) · GW(p)

Has the team explicitly decided to call it "LessWrong" (no space) instead of "Less Wrong" (with a space)?

The spaced version has more precedent behind it. It's used by Eliezer and by most of the static content on lesswrong.com, including the <title> element.

Replies from: Habryka
comment by Habryka · 2017-09-16T23:35:15.300Z · LW(p) · GW(p)

Being aware that this is probably the most bikesheddy thing in this whole discussion, I've actually thought about this a bit.

From skimming a lot of early Eliezer posts, I've seen all three forms, "LessWrong", "Lesswrong", and "Less Wrong", so there isn't a super clear precedent here, though I do agree that "Less Wrong" was used a bit more often.

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words. It makes it sound too much like it wants to refer to the original meaning of the words, instead of being a pointer towards the brand/organization/online-community, and while one might think that is actually useful, it usually just results in a short state of confusion when I read a sentence that has "Less Wrong" in it, because I just didn't parse it as the correct reference.

I am currently going with "LessWrong" and "LESSWRONG", which is what I am planning to use in the site navigation, logos and other areas of the page. If enough people object I would probably change my mind.

Replies from: arundelo, Viliam, gjm, ESRogs, Elo
comment by arundelo · 2017-09-17T20:06:54.035Z · LW(p) · GW(p)

I just used Wei Dai's lesswrong_user script to download Eliezer's posts and comments (excluding, last I knew, those that don't show up on his "OVERVIEW" page e.g. for karma reasons). This went back to late December 2009 before the network connection got dropped.

I counted his uses of "LessWrong" versus "Less Wrong". (Of course I didn't count things such as the domain name "lesswrong.com", the English phrase "less wrong", or derived words like "LessWrongers".)

"LessWrong": 1 2 3* 4*

"Less Wrong": 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20* 21 22* 23 24 25 26

Entries with asterisks appear in both lists. Of his four uses of "LessWrong", three are modifying another word (e.g., "LessWrong hivemind").

(For what it's worth, "LessWrongers": 1 2; "Less Wrongians": 1.)

comment by Viliam · 2017-09-19T23:04:59.716Z · LW(p) · GW(p)

I think "Less Wrong" was an appropriate name at the beginning, when the community around the website was very small. Now that we have grown, both in user count and in content size, we could simply start calling ourselves "Wrong". One word, no problems with capitalization or spacing.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-09-21T15:39:38.081Z · LW(p) · GW(p)

Calling ourselves "Wrong" or "Wrongers" would also fix the problem of "rationalist" sounding like we'd claim to be totally rational!

Replies from: gjm
comment by gjm · 2017-09-21T16:25:22.851Z · LW(p) · GW(p)

On the other hand, I think this might come across as too "cute" and feel insincere.

comment by gjm · 2017-09-19T11:57:47.869Z · LW(p) · GW(p)

I personally really [dis]like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words.

"LessWrong" also has two weirdly capitalized words, but it's one notch weirder because they've been stuck together.

I agree that this is a super-bikesheddy topic and will try to avoid getting into an argument about this, but I would like to register a strong preference for "Less Wrong" as the default version of the name.

comment by ESRogs · 2017-09-17T09:20:23.135Z · LW(p) · GW(p)

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words.

Did you mean to write, "dislike 'Less Wrong'"?

Replies from: Habryka
comment by Habryka · 2017-09-17T19:05:40.523Z · LW(p) · GW(p)

Wow... yes. This is the second time in this comment thread that I forgot to add a "dis" in front of a word.

comment by Elo · 2017-09-17T01:58:52.655Z · LW(p) · GW(p)

Irrelevant as to which. Just pick one and stick to it.

comment by efenj · 2017-09-19T01:21:56.192Z · LW(p) · GW(p)

Thank you very much for making this effort! I love the new look of the site — it reminds me of http://practicaltypography.com/ which is (IMO) the nicest-looking site on the internet. I also like the new font.

Some feedback, especially regarding the importing of old posts.

  • Firstly, I'm impressed by the fact that the old links (with s/lesswrong.com/lesserwrong.com/) seem to consistently redirect to the correct new locations of the posts and comments. The old anchor tag links (like http://lesswrong.com/lw/qx/timeless_identity/#kl2 ) do not work, but with the new structuring of the comments on the page that's probably unavoidable.

  • Some comments seem to have just disappeared (e.g. http://lesswrong.com/lw/qx/timeless_identity/dhmt ). I'm not sure if these are deliberate or not.

  • Both the redirection and the new version, in general, somehow feel slow/heavy in a way that the old versions did not (I'd chalk that up to my system being to blame, but why would it disproportionately affect the new rather than the old versions?).

  • Images seem to be missing from the new versions (e.g. from http://lesswrong.com/lw/qx/timeless_identity/ : the image https://www.lesserwrong.com/static/imported/2008/06/02/manybranches4.png, for instance, does not exist)

  • Citations (blockquotes) are not standing out very well in the new versions, to the extent that I have trouble easily determining where they end and the surrounding text restarts. (A possible means of improving this could perhaps be to increase the padding of blockquotes.) For an example, see http://lesswrong.com/lw/qx/timeless_identity .

  • Straight quotation marks ("), rather than curly ones (“ ”), look out of place with the new font (I have no idea how to easily remedy this). For examples, yet again see http://lesswrong.com/lw/qx/timeless_identity .

comment by DragonGod · 2017-09-16T21:53:14.813Z · LW(p) · GW(p)

I think adding a collection of the best Overcoming Bias posts, including posts like "you are never entitled to your own opinion" to the front page would be a great idea, and it might be better than putting a link to HPMOR (some users seem to believe that linking HPMOR on the front page may come across as puerile).

Replies from: Habryka
comment by Habryka · 2017-09-16T23:53:37.360Z · LW(p) · GW(p)

I agree that I really want a Robin Hanson collection in a similar style to how we already have a Scott Alexander collection. We will have to coordinate with Robin on that. I can imagine him being on board, but I can also imagine him being hesitant to have all his content crossposted to another site. He seemed to prefer having full control over everything on his own page, and apparently didn't end up posting very much on LessWrong, even as LW ended up with a much larger community and much more activity.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T01:13:16.543Z · LW(p) · GW(p)

Well, maintaining links to them (if he prefers to keep them on his own site) might be an acceptable compromise then? I think Robin's posts are a core part of the "rationalist curriculum", and the site would be incomplete if we didn't include them.

comment by DragonGod · 2017-09-16T17:13:39.939Z · LW(p) · GW(p)

On StackExchange, you lose reputation whenever you downvote an answer; this makes downvoting a costly signal of displeasure. I like the notion, and hope it is included in the new site. If you have to spend your hard-earned karma to cause someone to lose karma, then it may discourage karma assassination, and ensure that downvotes are only used on content people have strong negative feelings towards.

Pros:

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks through your main account because you dislike a user become very costly.
  3. Moderation of downvoting behaviour would be vastly reduced, as users downvote less, and only on content they have strong feelings towards.

Cons:

  1. There are far fewer downvotes.
  2. I don't think downvotes should be costly. On StackExchange mediocre content can get a high score if it relates to a popular topic.
     Given that this website has the goal of filtering content so that people who only want to read a subset can read the high-quality posts, downvotes of mediocre content serve as useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I think you lose one reputation per downvote, and cause the person you downvoted to lose 2 - 5 reputation.

I think having a downvote cost 0.33 - 0.5 of the karma it deducts from the target is a good idea: it would encourage better downvoting practices and would overall be an improvement to the karma feature.

Replies from: Habryka, ChristianKl, Viliam
comment by Habryka · 2017-09-16T23:30:34.023Z · LW(p) · GW(p)

Hmm... I feel that this disincentivizes downvoting too strongly, and just makes downvoting feel kind of shitty on an emotional level.

An alternative thing that I've been thinking about is to make it so that when you downvote something, you have to give a short explanation, between 40 and 400 characters, about why you think the comment was bad. This both adds a cost to downvoting and actually translates that cost into meaningful information for the commenter. Another alternative implementation of this could work with a set of common tags that you can choose from when downvoting a comment, maybe of the type "too aggressive", "didn't respond to original claim", "responded to strawman", etc.
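(As a toy sketch of these two variants, using the numbers and tags above; this is an illustration, not a spec:)

```python
# Toy sketch of the two downvote-friction variants floated above.
# The length bounds and tag list are the examples from this comment.
ALLOWED_TAGS = {"too aggressive", "didn't respond to original claim", "responded to strawman"}

def downvote_is_valid(explanation=None, tag=None):
    if explanation is not None:
        return 40 <= len(explanation.strip()) <= 400  # short free-text reason
    return tag in ALLOWED_TAGS                        # or a canned tag
```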

Replies from: DragonGod
comment by DragonGod · 2017-09-17T01:09:44.766Z · LW(p) · GW(p)

Hmm... I feel that this incentivizes downvoting too strongly

How does this incentivise downvoting? Downvoting is a costly signal of displeasure, and as downvotes cost a certain fraction of the karma you deduct, this disincentivises downvoting.

makes downvoting feel kind of shitty on an emotional level.

This is a feature, not a bug; we don't want to encourage downvoting and karma assassination. The idea is that downvoting becomes a costly signal of displeasure, so mere disagreement would not cause downvoting.

An alternative thing that I've been thinking about is to make it so that when you downvote something, you have to give a short explanation between 40 and 400 characters about why you think the comment was bad. Which both adds a cost to downvoting, and actually translates that cost into meaningful information for the commenter.

I thought of this as well, but decided that the StackExchange system of making downvotes cost karma is better for the purposes I thought of.

Another alternative implementation of this could work with a set of common tags that you can choose from when downvoting a comment, maybe of the type "too aggressive", "didn't respond to original claim", "responded to strawman", etc.

This fails to achieve "adds a cost to downvoting"; if there are custom downvoting tags, then the cost of downvoting is removed. I think making downvotes cost a fraction (<= 0.5) of the karma you deduct serves to discourage downvoting.

Replies from: Habryka
comment by Habryka · 2017-09-17T02:40:51.465Z · LW(p) · GW(p)

"How does this incentivise downvoting?"

Sorry, my bad. I wanted to write "disincentivize", but failed. I guess it's a warning against using big words.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T09:24:07.106Z · LW(p) · GW(p)

Oh, okay. I still think we want to disincentivise downvoting though.

Pros:

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks through your main account because you dislike a user become very costly.
  3. Moderation of downvoting behaviour would be vastly reduced, as users downvote less, and only on content they have strong feelings towards.

Cons:

  1. There are far fewer downvotes.
  2. I don't think downvotes should be costly. On StackExchange mediocre content can get a high score if it relates to a popular topic.
     Given that this website has the goal of filtering content so that people who only want to read a subset can read the high-quality posts, downvotes of mediocre content serve as useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I suggest users lose 40% of the karma they deduct (since you want to give different users different weights). For example, if you downvote someone, they lose 5 karma, but you lose 2 karma.
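(A toy sketch of this rule, using the numbers above, a 40% cost fraction and a 5-point penalty; these are the proposal's illustrative values, not real site constants:)

```python
# Toy sketch of the proposed costly-downvote rule; the 40% fraction and
# 5-point penalty are the numbers suggested above, not real site values.
COST_FRACTION = 0.4
DOWNVOTE_PENALTY = 5

def apply_downvote(voter_karma, target_karma):
    """Return updated (voter_karma, target_karma) after one downvote."""
    return (voter_karma - COST_FRACTION * DOWNVOTE_PENALTY,
            target_karma - DOWNVOTE_PENALTY)

print(apply_downvote(100, 50))  # -> (98.0, 45)
```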

Replies from: NancyLebovitz
comment by NancyLebovitz · 2017-09-17T19:32:02.012Z · LW(p) · GW(p)

How about the boring simplicity of having downvote limits? Maybe something around one downvote/24 hours -- not cumulative.

If you're feeling generous, maybe add a downvote/24 hours per 1000 karma, with a maximum of 5 downvotes/24 hours.
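(A sketch of that formula; all the numbers are the ones suggested here:)

```python
# Sketch of the suggested downvote rate limit: a base of one per day,
# plus one per 1000 karma, capped at five. Numbers are this proposal's.
def daily_downvote_limit(karma):
    return min(1 + karma // 1000, 5)

assert daily_downvote_limit(500) == 1
assert daily_downvote_limit(3200) == 4
assert daily_downvote_limit(10000) == 5
```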

Replies from: J_Thomas_Moros, DragonGod
comment by J Thomas Moros (J_Thomas_Moros) · 2017-09-18T17:41:56.960Z · LW(p) · GW(p)

I'm not opposed to downvote limits, but I think they need to not be too low. There are situations where I am more likely to downvote many things just because I am more heavily moderating. For example, on comments on my own post I care more and am more likely to both upvote and downvote whereas other times I might just not care that much.

comment by DragonGod · 2017-09-17T20:02:19.858Z · LW(p) · GW(p)

This is a solution as well; it is not clear to me though, that it is better than the solution I proposed.

comment by ChristianKl · 2017-09-20T13:20:29.899Z · LW(p) · GW(p)

I don't think downvotes should be costly. On StackExchange mediocre content can get a high score if it relates to a popular topic.

Given that this website has the goal of filtering content so that people who only want to read a subset can read the high-quality posts, downvotes of mediocre content serve as useful information.

Replies from: DragonGod
comment by DragonGod · 2017-09-20T19:32:43.003Z · LW(p) · GW(p)

I'll add the point you raise about downvotes to the "cons" of my argument.

comment by Viliam · 2017-09-19T22:58:14.437Z · LW(p) · GW(p)

So... let's imagine that one day the website attracts e.g. hundreds of crackpots... each of them posting obviously crazy stuff, dozens of comments each... but most people will hesitate to downvote them, because they would remember that doing so reduces their own karma.

Okay, this will probably not happen. But I think that downvoting is an important thing and should not be disincentivized per se. Bad stuff needs to get downvoted. Actually, other than Eugine, people usually don't downvote enough. (And for Eugine, this is not a problem at all; he will get the karma back by upvoting himself with his other sockpuppets.)

I think it is already too easy to get a lot of karma on LW just by posting a lot of mediocre quality comments, each getting 1 karma point on average. Sometimes I suspect that maybe half of my own karma is for the quality of things I wrote, and the remaining half is for spending too much time commenting here even when I have nothing especially insightful to say.

Replies from: DragonGod
comment by DragonGod · 2017-09-20T07:53:21.411Z · LW(p) · GW(p)

Okay, this will probably not happen.

Thank God you agree; thus I think its value as a thought experiment is nil.

But I think that downvoting is an important thing and should not be disincentivized per se.

Disincentivising downvoting discourages frivolous use of downvotes and encourages responsible use.

If you just disagree with someone, you're more likely to reply than to downvote them if you care about your karma, for example.

Actually, other than Eugine, people usually don't downvote enough. (And for Eugine, this is not a problem at all; he will get the karma back by upvoting himself with his other sockpuppets.)

On StackExchange, upvotes and downvotes from accounts with less than 15 rep are recorded but don't count (presumably until the account gains more than 15 rep). LW may decide to set its bar lower (10 rep?) or higher (>= 20 rep?), but I think the core insight is very good and would be a significant improvement if applied to LW.
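(A sketch of the threshold rule, to make it concrete; the bar is a tunable knob, and the code is illustrative only:)

```python
# Sketch of the rep-threshold rule described above: record every vote,
# but only count votes from accounts above the bar (15 per this comment;
# LW could pick its own value).
MIN_REP = 15
vote_log = []  # all votes are kept, counted or not, for later auditing

def cast_vote(voter_rep, delta):
    vote_log.append((voter_rep, delta))
    return delta if voter_rep >= MIN_REP else 0  # counted score contribution
```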

comment by Kaj_Sotala · 2017-09-15T10:18:18.343Z · LW(p) · GW(p)

Thank you for doing this!

Not a comment on the overview, but on LW2.0 itself: are you intentionally de-emphasizing comment authorship by making the author names show up in a smaller font than the text of the comment? Reading the comments under the roadmap page, it feels slightly annoying that the author names are small enough that my brain ignores them instead of registering them automatically, and then I have to consciously re-focus my attention to see who wrote a comment, each time that I read a new comment.

Replies from: Habryka
comment by Habryka · 2017-09-16T23:23:17.774Z · LW(p) · GW(p)

That was indeed intentional, but after playing around with it a bit, I actually think it had a negative effect on the skimmability of comment threads, and I am planning to try out a few different solutions soon. In general I feel that I want to increase the spacing between different comments and make it easier to identify the author of a comment.

Replies from: Elo
comment by Elo · 2017-09-17T01:56:33.047Z · LW(p) · GW(p)

I think I would prefer information density. I am annoyed by the low comment density of the classic MyBB-type forum and prefer the denser, "Facebook"-style layout, but going that dense will shorten comments. So a balance close to the current density would be my suggestion.

comment by WhySpace_duplicate0.9261692129075527 · 2017-09-15T08:29:07.524Z · LW(p) · GW(p)

I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!

Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing, and mildly comedic exaggerations. Not the sort of thing that makes you actually laugh, but just the sort of thing that is mildly amusing. And, I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as little bits of positive reinforcement to keep your system 1 happy while your system 2 does the analytic thinking stuff to digest the piece.

Now, obviously this could go overboard, since memetics dictates that short, likeable things will get upvoted faster than long, thoughtful things, outcompeting them. But, I don't think we as a community are currently at risk of that, especially with the moderation techniques described in the OP.

And, I don't mean random normal "guy walks into a bar" jokes. I mean the sort of thing that you see in the comments on old LW posts, or on Weird Sun Twitter. Jokes about Trolley Problems and Dust Specks and Newcomb-like problems and negative Utilitarians. "Should Pascal accept a mugging at all, if there's even a tiny chance of another mugger with a better offer?" Or maybe "In the future, when we're all mind-uploads, instead of arguing about the simulation argument we'll worry about being mortals in base-level reality. Yes, we'd have lots of memories of altering the simulation, but puny biological brains are error-prone, and hallucinate things all the time."

I think a lot of the reason social media is so addictive is the random dopamine injections. People could go to more targeted websites for more of the same humor, but those get old quickly. The random mix of serious info intertwined with joke memes provides novelty and works well together. The ideal for a more intellectual community should probably be more like 90-99% serious stuff, with enough fun stuff mixed in to avoid akrasia kicking in and pulling us toward a more concentrated source.

The implementation implications would be to present short-form stuff between long-form stuff, to break things up and give readers a quick break.

comment by Yosarian2 · 2017-09-16T21:00:26.847Z · LW(p) · GW(p)

My concern around the writing portion of your idea is this: from my point of view, the biggest problem with LessWrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse. Especially anything that creates a barrier for new people who want to post something in discussion. People will not want to write content that nobody might see unless it happens to get promoted.

Once you get a constant stream of content on a daily basis, then maybe you can find a way to curate it to highlight the best content. But you need that stream of content and engagement first and foremost or I worry the whole thing may be stillborn.

Replies from: Habryka, Viliam
comment by Habryka · 2017-09-16T23:56:18.935Z · LW(p) · GW(p)

Agree with this.

I do however think that we actually have a really large stream of high-quality content already in the broader rationality diaspora that we just need to tap into and get onto the new page. As such, the problem is a bit easier than getting a ton of new content creators, and is instead more of a problem of building something that the current content creators want to move towards.

And as soon as we have a high-quality stream of new content I think it will be easier to attract new writers who will be looking to expand their audience.

Replies from: Yosarian2
comment by Yosarian2 · 2017-09-17T01:48:05.802Z · LW(p) · GW(p)

Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back, though, unless the site first becomes more active than it currently is. (They may give it a chance after the beta is rolled out, but if activity doesn't increase quickly they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community here, there's no point in coming back instead of posting on your own blog.

I guess my concern here though is that right now, LessWrong has a "discussion" side which is a little active and a "main" side which is totally dead. And it sounds like this plan would basically get rid of the discussion side, and make it harder to post on the main side. Won't the most likely outcome just be to lower the amount of content and the activity level even more, maybe to zero?

Fundamentally, I think the premise of your second bottleneck is incorrect. We don't really have a problem with signal-to-noise ratio here, most of the posts that do get posted here are pretty good, and the few that aren't don't get upvoted and most people ignore them without a problem. We have a problem with low total activity, which is almost the exact opposite problem.

comment by Viliam · 2017-09-19T23:17:14.195Z · LW(p) · GW(p)

the sheer quantity of new content is extremely low

That depends on how much time you actually want to spend reading LW. I mean, the optimal quantity will be different for a person who reads LW two hours a day, or a person who reads LW two hours a week. Now the question is which one of these should we optimize LW for? The former seems more loyal, but the latter is probably more instrumentally rational if we agree that people should be doing things besides reading web. (Also, these days LW competes for time with SSC and others.)

Replies from: Yosarian2
comment by Yosarian2 · 2017-09-19T23:28:55.645Z · LW(p) · GW(p)

Ideally, you would want to generate enough content for the person who wants to read LW two hours a day, and then promote or highlight the best 5%-10% of the content so someone who has only two hours a week can see it.

Everyone is much better off that way. The person with only two hours a week is getting much better content than if there was much less content to begin with.

Replies from: Viliam
comment by Viliam · 2017-09-20T22:38:32.032Z · LW(p) · GW(p)

If LW2 remembers who read what, I guess "a list of articles you haven't read yet, ordered by highest karma, and secondarily by most recent" would be a nice feature that would scale automatically.
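(In code, the suggested ordering is just a two-key sort; `posts` and `read_ids` here are hypothetical stand-ins for site data:)

```python
# Sketch of the suggested "unread, best first" list; data shapes are invented.
def unread_reading_list(posts, read_ids):
    unread = [p for p in posts if p["id"] not in read_ids]
    # Highest karma first, most recent first among ties.
    return sorted(unread, key=lambda p: (p["karma"], p["timestamp"]), reverse=True)
```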

comment by IlyaShpitser · 2017-09-15T14:28:41.295Z · LW(p) · GW(p)

(a) Thanks for making the effort!

(b)

"I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation."

This won't work, for the same reason PageRank did not work: you can game it by collusion. Communities are excellent at collusion. I think the important thing to do is to make toxic people (defined in a socially constructed way as people you don't want around) go away. Ranking the posts of the folks who remain from best to worst is not, I think, that helpful. People will know quality without numbers.
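(For readers unfamiliar with the proposal being criticized: the eigendemocracy idea is roughly that a vote counts for more when it comes from a user whose own contributions are highly voted, computed as a fixed point. A minimal sketch, with an invented toy vote matrix; this is not LW's actual system:)

```python
# Minimal sketch of PageRank-style karma (the "eigendemocracy" idea under
# discussion); the vote matrix and constants are toy values.
import numpy as np

def eigen_karma(votes, damping=0.85, iters=100):
    """votes[i][j] = upvotes user i gave user j; returns a reputation vector."""
    V = np.asarray(votes, dtype=float)
    n = V.shape[0]
    row_sums = V.sum(axis=1, keepdims=True)
    # Normalize each voter's outgoing influence; users who cast no votes
    # get a uniform row (the standard dangling-node fix).
    P = np.divide(V, row_sums, out=np.full_like(V, 1.0 / n), where=row_sums > 0)
    karma = np.full(n, 1.0 / n)
    for _ in range(iters):
        # A vote counts in proportion to the voter's current karma.
        karma = (1 - damping) / n + damping * (P.T @ karma)
    return karma / karma.sum()

# Toy example: user 2 is upvoted most, so ends up with the highest score.
print(eigen_karma([[0, 1, 2], [1, 0, 3], [0, 1, 0]]))
```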

Replies from: Habryka, Manfred, ESRogs, Vaniver, SaidAchmiz
comment by Habryka · 2017-09-17T00:02:18.001Z · LW(p) · GW(p)

"This won't work, for the same reason PageRank did not work"

I am very confused by this. Google's search vastly outperformed its competitors with PageRank and is still using a heavily tweaked version of PageRank to this day, delivering by far the best search on the market. It seems to me that PageRank should widely be considered to be the most successful reputation algorithm that has ever been invented, having demonstrated extraordinary real-world success. In what way does it make sense to say "PageRank did not work"?

Replies from: ZorbaTHut_duplicate0.11042347698617805, IlyaShpitser
comment by ZorbaTHut_duplicate0.11042347698617805 · 2017-09-17T12:19:54.222Z · LW(p) · GW(p)

FWIW, I worked at Google about a decade ago, and even then, PageRank was basically no longer used. I can't imagine it's gotten more influence since.

It did work, but I got the strong sense that it no longer worked.

comment by IlyaShpitser · 2017-09-17T01:08:59.545Z · LW(p) · GW(p)

Google is using a much more complicated algorithm that is constantly tweaked, and is a trade secret -- precisely because as soon as it became profitable to do so, the ecosystem proceeded to game the hell out of PageRank.

Google hasn't been using PageRank-as-in-the-paper for ages. The real secret sauce behind Google is not eigenvalues, it's the fact that it's effectively anti-inductive, because the algorithm isn't open and there is an army of humans looking for attempts to game it, and modifying it as soon as such an attempt is found.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2017-09-17T02:04:03.599Z · LW(p) · GW(p)

Given that, it seems equally valid to say "this will work, for the same reason that PageRank worked", i.e., we can also tweak the reputation algorithm as people try to attack it. We don't have as much resources as Google, but then we also don't face as many attackers (with as strong incentives) as Google does.

I personally do prefer a forum with karma numbers, to help me find quality posts/comments/posters that I would likely miss or have to devote a lot of time and effort to sift through.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-17T15:05:01.518Z · LW(p) · GW(p)

It's not PageRank that worked, it's anti-induction that worked. PageRank stopped working as soon as it faced resistance.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-18T07:54:42.297Z · LW(p) · GW(p)

You really are a "glass half empty" kind of guy, aren't you.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-18T13:51:58.005Z · LW(p) · GW(p)

I am not really trying to be negative for the sake of being negative here, I am trying to correctly attribute success to the right thing. People get "halo effect" in their head because "eigenvalues" sound nice and clean.

Reputation systems, though, aren't the type of problem that linear algebra will solve for you. And this isn't too surprising. People are involved with reputation systems, and people are far too complex for linear algebra to model properly.

Replies from: Lumifer
comment by Lumifer · 2017-09-19T19:36:36.345Z · LW(p) · GW(p)

people are far too complex for linear algebra to model properly

True, but not particularly relevant. Reputation systems like karma will not solve the problem of who to trust or who to pay attention to -- but they are not intended to. Their task is to be merely helpful to humans navigating the social landscape. They do not replace networking, name recognition, other reputation measures, etc.

comment by Manfred · 2017-09-15T17:27:50.648Z · LW(p) · GW(p)

I think votes have served several useful purposes.

Downvotes have been a very good way of enforcing the low-politics norm.

When there's lots of something, you often want to sort by votes, or some ranking that mixes votes and age. Right now there aren't many comments per thread, but if there were 100 top-level comments, I'd want votes. Similarly, as a new reader, it was very helpful to me to look for old posts that people had rated highly.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-15T18:57:30.064Z · LW(p) · GW(p)

How are you going to prevent gaming the system and collusion?


Goodhart's law: you can game metrics, you can't game targets. Quality speaks for itself.

Replies from: Kaj_Sotala, John_Maxwell_IV, Manfred
comment by Kaj_Sotala · 2017-09-16T15:34:59.859Z · LW(p) · GW(p)

Curious as to why you think that LW2.0 will have a problem with gaming karma when LW1.0 hasn't had such a problem (unless you count Eugine, and even if you do, we've been promised the tools for dealing with Eugines now).

Replies from: Habryka, IlyaShpitser
comment by Habryka · 2017-09-17T00:05:58.868Z · LW(p) · GW(p)

I think this roughly summarizes my perspective on this. Karma seems to work well for a very large range of online forums and applications. We didn't really have any problems with collusion on LW outside of Eugine, and that was a result of a lack of moderator tools, not a problem with the karma system itself.

I agree that you should never fully delegate your decision making process to a simple algorithm, that's what the value-loading problem is all about, but that's what we have moderators and admins for. If we see suspicious behavior in the voting patterns we investigate and if we find someone is gaming the system we punish them. This is how practically all social rules and systems get enforced.

comment by IlyaShpitser · 2017-09-17T15:46:13.031Z · LW(p) · GW(p)

LW1.0's problem with karma is that karma isn't measuring anything useful (certainly not quality). How can a distributed voting system decide on quality? Quality is not decided by majority vote.

The biggest problem with karma systems is in people's heads -- people think karma does something other than what it does in reality.

Replies from: Kaj_Sotala, tristanm, DragonGod
comment by Kaj_Sotala · 2017-09-17T16:02:12.541Z · LW(p) · GW(p)

LW1.0's problem with karma is that karma isn't measuring anything useful (certainly not quality).

That's the exact opposite of my experience. Higher-voted comments are consistently more insightful and interesting than low-voted ones.

Quality is not decided by majority vote.

Obviously not decided by it, but aggregating lots of individual estimates of quality sure can help discover the quality.

Replies from: Vladimir_Nesov, IlyaShpitser
comment by Vladimir_Nesov · 2017-09-17T16:15:34.695Z · LW(p) · GW(p)

Higher-voted comments are consistently more insightful and interesting than low-voted ones.

This was also my experience (on LW) several years ago, but not recently. On Reddit, I don't see much difference between highly- and moderately-upvoted comments, only poorly-upvoted comments (in a popular thread) are consistently bad.

comment by IlyaShpitser · 2017-09-17T16:34:02.068Z · LW(p) · GW(p)

aggregating lots of individual estimates of quality sure can help discover the quality.

I guess we fundamentally disagree. Lots of people with no clue about something aren't going to magically transform into a method for discerning clue regardless of aggregation method -- garbage in, garbage out. For example: aggregating learners in machine learning can work, but requires strong conditions.
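
To illustrate the "strong conditions" point, here is a Condorcet-style simulation (all parameters invented): majority vote amplifies whatever accuracy independent voters have, in either direction. Real voters are also correlated rather than independent, which only strengthens the caveat.

```python
# Majority-vote aggregation only "discovers quality" when the individual
# voters are independently better than chance; otherwise it amplifies error.
import numpy as np

rng = np.random.default_rng(0)

def majority_accuracy(p_correct, n_voters, n_trials=10_000):
    """Probability that a simple majority of independent voters is right."""
    votes = rng.random((n_trials, n_voters)) < p_correct  # True = correct vote
    return (votes.sum(axis=1) > n_voters / 2).mean()

for p in (0.6, 0.5, 0.4):
    print(p, [round(majority_accuracy(p, n), 3) for n in (1, 11, 101)])
# p=0.6: accuracy climbs toward 1 as the crowd grows
# p=0.5: stays around 0.5 no matter how many voters you aggregate
# p=0.4: falls toward 0 -- garbage in, garbage out, only more confidently
```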

Replies from: John_Maxwell_IV, Kaj_Sotala
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-18T05:10:07.446Z · LW(p) · GW(p)

Do you disagree with Kaj that higher-voted comments are consistently more insightful and interesting than low-voted ones?

It sounds like you are making a different point: that no voting system is a substitute for having a smart, well-informed userbase. While that is true, that is also not really the problem that a voting system is trying to solve.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-18T13:55:03.086Z · LW(p) · GW(p)

Sure do. On stuff I know a little about, what gets upvoted is "LW folk wisdom" or perhaps "EY's weird opinions" rather than anything particularly good. That isn't surprising. Karma, being a numerical aggregate of the crowd, is just spitting back a view of the crowd on a topic. That is what karma does -- nothing to do with quality.

Replies from: DragonGod
comment by DragonGod · 2017-09-19T16:03:42.383Z · LW(p) · GW(p)

What if the view of the crowd is correlated with quality?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-19T16:33:09.383Z · LW(p) · GW(p)

Every crowd thinks so.

Replies from: DragonGod
comment by DragonGod · 2017-09-19T16:46:51.127Z · LW(p) · GW(p)

I think LessWrong might be (or at the very least once was) a place where this is actually true.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-19T17:37:29.873Z · LW(p) · GW(p)

Every crowd thinks they are such a place where it's actually true. Outside view: they are wrong.

Replies from: DragonGod
comment by DragonGod · 2017-09-19T17:47:13.624Z · LW(p) · GW(p)

Every crowd thinks they are such a place where it's actually true.

Some of the extreme sceptics do not believe they are much closer to the truth than anyone else.

Outside view: they are wrong.

There does not exist a group such that consensus of the group is highly correlated with truth? That's quite an extraordinary claim you're making; do you have the appropriate evidence?

Replies from: gjm, IlyaShpitser
comment by gjm · 2017-09-19T22:37:34.161Z · LW(p) · GW(p)

I think Ilya is not claiming that no such group exists but that it is well nigh impossible to know that your group is one such. At least where the claim is being made very broadly, as it seems to be upthread. I don't think it's unreasonable for experimental physicists to think that their consensus on questions of experimental physics is strongly correlated with truth, for instance, and I bet Ilya doesn't either.

More specifically, I think the following claim is quite plausible: When a group of people coalesces around some set of controversial ideas (be they political, religious, technological, or whatever), the correlation between group consensus and truth in the area of those controversial ideas may be positive or negative or zero, and members of the group are typically ill-equipped to tell which of these cases they're in.

Replies from: DragonGod, Lumifer
comment by DragonGod · 2017-09-20T07:51:28.728Z · LW(p) · GW(p)

LW has the best epistemic hygiene of all the communities I've encountered and/or participated in.

In so far as epistemic hygiene is positively correlated with truth, I expect LW consensus to be more positively correlated with truth than most (not all) other internet communities.

comment by Lumifer · 2017-09-20T00:24:54.849Z · LW(p) · GW(p)

members of the group are typically ill-equipped to tell which of these cases they're in

Doesn't LW loudly claim to be special in this respect?

And if it actually is not, doesn't this represent a massive failure of the entire project?

comment by IlyaShpitser · 2017-09-19T21:58:02.488Z · LW(p) · GW(p)

Talking about LW, specifically. Presumably, groups exist that truth-track, for example experts on their area of expertise. LW isn't an expert group.

The prior on LW is the same as on any other place on the internet; it's just a place for folks to gab. If LW were extraordinary, truth-wise, they would be sitting on an enormous pile of utility.

Replies from: DragonGod, Lumifer
comment by DragonGod · 2017-09-20T07:42:12.316Z · LW(p) · GW(p)

The prior on LW is the same as on any other place on the internet.

I disagree. Epistemic hygiene is genuinely better on LW, and insofar as epistemic hygiene is positively correlated with truth, I expect LW consensus to be more positively correlated with truth than that of most (not all) other internet communities.

comment by Lumifer · 2017-09-20T00:32:55.725Z · LW(p) · GW(p)

Presumably, groups exist that truth-track, for example experts on their area of expertise.

A group of experts will not necessarily truth-track -- there are a lot of counterexamples from gender studies to nutrition.

I would probably say that a group which implements its ideas in practice and is exposed to the consequences is likely to truth-track. That's not LW, but that's not most of academia either.

Replies from: DragonGod
comment by DragonGod · 2017-09-20T07:45:40.938Z · LW(p) · GW(p)

I don't think LW is perfect; I think LW has the best epistemic hygiene of all communities I've encountered and/or participated in.

I think epistemic hygiene is positively correlated with truth.

comment by Kaj_Sotala · 2017-09-18T10:40:04.456Z · LW(p) · GW(p)

Lots of people with no clue about something aren't going to magically transform into a method for discerning clue regardless of aggregation method -- garbage in, garbage out.

I think that's the core of the disagreement: I assume that if the forum is worth reading in the first place, then the average forum user's opinion of a comment's quality tends to correlate with my own. In that case, something having lots of upvotes is evidence in favor of me also thinking that it is a good comment.

This assumption does break down if you assume that the other people have "no clue", but if that's your opinion of a forum's users, then why are you reading that forum in the first place?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-18T13:57:33.397Z · LW(p) · GW(p)

"Clue" is not a total ordering of people from best to worst, it varies from topic to topic.


The other issue to consider is what you take the purpose of a forum to be.

Consider a subreddit like TheDonald. Presumably they, too, use karma to reach consensus on what a good comment is. But TheDonald is an echo chamber. If your opinions are very correlated with the opinions of others in a forum, then naturally you get a number that tells you what everyone agrees is good.

That can be useful, sometimes. But this isn't quality; it's just community consensus, and that can be arbitrarily far off. "Less wrong," as written on the tin, is supposedly about something more objective than just coming to a community consensus. You need true signal for that, and karma, being a mirror a community holds up to itself, cannot give it to you.


edit: the form of your question is: "if you don't like TheDonald, why are you reading TheDonald?" Is that what you want to be saying?

comment by tristanm · 2017-09-17T18:04:56.928Z · LW(p) · GW(p)

Hopefully this question is not too much of a digression -- but has anyone considered using something like Arxiv-Sanity, except covering content (blog posts, articles, etc.) produced by the wider rationality community instead of papers? At least with that, you are measuring similarity to things you have already read and liked, things other people have read and liked, or things people are linking to and commenting on, and you can search things pretty well based on content and authorship. Ranking things by what people have stored in their library and are planning to take time to study might contain more information than karma.
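
As a sketch of what that similarity ranking could look like, here is a minimal content-based recommender using TF-IDF and cosine similarity via scikit-learn; the post texts, IDs, and library contents are placeholders.

```python
# Rank unread posts by similarity to a user's "library" of liked posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "post_a": "karma voting systems and moderation on discussion forums",
    "post_b": "decision theory and newcomb-like problems",
    "post_c": "reputation systems, collusion, and vote gaming",
}
library = ["post_a"]  # things the user has already read and liked

ids = list(posts.keys())
matrix = TfidfVectorizer().fit_transform(posts.values())

# Score every post by its maximum similarity to anything in the library.
lib_rows = matrix[[ids.index(i) for i in library]]
scores = cosine_similarity(matrix, lib_rows).max(axis=1)

for post_id, score in sorted(zip(ids, scores), key=lambda x: -x[1]):
    if post_id not in library:
        print(post_id, round(float(score), 3))
# post_c (also about voting and reputation) should outrank post_b here.
```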

comment by DragonGod · 2017-09-17T20:14:48.178Z · LW(p) · GW(p)

Karma serves as an indicator of the reception that certain content got. High karma means several people liked it. Negative karma means it was very disliked, etc.

comment by John_Maxwell (John_Maxwell_IV) · 2017-09-16T07:43:07.246Z · LW(p) · GW(p)

How are you going to prevent gaming the system and collusion?

Keep tweaking the rules until you've got a system where the easiest way to get karma is to make quality contributions?

There probably exist karma systems which are provably non-gameable in relevant ways. For example, if upvotes are a conserved quantity (i.e. by upvoting you, I give you 1 upvote and lose 1 of my own upvotes), then you can't manufacture them from thin air using sockpuppets.
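
A minimal sketch of that conserved-quantity scheme (names and numbers made up): the total karma in the system is invariant under voting, so a ring of sockpuppets can only shuffle around karma it already owns.

```python
# Conserved-quantity karma: an upvote transfers karma instead of minting it.
class ConservedKarma:
    def __init__(self, users, starting_karma=10):
        self.karma = {u: starting_karma for u in users}

    def upvote(self, voter, author):
        if self.karma[voter] < 1:
            raise ValueError("no upvotes left to give")
        self.karma[voter] -= 1
        self.karma[author] += 1

ledger = ConservedKarma(["alice", "sock1", "sock2"])
ledger.upvote("sock1", "alice")
ledger.upvote("sock2", "alice")
assert sum(ledger.karma.values()) == 30  # the total is invariant under voting
```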

However, it also seems like for a small community, you're probably better off just moderating by hand. The point of a karma system is to automatically scale moderation up to a much larger number of people, at which point it makes more sense to hash out details. In other words, maybe I should go try to get a job on reddit's moderator tools team.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-16T11:34:38.386Z · LW(p) · GW(p)

Keep tweaking the rules until you've got a system where the easiest way to get karma is to make quality contributions?

This will never ever work. Predicting this in advance.

There probably exist karma systems which are provably non-gameable in relevant ways.

You should tell Google and academia, they will be most interested in your ideas. Don't you think people already thought very hard about this? This is such a typical LW attitude.

Replies from: John_Maxwell_IV, DragonGod
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-17T02:05:44.655Z · LW(p) · GW(p)

Don't you think people already thought very hard about this?

Can you show me 3 peer-reviewed papers which discuss discussion site karma systems that differ meaningfully from reddit's, and 3 discussion sites that implement karma systems that differ from reddit's in interesting ways? If not, it seems like a neglected topic to me.

Maybe I'm just not very good at doing literature searches. I did a search on Google Scholar for "reddit karma" and found only one paper which focuses on reddit karma. It's got brilliant insights such as

The aforementioned conflict between idealistically and quantitatively motivated contributions has however led to a discrepancy between value assessments of content.

...

This is such a typical LW attitude.

I believe Robin Hanson when he says academics neglect topics if they are too weird-seeming. Do you disagree?

It's certainly plausible that there is academic research relevant to the design of karma systems, but I don't see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own. Relevant quote.

Coincidentally, just a couple days ago I was having a conversation with a math professor here at UC Berkeley about the feasibility of doing research outside of academia. The professor's opinion was that this is very difficult to do in math, because math is a very "vertical" field where you have to climb to the top before making a contribution, and as long as you are going to spend half a decade or more climbing to the top, you might as well do so within the structure of academia. However, the professor did not think this was true of computer science (see: stuff like Bitcoin which did not come out of academia).

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-17T15:33:17.380Z · LW(p) · GW(p)

Maybe I'm just not very good at doing literature searches. I did a search on Google Scholar for "reddit karma" and found only one paper which focuses on reddit karma.

You can't do lit searches with google. Here's one paper with a bunch of references on attacks on reputation systems, and reputation systems more generally:

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36757.pdf

You are right that lots of folks outside of academia do research on this, in particular game companies (due to toxic players in multiplayer games). This is far from a solved problem -- Valve, Riot and Blizzard spend an enormous amount of effort on reputation systems.


I don't see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own.

I don't think there is a way to write this that doesn't sound mean: because you are an amateur. Imo, the best way for amateurs to proceed is to (a) trust experts, (b) read expert stuff, and (c) mostly not talk. Chances are, your 5-minute thoughts on the matter are only adding noise to the discussion. In principle, taking expert consensus as the prior is a part of rationality. In practice, people ignore this part because it is not a practice that is fun to follow. It's much more fun to talk than to read papers.

LW's love affair with amateurism is one of the things I hate most about its culture.


My favorite episode in the history of science is how science "forgot" what the cure for scurvy was. In order for human civilization not to forget things, we need to be better about (a), (b), (c) above.

Replies from: John_Maxwell_IV, DragonGod
comment by John_Maxwell (John_Maxwell_IV) · 2017-09-18T06:05:16.470Z · LW(p) · GW(p)

I appreciate the literature pointer.

taking expert consensus as the prior

What expert consensus are you referring to? I see an unsolved engineering problem, not an expert consensus.


My view of amateurism has been formed, in a large part, from reading experts on the topic:

The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you'll probably see problems that software could solve. In fact, you're doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don't even know what the status quo is to take it for granted.

Paul Graham

Introspection, and an examination of history and of reports of those who have done great work, all seem to show typically the pattern of creativity is as follows. There is first the recognition of the problem in some dim sense. This is followed by a longer or shorter period of refinement of the problem. Do not be too hasty at this stage, as you are likely to put the problem in the conventional form and find only the conventional solution.

Richard Hamming

Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read, even when you’re reading what you conceive to be introductory stuff.

Edward Boyden

This past summer I was working at a startup that does predictive maintenance for internet-connected devices. The CEO has a PhD from Oxford and did his postdoc at Stanford, so probably not an amateur. But working over the summer, I was able to provide a different perspective on the problems that the company had been thinking about for over a year, and a big part of the company's proposed software stack ended up getting re-envisioned and written from scratch, largely due to my input. So I don't think it's ridiculous for me to wonder whether I'd be able to make a similar contribution at Valve/Riot/Blizzard.

The main reason I was able to contribute as much as I did was because I had the gumption to consider the possibility that the company's existing plans weren't very good. Basically by going in the exact opposite direction of your "amateurs should stay humble" advice.

Here are some more things I believe:

  • If you're solving a problem that is similar to a problem that has already been solved, but is not an exact match, sometimes it takes as much effort to re-work an existing solution as to create a new solution from scratch.

  • Noise is a matter of place. A comment that is brilliant by the standards of Yahoo Answers might justifiably be downvoted on Less Wrong. It doesn't make sense to ask that people writing comments on LW try to reach the standard of published academic work.

  • In computer science, industry is often "ahead" of academia in the sense that important algorithms get discovered in industry first, then academics discover them later and publish their results.

Interested to learn more about your perspective.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-18T13:49:48.635Z · LW(p) · GW(p)

(a) They also laughed at Bozo the Clown. (I think this is Carl Sagan's quote).

(b) Outside view: how often do outsiders solve a problem in a novel way, vs just adding noise and cluelessness to the discussion? Base rates! Again, nothing that I am saying is controversial, having good priors is a part of "rationality folklore" already. Going with expert consensus as a prior is a part of "rationality folklore" already. It's just that people selectively follow rationality practices only when they are fun to follow.

(c) "In computer science, industry is often "ahead" of academia in the sense that important algorithms get discovered in industry first"

Yes, this sometimes happens. But again, base rates. Google/Facebook is full of academia-trained PhDs and ex-professors, so the line here is unclear. It's not amateurs coming up with these algorithms. John Tukey came up with the Fast Fourier Transform while at Bell Labs, but he was John Tukey, and had a math PhD from Princeton.

comment by DragonGod · 2017-09-17T20:22:11.418Z · LW(p) · GW(p)

(Upvoted).

Chances are, your 5-minute thoughts on the matter are only adding noise to the discussion.

This is where we differ; I think the potential for substantial contribution vastly outweighs any "noise" that may be caused by amateurs taking stabs at the problem. I do not think all the low-hanging fruit is gone (and if it were, how would we know?), and I think amateurs are capable of substantial contributions in several fields. I think optimism towards open problems is a more productive attitude.

I support "LW's love affair with amateurism", and it's a part of the culture I wouldn't want to see disappear.

comment by DragonGod · 2017-09-16T17:03:31.317Z · LW(p) · GW(p)

You should tell Google and academia, they will be most interested in your ideas. Don't you think people already thought very hard about this? This is such a typical LW attitude.

This reply contributes nothing to the discussion of the problem at hand, and is quite uncharitable. I hope such replies will be discouraged; if downvoting were enabled, I would have downvoted it.

If thinking that they can solve the problem at hand (and making attempts at it) is a "typical LW attitude", then it is an attitude I want to see more of and believe should be encouraged (thus, I'll be upvoting /u/John_Maxwell_IV 's post). A priori assuming that one cannot solve a problem (that hasn't been proven/isn't known to be unsolvable) and thus refraining from even attempting it isn't an attitude that I want to see become the norm on LessWrong. It's not an attitude that I think is useful, productive, optimal or efficient.

It is my opinion that we want to encourage people to attempt problems of interest to the community: the potential benefits are vast (e.g. the problem is solved, and/or significant improvements are made on it, so future endeavours have a better starting point), while the potential demerits are of lesser impact (time -- ours and that of whoever attempts it -- is wasted on an unpromising solution).

Coming back to the topic that was being discussed, I think methods of costly signalling are promising (for example, when you upvote a post you transfer X karma to the user, and you lose k*X (k < 1)).
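
Implemented literally, that proposal looks something like the sketch below (parameters illustrative). One design consequence worth noting: each vote still creates X*(1-k) karma on net, so a well-funded sockpuppet ring can still profit; setting k = 1 recovers the fully conserved scheme John_Maxwell described above.

```python
# Costly-signalling upvote: the author gains X karma while the voter pays
# k*X (k < 1), so every vote has a real cost to the voter.
def costly_upvote(karma, voter, author, x=1.0, k=0.5):
    if karma[voter] < k * x:
        raise ValueError("not enough karma to cast this vote")
    karma[voter] -= k * x
    karma[author] += x

karma = {"voter": 10.0, "author": 0.0}
costly_upvote(karma, "voter", "author")
print(karma)  # {'voter': 9.5, 'author': 1.0}
```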

Replies from: IlyaShpitser, Vladimir_Nesov
comment by IlyaShpitser · 2017-09-16T23:16:52.710Z · LW(p) · GW(p)

I have been here for a few years; I think my model of "the LW mindset" is fairly good.


I suppose the general thing I am trying to say is: "speak less, read more." But at the end of the day, this sort of advice is hopelessly entangled with status considerations. So it's hard to give to a stranger and have it be received well. It only really works in the context of an existing apprenticeship relationship.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T00:58:02.095Z · LW(p) · GW(p)

Status games aside, the sentiment expressed in my reply reflects my real views on the matter.

comment by Vladimir_Nesov · 2017-09-16T17:17:15.131Z · LW(p) · GW(p)

A priori assuming that one cannot solve a problem

("A priori" suggests lack of knowledge to temper an initial impression, which doesn't apply here.)

There are problems one can't by default solve, and a statement, standing on its own, that it's feasible to solve them is known to be wrong. A "useful attitude" of believing something wrong is a popular stance, but is it good? How does its usefulness work, specifically, if it does, and can we get the benefits without the ugliness?

Replies from: DragonGod
comment by DragonGod · 2017-09-16T17:54:26.749Z · LW(p) · GW(p)

that hasn't been proven/isn't known to be unsolvable)

An optimistic attitude towards problems that are potentially solvable is instrumentally useful—and dare I argue—instrumentally rational. The drawbacks of encouraging an optimistic attitude towards open problems are far outweighed by the potential benefits.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2017-09-16T18:09:25.666Z · LW(p) · GW(p)

(The quote markup in your comment designates a quote from your earlier comment, not my comment.)

You are not engaging the distinction I've drawn. Saying "It's useful" isn't the final analysis, there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya).

The problem of improving over the stance of an "optimistic attitude" might be solvable.

Replies from: DragonGod
comment by DragonGod · 2017-09-16T20:32:04.155Z · LW(p) · GW(p)

(The quote markup in your comment designates a quote from your earlier comment, not my comment.)

I know: I was quoting myself.

Saying "It's useful" isn't the final analysis

I guess for me it is.

there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya)

The beliefs aren't known to be false. It is not clear to me that someone believing they can solve a problem (one that isn't known/proven or even strongly suspected to be unsolvable) holds a false belief.

What do you propose to replace the optimism I suggest?

comment by Manfred · 2017-09-15T19:58:46.025Z · LW(p) · GW(p)

Moderation is basically the only way, I think. You could try to use fancy pagerank-anchored-by-trusted-users ratings, or make votes costly to the user in some way, but I think moderation is the necessary fallback.

Goodhart's law is real, but people still try to use metrics. Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.

Replies from: IlyaShpitser, Vladimir_Nesov
comment by IlyaShpitser · 2017-09-15T19:59:25.144Z · LW(p) · GW(p)

People use name recognition in practice; it works pretty well.

Replies from: Kaj_Sotala, tristanm
comment by Kaj_Sotala · 2017-09-17T10:53:33.762Z · LW(p) · GW(p)

I can use name recognition to scroll through a comment thread and find all the comments by the people I hold in high regard, but this is much more effort than just having a karma system which automatically shows the top-voted comments first. (The karma system also doesn't discriminate against new writers as badly as relying on name recognition does.)

comment by tristanm · 2017-09-16T21:55:10.421Z · LW(p) · GW(p)

Going to reply to this because I don't think it should be overlooked. It's a valid point -- people tend to want to filter out information that's not from sources they trust. I think these kinds of incentive pressures are what led to the "LessWrong Diaspora" being concentrated around specific blogs belonging to people with very positive reputations, such as Scott Alexander. And when people want to look at different sources of information, they will usually follow the advice of said people. This is how I operate when I'm doing my own reading / research -- I start somewhere I consider to be the "safest" and move out from there according to the references given at that spot, and perhaps a few more steps outward.

When we use a karma / voting system, we are basically trying to calculate P(this contains useful information | this post has a high number of votes), but no voting system ever offers as much evidence as a specific reference from someone we recognize as trustworthy. The only way to increase the evidence gained from a voting system is to add further complexity to the system by increasing the amount of information contained in a vote, either by weighting the votes or by identifying the person behind the vote. And from there you can add more to a vote, like a specific comment or a more nuanced judgement. I think the end of that track is basically what we have now: blogs by a specific person linking to other blogs, or social media like Facebook where no user is anonymous and everyone has their information filtered in some way.
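
With toy numbers (all invented), that calculation looks like the sketch below. The point survives the made-up inputs: a high vote count is genuine but modest evidence, while a reference from someone trusted corresponds to a much larger likelihood ratio.

```python
# Toy version of P(useful | high votes); every number here is made up.
p_useful = 0.2                  # prior: fraction of posts with useful info
p_high_given_useful = 0.6       # useful posts that end up highly voted
p_high_given_useless = 0.1      # useless posts that end up highly voted

p_high = (p_high_given_useful * p_useful
          + p_high_given_useless * (1 - p_useful))
posterior = p_high_given_useful * p_useful / p_high
print(round(posterior, 3))  # 0.6: the prior of 0.2 moves up, but only so far
```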

Essentially I'm saying we should not ignore the role that optimization pressure has played in producing the systems we already have.

comment by Vladimir_Nesov · 2017-09-15T22:17:31.779Z · LW(p) · GW(p)

Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.

Which is why there should be a way to vote on users, not content; the quantity of unevaluated content shouldn't divide the signal. This would matter if the primary mission succeeds and there is actual conversation worth protecting.

comment by ESRogs · 2017-09-17T09:35:57.900Z · LW(p) · GW(p)

Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.

Ranking helps me know what to read.

The SlateStarCodex comments are unusable for me because nothing is sorted by quality, so what's at the top is just whoever had the fastest fingers and least filter.

Maybe this isn't a problem for fast readers (I am a slow reader), but I find automatic sorting mechanisms to be super useful.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2017-09-17T10:55:12.977Z · LW(p) · GW(p)

This. SSC comments I basically only read if there are very few of them, because of the lack of karma; on LW even large discussions are actually readable, thanks to karma sorting.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-17T15:13:13.284Z · LW(p) · GW(p)

That's an illusion of readability, though; it's only sorting in a fairly arbitrary way.

Replies from: ESRogs, Dustin
comment by ESRogs · 2017-09-17T17:33:12.360Z · LW(p) · GW(p)

As long as it's not anti-correlated with quality, it helps.

It doesn't matter if the top comment isn't actually the very best comment. So long as the system does better than random, I as a reader benefit.

comment by Dustin · 2017-09-17T17:50:15.106Z · LW(p) · GW(p)

Over the years I've gone through periods of time where I can devote the effort/time to thoroughly reading LW and periods of time where I can basically just skim it.

Because of this I'm in a good position to judge the reliability of karma in surfacing content for its readability.

My judgement is that karma strongly correlates with readability.

comment by Vaniver · 2017-09-15T21:15:54.443Z · LW(p) · GW(p)

Oli and I disagree somewhat on voting systems. I think you get a huge benefit from doing voting at all, a small benefit from doing simple weighted voting (including not allowing people below ~10 karma to vote), and then there's not much left from complicated vote weighting schemes (like eigenkarma or so on). Part of this is because more complicated systems don't necessarily have more complicated gaming mechanics.

There are empirical questions involved; we haven't looked at, for example, the graph of what karma converges to if you use my simplistic vote weighting scheme vs. an eigenkarma scheme, but my expectation is a very high correlation. (I'd be very surprised if it were less than .8, and pretty surprised if it were less than .95.)
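
That comparison is cheap to approximate on synthetic data. Below is a rough sketch; the vote-graph model is invented and far more homogeneous than a real forum's, so the printed correlation is illustrative only.

```python
# Simple vote totals vs. an eigenkarma score on a random vote graph.
import numpy as np

rng = np.random.default_rng(1)
n = 200
votes = (rng.random((n, n)) < 0.05).astype(float)  # votes[i, j]: i upvoted j
np.fill_diagonal(votes, 0.0)

simple = votes.sum(axis=0)  # plain totals: upvotes received per user

# Eigenkarma: karma flows from voters to authors, weighted by the voter's
# own karma; PageRank-style damping keeps the iteration well-behaved.
eigen = np.full(n, 1.0 / n)
for _ in range(100):
    eigen = 0.85 * (votes.T @ eigen) + 0.15 / n
    eigen /= eigen.sum()

print(round(float(np.corrcoef(simple, eigen)[0, 1]), 2))
# On this homogeneous graph the two scores correlate very highly; a lumpy
# real vote graph could diverge more, which is the empirical question.
```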

I expect the counterfactual questions--"how would Manfred have voted if we were using eigenkarma instead of simple aggregation?"--to not make a huge difference in practice, altho they may make a difference for problem users.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-15T21:18:40.376Z · LW(p) · GW(p)

What's the benefit? Also, what's the harm? (to you)

Replies from: Vaniver
comment by Vaniver · 2017-09-15T23:17:13.930Z · LW(p) · GW(p)

Main benefits to karma are feedback for writers (both informative and hedonic) and sorting for attention conservation. Main costs are supporting the underlying tech, transparency / explaining the system, and dealing with efforts to game it.

(For example, if we just clicked a radio button and we had eigenkarma, I would be much more optimistic about it. As is, there are other features I would much rather have.)

comment by Said Achmiz (SaidAchmiz) · 2017-09-15T19:31:16.706Z · LW(p) · GW(p)

Strongly seconded. I think there should be no karma system.

I commented on LW 2.0 itself about another reason why a karma system is bad.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-15T20:00:32.221Z · LW(p) · GW(p)

Yeah, I agree that people need to weigh experts highly. LW pays lip service to this, but only that -- basically, as soon as people have a strong opinion, experts get discarded. Started with EY.

Replies from: Vaniver
comment by Vaniver · 2017-09-16T01:42:41.277Z · LW(p) · GW(p)

My impression of how to do this is to give experts an "as an expert, I..." vote. So you could see that a post has 5 upvotes and a beaker downvote, and say "hmm, the scientist thinks this is bad and other people think it's good."

Multiple flavors let you separate out different parts of the comment in a way that's meaningfully distinct from the Slashdot-style "everyone can pick a descriptor"; you don't want everyone to be able to say "that's funny," just the comedians.

This works somewhat better than simple vote weighting because it lets people say whether they're voting as just another reader or "in their professional capacity"; I want Ilya's votes on stats comments to be very highly weighted, and I want his votes on, say, rationality quotes to be weighted roughly like anyone else's.

Of course, this sketch has many problems of its own. As written, I lumped many different forms of expertise into "scientist," and you're trusting the user to vote in the right contexts.
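
For concreteness, one possible representation of this sketch, with the expert table assigned by hand; all names and topics here are hypothetical.

```python
# Flavored votes: a reader tally and a separate, admin-gated expert tally.
from dataclasses import dataclass, field

@dataclass
class Vote:
    voter: str
    value: int            # +1 or -1
    as_expert: bool = False

@dataclass
class Comment:
    topic: str
    votes: list = field(default_factory=list)

EXPERTS = {"ilya": {"statistics"}}  # admin-curated: user -> expert topics

def tallies(comment):
    """Return (reader score, expert score), shown side by side in the UI."""
    reader = sum(v.value for v in comment.votes if not v.as_expert)
    expert = sum(v.value for v in comment.votes
                 if v.as_expert and comment.topic in EXPERTS.get(v.voter, ()))
    return reader, expert

c = Comment(topic="statistics",
            votes=[Vote("alice", 1), Vote("bob", 1),
                   Vote("ilya", -1, as_expert=True)])
print(tallies(c))  # (2, -1): "upvotes plus a beaker downvote" in miniature
```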

Replies from: SaidAchmiz, IlyaShpitser
comment by Said Achmiz (SaidAchmiz) · 2017-09-16T04:01:04.378Z · LW(p) · GW(p)

If you have a more-legible quality signal (in the James C. Scott sense of "legibility"), and a less-legible quality signal, you will inevitably end up using the more-legible quality signal more, and the less-legible one will be ignored—even if the less-legible one is tremendously more accurate and valuable.

Your suggestion is not implausible on its face, but the devil is in the details. No doubt you know this, as you say "this sketch has many problems of its own". But these details and problems conspire to make such a formalized version of the "expert's vote" either substantially decoupled from what it's supposed to represent, or not nearly as legible as the simple "people's vote". In the former case, what's the point? In the latter case, the result is that the "people's vote" will remain much more influential on visibility, ranking, inclusion in canon, contribution to a member's influence in various ways, and everything else you might care to use such formalized rating numbers for.

The question of reputation, and of whose opinion to trust and value, is a deep and fundamental one. I don't say it's impossible to algorithmize, but if possible, it is surely quite difficult. And simple karma (based on unweighted votes) is, I think, a step in the wrong direction.

Replies from: ingres
comment by namespace (ingres) · 2017-09-16T04:17:28.927Z · LW(p) · GW(p)

As far as an algorithm for reputation goes, academia seems to have something that sort of scales in the form of citations and co-authors:

http://www.overcomingbias.com/2017/08/the-problem-with-prestige.html

It's certainly a difficult problem however.

comment by IlyaShpitser · 2017-09-17T02:05:36.718Z · LW(p) · GW(p)

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems, but consider that even in academia (with a built-in citation pagerank), people still rely on names. That's evidence about pagerank systems not being great on their own. People game the hell out of citations.


Probably should weigh my opinion of rationality stuff quite low; I am neither a practitioner nor a historian of rationality. I have gotten gradually more pessimistic about the whole project.

Replies from: Vaniver, John_Maxwell_IV
comment by Vaniver · 2017-09-19T18:34:40.763Z · LW(p) · GW(p)

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems

To be clear, in this scheme whether or not someone had access to the expert votes would be set by hand.

Replies from: Lumifer
comment by Lumifer · 2017-09-19T19:42:31.140Z · LW(p) · GW(p)

What is going to be the definition of "an expert" in LW 2.0?

Replies from: gjm
comment by gjm · 2017-09-19T22:26:35.365Z · LW(p) · GW(p)

From context, it's clearly (conditional on the feature being there at all) "someone accepted by the administrators of the site as an expert". How they make that determination would be up to them; I would hope that (again, conditional on the thing happening at all) they would err on the side of caution and accept people as experts only in cases where few reasonable people would disagree.

Replies from: Lumifer
comment by Lumifer · 2017-09-20T00:28:23.417Z · LW(p) · GW(p)

"All animals are equal... " X-)

The issue is credibility.

comment by John_Maxwell (John_Maxwell_IV) · 2017-09-18T06:19:19.646Z · LW(p) · GW(p)

People game the hell out of citations.

Is there anyone who makes it their business to guard against this?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-09-18T23:30:13.424Z · LW(p) · GW(p)

Academics make it their business, and they rely on name recognition and social networks.

comment by Gram_Stone · 2017-09-18T14:03:52.979Z · LW(p) · GW(p)

Will there be LaTeX support?

Replies from: DragonGod
comment by DragonGod · 2017-09-20T08:01:12.027Z · LW(p) · GW(p)

Please add this.

comment by ESRogs · 2017-09-17T10:09:32.563Z · LW(p) · GW(p)

If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of being a medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages

Some questions about this (okay if you don't have answers now):

  • Can anyone make a personal page?
  • Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)?
  • Can a user get kicked off for inappropriate content (whatever that means)?
Replies from: Benito, Habryka
comment by Ben Pace (Benito) · 2017-09-17T19:04:22.574Z · LW(p) · GW(p)

Thanks for the questions.

  • From the start, all user pages will be personal pages. If you make an account, you'll have a basic blog.
  • No requirements for the content. This is for people in the community (and others) to write about whatever they're interested in. If you want a place to write those short statistical oddities you've been posting to tumblr; if you want a place to write those not-quite-essays you've been posting to facebook; if you want a place to try out writing full blog posts; if you wish, you can absolutely do that here.
  • I expect we'll have some basic norms of decency. I've not started the discussion within the Sunshine Regiment on what these will be yet, but once we've had a conversation we'll open it up to input from the community, and I'll make sure to publish clearly both the norms and info on what happens when someone breaks a norm.
Replies from: Habryka
comment by Habryka · 2017-09-17T19:15:20.706Z · LW(p) · GW(p)

Apparently Ben and I responded to this at the same time. We seem to have mostly said the same things, so we are apparently fairly in sync.

comment by Habryka · 2017-09-17T19:13:51.255Z · LW(p) · GW(p)

"Can anyone make a personal page? Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)? Can a user get kicked off for inappropriate content (whatever that means)?"

Current answer to all of those is:

I don't have a plan for that yet; let's figure it out as we run into that problem. For now, having too much traffic or content on the site seems like a less important error mode, even if that content is bad, as long as it doesn't clog up the attention of everyone else.

I would probably suggest warning and eventually banning people who repeatedly try to bring highly controversial politics onto the site, or who repeatedly act in bad faith or taste, so I don't think we want to leave those personal pages fully unmoderated. But the moderation threshold should be a good bit higher than on the main page. No other constraints on content for now.

Replies from: ChristianKl
comment by ChristianKl · 2017-09-20T13:09:04.571Z · LW(p) · GW(p)

When deciding whether to publish content, it seems to me to be important whether that content is welcome or not. Unclarity about the policy can hold people back from contributing.

comment by gbear605 · 2017-09-15T04:23:12.014Z · LW(p) · GW(p)

I'd love to see the goal of an active rationalist hub achieved, and I think this might be a method that can lead to it.

Ironically, after looking at the post you made on lesserwrong that combines various Facebook posts, Eliezer unknowingly demonstrates the exact issue: "because of that thing I wrote on FB somewhere". In one of his old LW posts, he would have linked to it. Instead, the explanation is missing for those who aren't up to date on his entire FB feed.

Thanks for the work that you've put into this.

Replies from: Benito, philh
comment by Ben Pace (Benito) · 2017-09-15T04:56:07.532Z · LW(p) · GW(p)

We've actually talked a bit with Eliezer about importing his past and future facebook and tumblr essays to LW 2.0, and I think this is a plausible thing we'll do after launch. I think it will be good to have his essays be more linkable and searchable (and the people I've said this to tend to excitedly agree with me on this point).

(I'm Ben Pace, the other guy working full time on LW 2.0)

Replies from: ingres
comment by namespace (ingres) · 2017-09-15T21:53:44.598Z · LW(p) · GW(p)

Please do this. This alone would be enough to get me to use and link LW 2.0, at least to read stuff on it.

UPDATE (Fri Sep 15 14:56:28 PDT 2017): I'll put my money where my mouth is. If the LW 2.0 team uploads at least 15 pieces of content authored by EY, of at least one paragraph each, from Facebook, I'll donate 20 dollars to the project.

Preferably in a way where I can individually link them, but just dumping them on a public web page would also be acceptable in strict terms of this pledge.

comment by philh · 2017-09-15T12:01:24.598Z · LW(p) · GW(p)

(As it happens, that particular post ("why you absolutely need 4 layers of conversation in order to have real progress") was un-blackholed by Alyssa Vance: https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/)

comment by Kaj_Sotala · 2017-09-15T17:25:13.654Z · LW(p) · GW(p)

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:

I notice that this picture doesn't seem to include link posts. Will those still exist?

Replies from: Raemon
comment by Raemon · 2017-09-15T18:27:16.492Z · LW(p) · GW(p)

We have link post functionality, but I think we're trying to shift away from it and instead more directly solve the problem of people posting to other blogs (both by making it a better experience to post things here in your personal section, and to make it possible to post things to your blog that are auto-imported into LW).

Replies from: ingres
comment by namespace (ingres) · 2017-09-15T21:49:20.180Z · LW(p) · GW(p)

and to make it possible to post things to your blog that are auto-imported into LW

What kind of technical implementation are you looking at for this?

Replies from: Habryka, Raemon
comment by Habryka · 2017-09-16T23:41:13.582Z · LW(p) · GW(p)

This already exists! You can see an example of that with Elizabeth's blog "Aceso Under Glass" here:

https://www.lesserwrong.com/posts/mjneyoZjyk9oC5ocA/epistemic-spot-check-a-guide-to-better-movement-todd

We set it up so that Elizabeth has a tag on her WordPress blog such that whenever she adds something to that tag, it automatically gets crossposted to LessWrong. We can do this with arbitrary RSS feeds, as long as the feeds export the full HTML of the post.
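
For anyone curious, a minimal sketch of that kind of pipeline, assuming the feedparser library; save_post() and the feed URL are hypothetical stand-ins for the LW-side import step.

```python
# Pull full-content entries from an RSS feed and hand them to the importer.
import feedparser

FEED_URL = "https://example-blog.com/tag/lesswrong/feed/"  # placeholder

def save_post(title, body_html, source_url):
    # Stand-in for the real LW-side import step (hypothetical).
    print("imported:", title, "from", source_url)

def crosspost(feed_url):
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        # Full-content feeds carry the complete post HTML in `content`;
        # summary-only feeds (unusable for this scheme) have only `summary`.
        if not entry.get("content"):
            continue
        save_post(title=entry.title,
                  body_html=entry.content[0].value,
                  source_url=entry.link)

crosspost(FEED_URL)
```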

comment by Raemon · 2017-09-16T06:58:42.523Z · LW(p) · GW(p)

Habryka knows that better than I, I just know that it's in the works.

comment by BrassLion · 2017-10-11T21:17:47.615Z · LW(p) · GW(p)

I will say that lesserwrong is already useful to me, and I'm poking around reading a few things. I haven't been on LessWrong (this site) in a long time before just now, and only got here because I was wondering where this "LesserWrong" site came from. So, at the very least, your efforts are reaching people like me who often read, and sometimes change their behavior based on posts, but rarely post themselves. Thanks for all the work you did -- the UX end of the new site is much, much better.

comment by AFinerGrain_duplicate0.4555006182262571 · 2017-10-03T00:45:06.199Z · LW(p) · GW(p)

I've always been halfway interested in LessWrong. SlateStar, Robin Hanson, and Bryan Caplan have been favorite reading for a very long time. But every once in a while I'd have a look at LessWrong, read something, and forget about it for months at a time.

After the rework I find this place much more appealing. I created a profile and I'm even commenting. I hope one day I can contribute. But honestly, I feel 200% better about just browsing and reading.

Great job.

comment by Craig_Heldreth · 2017-09-17T02:27:52.910Z · LW(p) · GW(p)

What would make you personally use the new LessWrong?

Quality content. Quality content. And quality content.

Is there any specific feature that would make you want to use it?

The features which I would most like to see:

Wiki containing all or at least most of the jargon.

Rationality quotations all in one file alphabetically ordered by author of the quote.

Book reviews and topical reading lists.

Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas ever did this, but the tradition of the scholastics was always to aim for this pie in the sky.) It might be interesting to see what an experienced book editor would advise doing with this material.

Everything I would want to not see has been covered by yourself or others in this thread.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T09:32:00.063Z · LW(p) · GW(p)

Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas ever did this, but the tradition of the scholastics was always to aim for this pie in the sky.) It might be interesting to see what an experienced book editor would advise doing with this material.

Doesn't Rationality: From AI to Zombies achieve this already?

Replies from: ingres
comment by namespace (ingres) · 2017-09-17T13:46:37.236Z · LW(p) · GW(p)

Rat:A-Z is like...a slight improvement over EY's first draft of the sequences. I think when Craig says condensed he has much more substantial editing in mind.

Replies from: Benito
comment by Ben Pace (Benito) · 2017-09-17T19:06:45.138Z · LW(p) · GW(p)

FYI R:AZ is shorter than The Sequences by a factor of 2, which I think is a substantial improvement. Not that it couldn't be shorter still ;-)

Replies from: gjm, ingres
comment by gjm · 2017-09-19T11:14:04.611Z · LW(p) · GW(p)

How much of that is selection (omitting whole articles) and how much is condensation (making individual articles shorter)?

Replies from: Benito
comment by Ben Pace (Benito) · 2017-09-19T20:29:00.714Z · LW(p) · GW(p)

I don't know for sure; my guess is 80/20. Rob wrote some great introductions that give more context, but mostly the remaining posts are written the same (I think).

comment by namespace (ingres) · 2017-09-17T20:41:54.563Z · LW(p) · GW(p)

Oh huh, TIL. Thanks!

comment by DragonGod · 2017-09-16T17:08:02.203Z · LW(p) · GW(p)

I've often faced frustration (I access LW from mobile) due to accidentally hitting the "close" button -- it is often not visible when typing in portrait mode (my phone can't show the comment while typing in landscape, and I'm used to the portrait keyboard) -- resulting in me losing the entire comment. This is very demotivating and quite frustrating. I hope that this is not a problem in LessWrong 2.0, and hope that functionality for saving drafts of comments is added.

Replies from: Habryka
comment by Habryka · 2017-09-16T23:42:13.090Z · LW(p) · GW(p)

Yeah, the design of the commenting UI is sufficiently different, and more optimized for mobile, that I expect this problem to be gone. That said, we are still having some problems with our editor on mobile, and it will take a bit to sort that out.

Replies from: DragonGod
comment by DragonGod · 2017-09-17T01:10:21.102Z · LW(p) · GW(p)

Thanks. Even if it's no longer a problem, I think saving drafts of comments (if it's not too big a headache to add) would be a nice improvement.

comment by [deleted] · 2017-09-16T16:17:57.748Z · LW(p) · GW(p)

Two things I'd like to see:

1) Some sort of "example-pedia" where, in addition to some sort of glossary, we're able to crowd-source examples of the concepts to build understanding. I think examples continue to be in short supply, and that's a large understanding gap, especially when we deal with concepts unfamiliar to most people.

2) Something similar to Arbital's hover-definitions, or a real-time searchable glossary that's easily available.

I think the above two things could be very useful features, given the large swath of topics we like to discuss, from cognitive psych to decision theory, to help people more invested in one area more easily swap to reading stuff in another area.

Replies from: Habryka, DragonGod
comment by Habryka · 2017-09-16T23:59:18.680Z · LW(p) · GW(p)

1) I think this would be great, but is also really hard. I feel like you would need to build a whole wiki-structure with conflict resolution and moderation norms and collaborative editing features to achieve that kind of thing. But who knows, there might be an elegant and simple implementation that would work that I haven't thought of.

2) Arbital-style greenlinks are in the works and should definitely exist. For now they would only do the summary and glossary thing when you link to LW posts, but we can probably come up with a way of crowdsourcing more definitions of stuff without needing to create whole posts for it. Open to design suggestions here.

Replies from: DragonGod, None
comment by DragonGod · 2017-09-17T01:21:03.410Z · LW(p) · GW(p)

1) I think this would be great, but is also really hard. I feel like you would need to build a whole wiki-structure with conflict resolution and moderation norms and collaborative editing features to achieve that kind of thing. But who knows, there might be an elegant and simple implementation that would work that I haven't thought of.

I think the wiki is an integral feature of LW, such that if the new site lacks a Wiki, I'll resist moving to the new site.

Replies from: Habryka
comment by Habryka · 2017-09-17T02:48:45.638Z · LW(p) · GW(p)

We are planning to leave the wiki up, and probably restyle it at some point, so it will not be gone. User accounts will no longer be shared though, for the foreseeable future, which I don't think will be too much of an issue.

But I don't yet have a model of how to make the wiki in general work well. The current wiki is definitely useful, but I feel that its main use has been the creation of sequences and collections of posts, which is now integrated more deeply into the site via the sequences functionality.

Replies from: Wei_Dai, DragonGod
comment by Wei Dai (Wei_Dai) · 2017-09-17T16:56:53.270Z · LW(p) · GW(p)

The wiki is also useful for defining basic concepts used by this community, and linking to them in posts and comments when you think some of your readers might not be familiar with them. It might also be helpful for outreach, for example our wiki page for decision theory shows up in the first page of Google results for "decision theory".

Replies from: Habryka
comment by Habryka · 2017-09-17T19:20:45.815Z · LW(p) · GW(p)

Oh, that's cool! I didn't know that.

This does update me towards the wiki being important. I just pinged Malo on whether I can get access to the LessWrong wiki analytics, so that I can look a bit more into this.

comment by DragonGod · 2017-09-17T09:26:50.271Z · LW(p) · GW(p)

Several people have suggested pmwiki; perhaps you should give it a try?

comment by [deleted] · 2017-09-17T00:59:10.541Z · LW(p) · GW(p)

The easiest method for 1, I think, would just be to have a section under every item in the glossary called "Examples" and trust the community to put in good ones and delete bad ones.

For 2, I was thinking about something like a page running Algolia instant search, which would quickly find the term you want, bolded, with its accompanying definition after it, dictionary-esque.

comment by DragonGod · 2017-09-16T17:15:28.068Z · LW(p) · GW(p)

Doesn't the wiki already achieve (1) to a satisfactory level?

I support (2).

comment by DragonGod · 2017-09-15T15:45:31.419Z · LW(p) · GW(p)

This sounds very promising. The UI looks like a site from 2017 as well (as opposed to the previous 2008 feel). The design is very aesthetically pleasing.

I'm very excited about the personal blog feature (posting our articles to our own page is basically like having a blog).

How long would the open beta last?

Replies from: Manfred
comment by Manfred · 2017-09-15T19:49:32.738Z · LW(p) · GW(p)

The only thing I don't like about the "2017 feel" is that it sometimes feels like you're just adrift in the text, with no landmarks. Sometimes you just want guides to the eye, and landmarks to keep track of how far you've read!

Replies from: DragonGod
comment by DragonGod · 2017-09-16T15:01:02.047Z · LW(p) · GW(p)

I haven't run into that problem, but I'm reading from my phone, and Chrome tracks where I've scrolled to.

comment by ChristianKl · 2017-09-20T13:04:06.005Z · LW(p) · GW(p)

I think one big problem with using the Reddit codebase was that, while there was a lot of additional code development upstream, we couldn't simply copy the code over, since adapting the code to be about LW required editing the source itself.

Given that you have now published the code under an MIT license, I ask myself whether it would be good to have a separate open source project for the basic engine behind the website, one that can be used by different communities.

The effective altruism forum also used a Reddit fork and might benefit from using the basic engine behind the website as well. If there's a good, openly licensed engine, I would expect it to be used by additional projects, and as a result more people would contribute to the code.

Have you thought about such a setup? If so, why do you believe that having one GitHub project for LessWrong 2.0 is the right decision?

comment by Waltus · 2017-09-26T22:05:35.137Z · LW(p) · GW(p)

I would favor the option to hide comments' scores while retaining their resultant organization (best/popular/controversial/etc). I have the sense that I'm biased toward comments with higher scores even before I've read them, which is counterproductive to my ability to evaluate arguments on their own merit.

comment by NancyLebovitz · 2017-09-20T13:01:42.484Z · LW(p) · GW(p)

LW2.0 doesn't seem to be live yet, but when it is, will I be able to use my 1.0 username and password?

comment by DragonGod · 2017-09-20T08:01:25.180Z · LW(p) · GW(p)

On StackExchange, upvotes and downvotes from accounts with less than 15 rep are recorded but don't count (presumably until the account gains more than 15 rep). LW may decide to set its bar lower (10 rep?) or higher (>= 20 rep?), but I think the core insight is very good and would be a significant improvement if applied to LW.
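
A sketch of that rule (threshold and names invented): every vote is recorded, but only votes cast by accounts above the reputation bar count toward the score, so a recorded vote starts counting as soon as its caster qualifies.

```python
# Record all votes; count only those from voters above the reputation bar.
REP_THRESHOLD = 15  # the site could set this lower or higher, as noted above

def score(post_votes, reputation):
    """post_votes: list of (voter, value); reputation: voter -> current rep."""
    return sum(value for voter, value in post_votes
               if reputation.get(voter, 0) >= REP_THRESHOLD)

votes = [("veteran", +1), ("newbie", +1)]
rep = {"veteran": 120, "newbie": 3}
print(score(votes, rep))  # 1 now; recounts to 2 once "newbie" reaches 15
```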

comment by Ben Pace (Benito) · 2017-09-19T02:52:33.693Z · LW(p) · GW(p)

error

comment by MaryCh · 2017-09-15T21:16:06.881Z · LW(p) · GW(p)

And the "Recent on rationality blogs" button will work again?

comment by username2 · 2017-09-18T17:10:47.603Z · LW(p) · GW(p)

People sure like to talk about meta topics.