Raemon's Shortform Feed

post by Raemon · 2017-12-30T21:09:29.890Z · score: 46 (None votes) · LW · GW · 101 comments

This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:

  1. don't feel ready to be written up as a full post, or
  2. would be made worse by the process of writing them up (i.e. end up longer than they need to be)

I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.

101 comments

comment by Raemon · 2017-12-31T08:02:54.527Z · score: 49 (None votes) · LW · GW

Something struck me recently, as I watched Kubo and Coco - two animated movies that both deal with death, and highlight music and storytelling as mechanisms by which we can preserve people after they die.

Kubo begins "Don't blink - if you blink for even an instant, if you miss a single thing, our hero will perish." This is not because there is something "important" that happens quickly that you might miss. Maybe there is, but it's not the point. The point is that Kubo is telling a story about people. Those people are now dead. And insofar as those people are able to be kept alive, it is by preserving as much of their personhood as possible - by remembering as much as possible from their life.

This is generally how I think about death.

Cryonics is an attempt at the ultimate form of preserving someone's pattern forever, but in a world pre-cryonics, the best you can reasonably hope for is for people to preserve you so thoroughly in story that a young person from the next generation can hear the story, and palpably feel the underlying character, rich with inner life. Can see the person so clearly that he or she comes to live inside them.

Realistically, this means a person degrades with each generation. Their pattern is gradually distorted. Eventually it is forgotten.

Maybe this is horrendously unsatisfying - it should be. Stories are not a very high-fidelity storage device. Most of what made the person an agent is gone.

But not necessarily - if you choose to not just remember humorous anecdotes about a person, but to remember what they cared about, you can be a channel by which that person continues to act upon the world. Someone recently pointed this out as a concrete reason to respect the wishes of the dead - as long as there are people enacting that person's will, there is some small way in which they meaningfully still exist.

This is part of how I chose to handle the Solstices that I lead myself: Little Echo, Origin of Stories, and Endless Lights are stories/songs touching on this theme. They don't work for everyone but they work for me. It's an unsatisfying concept but it's what we have.

This is what struck me:

I know no stories of my great great grandparents.

I do know stories of ancient generals and philosophers and artists and other famous people - people who lived such a captivating life that people wrote biographies about them.

I know stories about my grandmothers. I know stories about my great grandmothers. But one step beyond that... nothing. I never knew my great great grandparents, never had reason to ask about them. And I think it is probably too late - I think I could perhaps collect some stories of great-great-grandparents on my father's side. On my mother's... it's possible I could track them down but I doubt it.

And as things go, this isn't hugely upsetting to me. These are people I never met, and in all honesty it seems less pressing to preserve them than to cultivate the relationships I have in the near and now, and to save what lives I can who have yet to die in the first, physical fashion.

But, these are people who are dead forever. When fades at last the last lit sun, there will not be anyone to remember them.

And that's sad.

comment by weft · 2017-12-31T17:02:31.565Z · score: 20 (None votes) · LW · GW

One of the things that makes Realistically Probably Not Having Kids sad is that I'm pretty much the last of the line on my Dad's side. And I DO know stories (not much, but some) of my great-great-grandparents. Sure, I can write them down, so they exist SOMEWHERE. But in reality, when I die, that line and those stories die with me.

comment by Raemon · 2017-12-31T17:51:40.038Z · score: 24 (None votes) · LW · GW

I wanted to just reply something like "<3" and then became self-conscious of whether that was appropriate for LW.

comment by habryka (habryka4) · 2018-01-01T01:51:15.312Z · score: 11 (None votes) · LW · GW

Seems good to me.

comment by Raemon · 2018-01-01T01:52:47.293Z · score: 21 (None votes) · LW · GW

In particular, I think if we make the front-page comments section filtered by "curated/frontpage/community" (i.e. you only see community-blog comments on the frontpage if your frontpage is set to community), then I'd feel more comfortable posting comments like "<3", which feels correct to me.

comment by Raemon · 2018-01-27T07:34:15.539Z · score: 40 (None votes) · LW · GW

Conversation with Andrew Critch today, in light of a lot of the nonprofit legal work he's been involved with lately. I thought it was worth writing up:

"I've gained a lot of respect for the law in the last few years. Like, a lot of laws make a lot more sense than you'd think. I actually think looking into the IRS codes would actually be instructive in designing systems to align potentially unfriendly agents."

I said "Huh. How surprised are you by this? And curious if your brain was doing one particular pattern a few years ago that you can now see as wrong?"

"I think mostly the laws that were promoted to my attention were especially stupid, because that's what was worth telling outrage stories about. Also, in middle school I developed this general hatred for stupid rules that didn't make any sense and generalized this to 'people in power make stupid rules', or something. But, actually, maybe middle school teachers are just particularly bad at making rules. Most of the IRS tax code has seemed pretty reasonable to me."

comment by Raemon · 2018-01-18T20:30:00.312Z · score: 36 (None votes) · LW · GW

More in neat/scary things Ray noticed about himself.

I set aside this week to learn about Machine Learning, because it seemed like an important thing to understand. One thing I knew, going in, is that I had a self-image as a "non technical person." (Or at least, non-technical relative to rationality-folk). I'm the community/ritual guy, who happens to have specialized in web development as my day job but that's something I did out of necessity rather than a deep love.

So part of the point of this week was to "get over myself, and start being the sort of person who can learn technical things in domains I'm not already familiar with."

And that went pretty fine.

As it turned out, after talking to some folk I ended up deciding that re-learning Calculus was the right thing to do this week. I'd learned it in college, but not in a way that connected to anything and gave me a sense of its usefulness.

And it turned out I had a separate image of myself as a "person who doesn't know Calculus", in addition to "not a technical person". This was fairly easy to overcome since I had already given myself a bunch of space to explore and change this week, and I'd spent the past few months transitioning into being ready for it. But if this had been at an earlier stage of my life and if I hadn't carved out a week for it, it would have been harder to overcome.

Man. Identities. Keep that shit small yo.

comment by Pamela Fox (pamelafox) · 2018-06-30T22:12:24.411Z · score: 3 (None votes) · LW · GW

I went on a 4-month Buddhist retreat, and one week covered "Self-images". We received homework that week to journal our self-images - all of them. Every time I felt some sense of self, like "The self that prides itself on being clean" or "The self that's playful and giggly", I'd write it down in my journal. I ended up filling 20 pages over a month period, and learning so much about the many selves my mind/body were trying to convey to the world. I also discovered how often two self-images would compete with each other. Observing the self-images helped them to be less strongly attached.

It sounds like you discovered that yourself this week. You might find such an exercise useful for discovering more of that.

comment by Raemon · 2018-01-07T05:12:24.188Z · score: 36 (None votes) · LW · GW

So there was a drought of content during Christmas break, and now... abruptly... I actually feel like there's too much content on LW. I find myself skimming down past the "new posts" section because it's hard to tell what's good and what's not and it's a bit of an investment to click and find out.

Instead I just read the comments, to find out where interesting discussion is.

Now, part of that is because the front page makes it easier to read comments than posts. And that's fixable. But I think, ultimately, the deeper issue is with the main unit-of-contribution being The Essay.

A few months ago, mr-hire said (on writing that provokes comments)

Ideas should become comments, comments should become conversations, conversations should become blog posts, blog posts should become books. Test your ideas at every stage to make sure you're writing something that will have an impact.

This seems basically right to me.

In addition to comments working as an early proving ground for an idea's merit, comments make it easier to focus on the idea, instead of getting wrapped up in writing something Good™.

I notice essays on the front page starting with flowery words and generally trying to justify themselves as an essay, when all they actually needed was to be a couple short paragraphs. Sometimes even a sentence.

So I think it might be better if the default way of contributing to LW was via comments (maybe using something shaped sort of like this feed), which then appears on the front page, and if you end up writing a comment that's basically an essay, then you can turn it into an essay later if you want.

comment by Raemon · 2018-01-07T06:18:23.782Z · score: 21 (None votes) · LW · GW

Relatedly, though, I kinda want aspiring writers on LW to read this Scott Alexander Post on Nonfiction Writing.

comment by Hazard · 2018-02-04T14:32:44.449Z · score: 11 (None votes) · LW · GW

I ended up back here because I just wrote a short post that was an idea, and then went, "Hmmm, didn't Raemon do a Short Form feed thing? How did that go?"

It might be nice if one could pin their short form feed to their profile.

comment by Raemon · 2018-02-04T22:49:36.388Z · score: 17 (None votes) · LW · GW

Yeah, I'm hoping in the not-too-distant future we can just make shortform feeds an official part of less wrong. (Although, I suppose we may also want users to be able to sticky their own posts on their profile page, for various reasons, and this would also enable anyone who wants such a feed to create one, while also being able to create other things like "important things you know about me if you're going to read my posts" or whatever.)

comment by Raemon · 2018-02-09T18:55:08.633Z · score: 30 (None votes) · LW · GW

So, AFAICT, rational!Animorphs is the closest thing CFAR has to publicly available documentation. (The characters do a lot of focusing, hypothesis generation-and-pruning. Also, I just got to the Circling Chapter)

I don't think I'd have noticed most of it if I wasn't already familiar with the CFAR material though, so not sure how helpful it is. If someone has an annotated "this chapter includes decent examples of Technique/Skill X, and examples of characters notably failing at Failure Mode Y", that might be handy.

comment by Raemon · 2018-06-25T23:39:29.991Z · score: 29 (None votes) · LW · GW

Periodically I describe a particular problem with the rationalsphere with the programmer metaphor of:

"For several years, CFAR took the main LW Sequences Git Repo and forked it into a private branch, then layered all sorts of new commits, ran with some assumptions, and tweaked around some of the legacy code a bit. This was all done in private organizations, or in-person conversation, or at best, on hard-to-follow-and-link-to-threads on Facebook.

"And now, there's a massive series of git-merge conflicts, as important concepts from CFAR attempt to get merged back into the original LessWrong branch. And people are going, like 'what the hell is focusing and circling?'"

And this points towards an important thing about _why_ I think it's important to keep people actually writing down and publishing their longform thoughts (esp the people who are working in private organizations).

And I'm not sure how to actually really convey it properly _without_ the programming metaphor. (Or, I suppose I could. Maybe if I simply remove the first sentence the description still works. But I feel like the first sentence does a lot of important work in communicating it clearly.)

We have enough programmers that I can basically get away with it anyway, but it'd be nice to not have to rely on that.

comment by Raemon · 2018-07-15T15:05:12.763Z · score: 24 (None votes) · LW · GW

I disagree with this particular theunitofcaring post "what would you do with 20 billion dollars?", and I think this is possibly the only area where I disagree with theunitofcaring's overall philosophy, so it seemed worth mentioning. (This crops up occasionally in her other posts but it is most clear cut here.)

I think if you got 20 billion dollars and didn't want to think too hard about what to do with it, donating to OpenPhilanthropy project is a pretty decent fallback option.

But my overall take on how to handle the EA funding landscape has changed a bit in the past few years. Some things that theunitofcaring doesn't mention here seem to at least warrant thinking about:

[Each of these has a bit of a citation-needed, that I recall hearing or reading in reliable sounding places, but correct me if I'm wrong or out of date]

1) OpenPhil has (at least? I can't find more recent data) 8 billion dollars, and makes something like 500 million a year in investment returns. They are currently able to give 100 million away a year.

They're working on building more capacity so they can give more. But for the foreseeable future, they _can't_ actually spend more money than they are making.
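Taking the cited figures at face value (they carry the citation-needed caveat above, so treat them as rough), the arithmetic behind "they can't spend more than they're making" can be sketched:

```python
# Rough projection from the figures cited above (approximate, non-compounding):
# an $8B endowment, ~$500M/year in investment returns, ~$100M/year granted.
fund = 8_000_000_000
annual_returns = 500_000_000
granted_per_year = 100_000_000

for year in range(5):
    fund += annual_returns - granted_per_year

# The endowment grows by roughly $400M each year despite the grants.
print(fund)
```

Even under these simple assumptions, five years of granting at current capacity leaves the fund larger than it started.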

2) OpenPhil doesn't want to be responsible for more than 50% of an org's budget, because being fully dependent on a single foundation creates a distorted relationship (where the org feels somewhat constrained by the foundation's own goals/whims/models).

If you have a diverse funding base, you can just do the thing you think is best. If you have a small funder base, if you aren't perfectly aligned with the funder, there is pressure to shift towards projects that they think are best (even if the funder is trying _not_ to put such pressure on you)

I'm not sure how big a concern this _should_ be, but AFAICT it's currently their policy.

This means there's a fair bit of value, if you had $20 billion, to setting up an alternative foundation to OpenPhil, just from the perspective of making sure the best orgs can _actually_ get fully funded.

3) OpenPhil has high standards for where to donate.

This is good.

But the fact that they have 8 billion, make another 500 million a year and spend down only around 100 million, means that the funding niche that actually needs filling right now is not more-of-OpenPhils-strategy.

There's a weird situation in the current landscape where it feels like money is unlimited... but there are still EA-relevant projects that need money. Typically ones that are younger, where data is more scarce.

Figuring out which of those actually deserve money is hard (esp. without creating weird incentives down the line, where _anyone_ with a half-baked project can show up and get your money). But this seems more like the domain where another major funder would be valuable.

...

Now, this is all hypothetical (theunitofcaring doesn't have 20 billion and neither do I). But this does point at an important shift on how to think about donating, if you're a small-to-medium sized donor.

A while ago I wrote "Earning to Give is Costly Signalling". Power laws mean that the richest people dwarf the donations of smaller-time donors. Therefore, most of the value of EA donors is convincing rich people to give (and think) in an EA fashion.

Now I think it's weirder than that.

Given that EA collectively has access to billions of dollars (plus smaller-but-still-larger-than-we-know-what-to-do-with donor pools from a few other multi-millionaires)...

If you're a small-to-medium donor, roles that make sense to play are:

  • Provide a costly signal for _new_ charities that aren't already on OpenPhil, BERI et al's radar.
  • Help seed-fund new projects that you have a lot of local information on, that you think make a credible case for being high impact
  • Donate to existing orgs, to help fill out the 50% funding gap (this is still partly about making sure they get funded, and also a sort of continued costly signal of the org's value to the larger funders). Many orgs also have tax-relevant status where it matters what proportion of their budget comes from private vs public donations, so making sure they have a diverse donor base is helpful.

This last option is basically EA business as usual, which is still important, but it's only one of several possible strategies that should be under consideration.

I also think it's important to consider using the money for your own personal development, or the development of people you know who you think could do well. Hire a personal trainer, or a tutor. Save up runway so that you can afford to take time off to think, and plan, or start a project.

comment by Raemon · 2018-04-18T20:47:20.253Z · score: 24 (None votes) · LW · GW

Issues with Upvoting/Downvoting

We've talked in the past about making it so that if you have Karma Power 6, you can choose to give someone anywhere from 1 to 6 karma.

Upvoting

I think this is an okay solution, but I also think all meaningful upvotes basically cluster into two choices:

A. "I think this person just did a good thing I want to positively reinforce"

B. "I think this person did a thing important enough that everyone should pay attention to it."

For A, I don't think it obviously matters that you award more than 1 karma, and definitely never more than 3 karma. The karma should be mostly symbolic. For B, I'd almost always want to award them maximum karma. The choice of "well, do they really deserve 1, 2 or 3 karma for their pat-on-the-head?" doesn't seem like a choice we should be forcing people to make.

The value in giving 1, 2 or 3 karma for a "small social reinforcement" is mostly about communicating "Social rewards from longtime trusted community members should feel better to get than social rewards from random newbies." I'm not sure how strong a signal this is.

For "Pay Attention To This" upvotes, similarly, if you have 6 karma power, I don't think it's that interesting a choice to assign 4, 5 or 6.

And, you know, Choices Are Bad [LW · GW].

So, I currently support a paradigm where you just have Big Upvote and Small Upvote. I'm neutral between "small upvote is always 1" and "small upvote grows from 1 to 3 as you gain karma".
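As a sketch (hypothetical names and numbers, not the site's actual implementation), the two-button scheme might look like:

```python
def upvote_weight(kind, karma_power, small_scales=False):
    """Vote weight under a hypothetical Big/Small upvote scheme.

    kind: "small" (social reinforcement) or "big" ("pay attention to this").
    karma_power: the voter's maximum vote weight (e.g. 6).
    small_scales: if True, small upvotes grow from 1 to 3 with karma power.
    """
    if kind == "big":
        return karma_power  # always award the maximum
    if small_scales:
        return min(karma_power, 3)  # grows from 1 up to a cap of 3
    return 1  # small upvote stays symbolic


# e.g. a karma-power-6 user: big upvote = 6, small upvote = 1 (or 3 if scaling)
```

The point of the sketch is that the voter only ever faces a two-way choice; the system, not the voter, picks the exact number.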

This feels elegant. The problem is downvoting.

Downvoting

When downvoting, there's a few different things I might be wanting to do (note: I don't endorse all of these, this is just what my S1 is wanting to do).

A. This person made a small mistake, and should be mildly socially punished

B. This person was deeply wrong and should be heavily punished

C. This post is getting too much attention relative to how good it is. It's at 25 karma. I want to try to bring it to around 15 or something.

D. This content should not be on the site (for any one of a number of reasons), should not show up on the frontpage (meaning the karma should be at most zero), or the comment should be autocollapsed (karma should be -5)

When a newcomer shows up and does something I don't like, my natural instinct is to try to keep their comment at 0 (which feels like the right level of "your thing was bad", but in a way that feels more like an awkward silence than a slap in the face). I definitely need to be able to downvote by less than 6. The problem is that as a user gains karma power, the amount I need to downvote just scales linearly.
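Use-cases C and D are about reaching a target score rather than applying a fixed weight, which is exactly where the Big/Small pairing breaks down. A sketch of that "downvote toward a target" logic (hypothetical, not a real site feature):

```python
def downvote_weight(current_score, target_score, karma_power):
    """Downvote weight needed to push a comment toward a target score,
    capped by the voter's karma power.

    Illustrates the asymmetry: the "right" downvote size depends on the
    comment's current score, not on anything about the voter.
    """
    needed = max(current_score - target_score, 1)  # at least a minimal downvote
    return min(needed, karma_power)


# Keeping a newcomer's score-1 comment near 0 needs only -1,
# even from a voter whose full karma power is 6.
```

Note that a single karma-power-6 voter can't get a 25-karma post down to 15 alone; the cap binds, which is part of why two fixed buttons don't cover the downvote cases.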

This is all incompatible with the simple "Big Vote, Small Vote" paradigm. Which feels sad from an elegance/symmetry perspective.

So, that's a thing I'm thinking about.

comment by Wei_Dai · 2018-04-19T03:24:25.925Z · score: 11 (None votes) · LW · GW

There's another issue with voting, which is that I sometimes find a comment or post on the LW1 part of the site that I want to vote up or down, but I can't because my 5 points of karma power would totally mess up the score of that comment/post in relation to its neighbors. I haven't mentioned this before because I thought you might already have a plan to address that problem, or at worst I can wait until the variable upvote/downvote feature comes in. But if you didn't have a specific plan for that and adopted "small upvote grows from 1 to 3 as you gain karma" then the problem wouldn't get solved.

Also, is there an issue tracker for LW2? I wanted to check it to see if there's an existing plan to address the above problem, but couldn't find it through Google, from the About page, or by typing in "issue tracker" in the top right search box. There's the old issue tracker at https://github.com/tricycle/lesswrong/issues but it doesn't look like that's being used anymore?

ETA: I found the issue tracker at https://github.com/Discordius/Lesswrong2/issues by randomly coming across a comment that linked to it. I'm still not sure how someone is supposed to find it.

comment by Elo · 2018-04-18T21:19:19.510Z · score: 7 (None votes) · LW · GW

How do I "small up vote" for "keep thinking about it"?

comment by Raemon · 2018-04-18T21:56:10.362Z · score: 8 (None votes) · LW · GW

For now, I guess just do the thing you just did? :)

comment by Raemon · 2018-04-18T21:57:27.397Z · score: 8 (None votes) · LW · GW

(that said I'd be interested in an unpacked version of your comment, sounded like the subtext was something like "this line of thinking is pointing somewhere useful but it doesn't seem like you're done thinking about it". If that's not the case, curious what you meant. If it is the case, curious about more detailed concerns about what would make for good or bad implementations of this)

comment by Elo · 2018-04-19T06:25:18.450Z · score: 7 (None votes) · LW · GW

It is clear that more thought is needed for a satisfactory answer here and I would encourage you to keep seeking a satisfactory solution.

comment by gwillen · 2018-04-19T04:28:42.821Z · score: 6 (None votes) · LW · GW

I liked the idea I think you mentioned in an earlier thread about this, where each click increases vote weight by one. It's conceptually very simple, which I think is a good property for a UI. It does involve more clicks to apply more voting power, but that doesn't seem bad to me. How often does one need to give something the maximum amount of votes, such that extra clicks are a problem? It seems to me this would tend to default to giving everyone the same voting power, but allow users with more karma to summon more voting power with very slightly more effort if they think it's warranted. That feels right to me.
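A minimal sketch of that click-to-strengthen interaction (assuming the simplest behavior: the weight just clamps at the voter's karma power):

```python
def vote_after_click(current_weight, karma_power):
    """Each click adds one point of vote weight, clamped at the voter's maximum."""
    return min(current_weight + 1, karma_power)


# Three clicks from a karma-power-6 user yields a weight of 3;
# further clicks past the maximum leave the vote at full weight.
```

This keeps the default behavior identical for everyone (one click, one point) while making extra voting power opt-in per vote.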

comment by TheWakalix · 2018-05-01T15:37:41.865Z · score: 6 (None votes) · LW · GW

If this is implemented, I think there should be a dot between the two vote buttons to reset the vote to 0.

comment by gwillen · 2018-04-19T04:30:47.537Z · score: 6 (None votes) · LW · GW

(A possible downside I see is that it might somehow do the opposite -- that voting will feel like something that is reinforced in a conditioning sense, so that users with more voting power will get more reinforcers since they do click->reward more times, and that this will actually give them a habit of wanting to apply the maximum vote more than they otherwise would because it feels satisfying to vote repeatedly. This isn't clearly a lot worse than the situation we have now, where you always vote maximum with no option.)

comment by Raemon · 2018-01-20T23:49:22.151Z · score: 22 (None votes) · LW · GW

I think learning-to-get-help is an important, often underdeveloped skill. You have to figure out what *can* be delegated. In many cases you may need to refactor your project such that it's in-principle possible to have people help you.

Some people I know have tried consciously developing it by taking turns being a helper/manager, i.e. spending a full day trying to get as much use out of another person as you can. (e.g. on Saturday, one person is the helper. The manager does the best they can to ask the helper for help... in ways that will actually help. On Sunday, they reverse.)

The goal is not just to get stuff done for a weekend, but to learn how to ask for help, to help, and to be helped.

(Some people I know did this for a full day, others did it for an hour. The people who did it for an hour said it didn't quite feel that useful. A person who did it for a full day said that an hour was nowhere near enough time to make it through the initial learning curve of "I don't even know what sort of things are useful to ask for help with.")

So, this is a thing I'm interested in trying.

I think it requires some existing trust and being able to work side-by-side, so I'm mostly extending a request/offer to do this for a weekend with people who already know me and live near me, but am curious if other people try it and get benefit out of it.

comment by Raemon · 2018-04-04T22:08:22.765Z · score: 20 (None votes) · LW · GW

Failure Modes of Archipelago

(epistemic status: off the cuff, maybe rewriting this as a post later. Haven't discussed this with other site admins)

In writing Towards Public Archipelago, I was hoping to solve a couple problems:

  • I want authors to be able to have the sort of conversational space that they actually want, to incentivize them to participate more
  • I want LW's culture to generally encourage people to grow. This means setting standards that are higher than what-people-do-by-default. But, people will disagree about what standards are actually good. So, having an overarching system whereby people can try out and opt into higher-level standards that they hold each other to seems better than fighting over what the overall standards of the site should be.

But, I've noticed an obvious failure mode. For Public Archipelago to work as described, you need someone who is:

  • willing to enforce rules
  • writes regularly, in a way that lends itself towards being a locus of conversation.

(In non-online spaces, you have a different issue, where you need someone who runs some kind of physical in-person space who is willing to enforce norms who is also capable of attracting people to their space)

I have a particular set of norms I'd like to encourage, but most of the posts I write that would warrant enforcing norms are about meta-stuff-re-Less-Wrong. And in those posts, I'm speaking as site admin, which I think makes it important for me to instead be enforcing a somewhat different set of norms with a higher emphasis on fairness.

(i.e. if site admins start deleting your comments on a post about what sort of norms a site should have, that can easily lead to some real bad chilling effects. I think this can work if you're very specific about what sort of conversation you want to have, and make your reasons clear, but there's a high risk of it spilling into other kinds of damaged trust that you didn't intend)

My vague impression is that most of the people who write posts that would benefit from some kind of norm-enforcing are somewhat averse to having to be a norm-enforcer.

Some people are willing to do both, but they are rare.

So the naive implementation of Public Archipelago doesn't work that well.

Problematic Solution #1: Subreddits

Several people suggested subforums as an alternative to author-centric Islands.

First, I think LW is still too small for this to make sense – I've seen premature subreddits kill a forum, because they divided everyone's attention and made it harder to find the interesting conversation.

Second, I don't think this accomplishes the same thing. Subforums are generally about topics, and the idea I'm focusing on here is norms. In an AI or Math subforum, are you allowed to ask newbie questions, or is the focus on advanced discussion? Are you allowed to criticize people harshly? Are you expected to put in a bunch of work to answer a question yourself before you ask it?

These are questions that don't go away just because you formed a subforum. Reasonable people will disagree on them. You might have five people who all want to talk about math, none of whom agree on all three of those questions. Someone has to decide what to enforce.

I'm very worried that if we try to solve this problem with subreddits, people will run into unintentional naming collisions where someone sets up a space with a generic name like "Math", but with one implicit set of answers to norm-questions, and then someone else wants to talk about math with a different set of answers, and they get into a frustrating fight over which forum should have the simplest name (or force all subforums to have oddly specific names, which still might not address all the nuances someone meant to convey).

For this reason, I think managing norms by author(s), or by individual-post makes more sense.

Problematic Solution #2: Cooperation with Admins

If a high-karma user sets their moderation-policy, they have an option to enable "I'm happy for admins to help enforce my policy." This allows people to have norms but outsource the enforcing of them.

We haven't officially tried to do this yet, but in the past month I've thought about how I'd respond in some situations (both on LW and elsewhere) where a user clearly wanted a particular policy to be respected, but where I disagreed with that policy, and/or thought the user's policy wasn't consistent enough for me to enforce it. At the very least, I wouldn't feel good about it.

I could resolve this with a simple "the author is always right" meta-policy, where even if an author seems (to me) to be wanting unfair or inconsistent things, I decide that giving authors control over their space is more important than being fair. This does seem reasonable-ish to me, at least in principle. I think it's good, in broader society, to have police who enforce laws even when they disagree with them. I think it's good, say, to have a federal government or UN or UniGov that enforces the right of individual islands to enforce their laws, and maybe this includes helping them do so.

But I think, at the very least, this requires a conversation with the author in question. I can't enforce a policy I don't understand, and I think policies that may seem simple-to-the-author will turn out to have lots of edge-cases.

The issue is that having that conversation is a fairly non-trivial-inconvenience, which I think will prevent most instances of admin-assisted-author-norms from coming to fruition.

Variant Solution #2B: Cooperation with delegated lieutenants

Instead of relying on admins to support your policy with a vaguely-associated halo of "official site power structure", people could delegate moderation to specific people they trust to understand their policy (either on a per-post or author-wide system).

This involves a chain-of-trust. (The site admins have to make an initial decision about who gains the power to moderate their posts, and if this also includes delegating moderation rights the admins also need to trust the person to choose good people to enforce a policy). But I think that's probably fine?
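The chain-of-trust could be sketched as simple graph reachability (a hypothetical data model, where `trust` maps each user to the users they delegate moderation rights to, and admins seed the top of the chain):

```python
def can_moderate(author, candidate, trust):
    """True if a chain of delegations leads from the author to the candidate.

    trust: dict mapping a user to the set of users they delegate
    moderation rights to. Delegation is transitive: a lieutenant's
    lieutenants also qualify.
    """
    frontier, seen = [author], {author}
    while frontier:
        user = frontier.pop()
        for delegate in trust.get(user, set()):
            if delegate == candidate:
                return True
            if delegate not in seen:
                seen.add(delegate)
                frontier.append(delegate)
    return False
```

So if Alice delegates to Bob and Bob delegates to Carol, Carol may moderate Alice's posts; whether transitivity is actually desirable is one of the judgment calls the admins would have to make.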

Variant Solution #2C: Shared / Open Source Norms

Part of the problem with enforcing norms is that you need to first think a bunch about what norms are even good for and which ones you want. This is a hugely non-trivial inconvenience.

A thing that could help this a bunch is to have people who think a lot about norms posting more about their thought process, and which norms they'd like to see enforced and why. People who are then interested in having norms enforced on their post, and maybe even willing to enforce those norms themselves, could have a starting point to describe which ones they care about.

comment by Ben Pace (Benito) · 2018-04-04T23:53:48.992Z · score: 14 (None votes) · LW · GW

Variant Solution #2D: Norm Groups (intersection of solutions 1 and 2B): There are groups of authors and lieutenants who enforce a single set of norms, you can join them, and they'll help enforce the norms on your posts too.

You can join the sunshine regiment, the strict-truth-team, the sufi-buddhist team, and you can start your own team, or you can just do what the current site does where you run your own norms on your post and there's no team.

This is like subreddits except more implicit - there's no page for 'all the posts under these norms', it's just a property of posts.

comment by clone of saturn · 2018-10-31T04:58:14.820Z · score: 13 (None votes) · LW · GW

Idea: moderation by tags. People (meaning users themselves, or mods) could tag comments with things like #newbie-question, #harsh-criticism, #joke, etc., then readers could filter out what they don't want to see.
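A minimal sketch of what that filtering might look like, in hypothetical Python (the tag names and data shapes here are illustrative, not anything LW actually implements):

```python
# Hypothetical sketch: hide comments carrying any tag the reader has muted.
def visible_comments(comments, muted_tags):
    """Return the comments that carry none of the reader's muted tags."""
    muted = set(muted_tags)
    return [c for c in comments if muted.isdisjoint(c.get("tags", []))]

comments = [
    {"id": 1, "text": "How do I start?", "tags": ["newbie-question"]},
    {"id": 2, "text": "This argument fails.", "tags": ["harsh-criticism"]},
    {"id": 3, "text": "Nice post!", "tags": []},
]
filtered = visible_comments(comments, ["harsh-criticism"])  # comments 1 and 3
```

Whether tags come from authors, mods, or readers, the filtering step itself stays this simple; the hard part is getting tags applied consistently.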

comment by Wei_Dai · 2018-04-18T06:39:10.740Z · score: 12 (None votes) · LW · GW

Is it just me, or are people not commenting nearly as much on LW2 as they used to on LW1? I think one of the goals of LW2 is to encourage experimentation with different norms, but these experiments impose a cost on commenters (who have to learn the new norms both declaratively and procedurally) without giving a clear immediate benefit, which might reduce the net incentive to comment even further. So it seems like before these experiments can start, we need to figure out why people aren't commenting much, and do something about that.

comment by Raemon · 2018-04-18T17:23:40.119Z · score: 12 (None votes) · LW · GW
I think one of the goals of LW2 is to encourage experimentation with different norms, but these experiments impose a cost on commenters (who have to learn the new norms both declaratively and procedurally) without giving a clear immediate benefit, which might reduce the net incentive to comment even further.

That is a good point, to at least keep in mind. I hadn't explicitly been weighing that cost. I do think I mostly endorse having more barriers to commenting (and fewer comments), but I may not be weighing things right.

Off the cuff thoughts:

Fractal Dunbar

Part of the reason I comment less now (or at least feel like I do? maybe should check the data) than I did 5 months ago is that the site is now large enough that it's not a practical goal to read everything and participate in every conversation without a) spending a lot of time, b) feeling lost/drowned out in the noise.

(In particular, I don't participate in SSC comments, despite SSC having way more people, because of the "drowned out in the noise" thing.)

So, one of the intended goals underlying the "multiple norms" thingy is to have a sort of fractal structure, where sections of the site tend to cap out around Dunbar-number of people that can actually know each other and expect each other to stick to high-quality-discussion norms.

Already discouraging comments that don't fit

I know at least some people are not participating in LW because they don't like the comment culture (for various reasons outlined in the Public Archipelago post). So the cost of "the norms are causing some people to bounce off" is already being paid, and the question is whether the cost is higher or lower under the overlapping-norm-islands paradigm.

comment by Qiaochu_Yuan · 2018-04-18T18:27:57.838Z · score: 12 (None votes) · LW · GW

I mostly stopped commenting and I think it's because 1) the AI safety discussion got higher cost to follow (more discussion happening faster with a lot of context) and 2) the non-AI safety discussion seems to have mostly gotten worse. There seem to be more newer commenters writing things that aren't very good (some of which are secretly Eugine or something?) and people seem to be arguing a lot instead of collaboratively trying to figure out what's true.

comment by Elo · 2018-04-18T21:03:08.178Z · score: 7 (None votes) · LW · GW

If the site is too big it could be divided into sections. That would effectively make it smaller.

I believe the content so far is a bit different. Worth being curious about what changed.

Yes, we have fewer comments per day on LW2.

comment by ESRogs · 2018-04-18T06:58:15.952Z · score: 7 (None votes) · LW · GW
we need to figure out why people aren't commenting much

My hypothesis would be that a) the ratio of post/day to visitors/day is higher on LW2 than it was on LW1, and so b) the comments are just spread more thin.

Would be curious whether the site stats bear that out.

comment by Said Achmiz (SaidAchmiz) · 2018-04-18T07:09:49.212Z · score: 12 (None votes) · LW · GW

See the graphs I posted on this month’s open thread for some relevant data.

comment by Raemon · 2018-04-18T17:29:56.580Z · score: 29 (None votes) · LW · GW

To save everyone else some time, here's the relevant graph, which shows that the number of comments has remained fairly constant for at least the past 4 months (while a different graph showed traffic rising, suggesting ESRogs's hypothesis is true):

Graph

comment by ESRogs · 2018-04-18T19:16:58.262Z · score: 13 (None votes) · LW · GW

This is great. Would love to see graphs going back further too, since Wei was asking about LW2 vs LW1, not just since earlier in the LW2 beta.

comment by Wei_Dai · 2018-05-17T00:46:56.043Z · score: 5 (None votes) · LW · GW

Is it just me, or are people not commenting nearly as much on LW2 as they used to on LW1?

One hypothesis I thought of recently for this is that there are now more local rationalist communities where people can meet their social needs, which reduces their motivations for joining online discussions.

comment by Raemon · 2018-05-12T03:01:38.215Z · score: 19 (None votes) · LW · GW

[cn: spiders I guess?]

I just built some widgets for the admins on LW, so that posts by newbies and reported comments automatically show up in a sidebar where moderators have to pay attention to them, approving or deleting them or sometimes taking more complicated actions.

And... woahman, it's like shining a flashlight into a cave that you knew was going to be kinda gross, but you weren't really prepared for a million spiders suddenly being illuminated. The underbelly of LW: posts and comments you don't even see anymore because we installed karma filters on the frontpage.

There's a webcomic called Goblins, where one goblin decided to become a paladin, and gains the ability to Detect Evil. And suddenly he is confronted with all the evil lurking about: in the shadows of people's hearts, in literal shadows, and sometimes in broad daylight. And he's describing this to a fellow goblin, and they're like "Holy hell, how can you live like that!? Why would you choose to _force_ yourself to see the evil around you?"

And Goblin A nods gravely and says "so that you don't have to."

comment by Elo · 2018-05-12T04:06:13.324Z · score: 11 (None votes) · LW · GW

You realise that I read every comment in the RSS feed, right?

comment by Raemon · 2018-03-24T08:02:38.272Z · score: 19 (None votes) · LW · GW

Recently watched Finding Dory. Rambly thoughts and thorough spoilers to follow.

I watched this because of a review by Ozy a long while ago, noting that the movie is about a character with a mental disability that has major effects on her. And at various key moments in the movie, she finds herself lost and alone, her mental handicap playing a major role in her predicament. And in other movies they might have given her some way to... willpower through her disability, or somehow gain a superpower that makes the disability irrelevant or something.

And instead, she has to think, and figure out what skills she does have she can use to resolve her predicaments. And that this was beautiful/poignant from the standpoint of getting to see representation of characters with disabilities getting to be protagonists in a very real way.

I think the movie generally lived up to that review (with some caveats, see below). But I also found myself looking at it through the recent "Elephant" and "Mythic" lens. This is "Self-Identify-As-An-Elephant" and "Live In Mythic Mode" The Movie.

Dory has a "rider", maybe, but the rider can't form longterm memories, which makes it much less obvious as the seat-of-identity.

She seems to have the ability to form system-1 impressions that gradually accumulate into familiarity, useful intuitions that help her find her way around, and the ability to form friends after prolonged exposure to them. (My understanding is that this is not a realistic depiction of humans with short term memory loss, but since the movie is about a talking fish I'm willing to cut it some slack).

Her intuition-powers strain credibility a bit. I'm also willing to cut the movie some slack here from the standpoint of "in most Everett branches, Dory dies very young, and the interesting story worth telling was about the Dory who had just enough natural skill and luck to skate by early on, and then develop S1 associations useful enough to continue surviving."

(Aside: this movie has loads of places where, Jesus Christ, everyone should have just died, and for some reason this was the most stressful cinematic experience I've had in living memory.)

The thing I found most interesting about the movie is the scene where she's lost and alone and sad, and has to figure out what to do, and starts sharing her thought process out loud, making it legible to both herself and the audience for the first time.

comment by Raemon · 2018-04-26T06:15:58.515Z · score: 18 (None votes) · LW · GW

We've been getting increasing amounts of spam, and occasionally dealing with Eugins. We have tools to delete them fairly easily, but sometimes they show up in large quantities and it's a bit annoying.

One possible solution is for everyone's first comment to need to be approved. A first stab at the implementation for this would be:

1) you post your comment as normal

2) it comes with a short tag saying "Thanks for joining less wrong! Since we get a fair bit of spam, first comments need to be approved by a moderator, which normally takes [N hours, whatever N turns out to be]. Sorry about that, we'll be with you soon!"

3) Comments awaiting approval show up on moderators' screens at the top of the page or something, with one-click approval, so that they're very unlikely to be missed. I think this could get the wait time down pretty low even with a smallish number of moderators.

The main downside here is that people's first commenting experience wouldn't be as good. My intent with step #2 was to smooth it over as much as possible. (i.e. if it just said "comment awaiting approval", I think it'd be much worse)

I'm curious a) how bad people think this experience would be, and b) any other issues that seem relevant?

comment by Elo · 2018-04-28T22:54:30.707Z · score: 18 (None votes) · LW · GW

If a comment is among a user's first 10 and includes a link, hold it for moderation.

Also make a safe list and anyone on the safe list is fine to post.

comment by Raemon · 2018-04-29T04:21:35.094Z · score: 14 (None votes) · LW · GW

Hmm. Doing it only for links would def solve for spammers, which I think hits roughly 60% of the problem and is pretty good. Doesn't solve for Eugins. Not sure how to weigh that.

(Still interested in a literal answer to my question "how bad is it to have your first post need to be approved?" which I don't have much of an intuition for)

comment by Elo · 2018-04-29T05:04:20.640Z · score: 7 (None votes) · LW · GW

The other option is to hold comments from new accounts (or accounts with low posts) with certain keywords - for moderation.

I.e. "plumber", a phone number etc.

I think if you specify "you have fewer than 10 comments and you posted a link", to let people know why their comment is being held for "a day" or so, it's not a big deal.

If it was not explained then it would be more frustrating.

If you capture all comments while an account is suspected spam, that would be okay.
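Elo's rules here could combine into a single hold-or-pass predicate. A rough sketch, with illustrative thresholds, keywords, and regexes (none of these are the site's actual values):

```python
import re

SPAM_KEYWORDS = {"plumber", "viagra"}  # illustrative keyword list only
LINK_RE = re.compile(r"https?://", re.IGNORECASE)
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3,4}[-.\s]?\d{4}\b")

def hold_for_moderation(comment_text, author_comment_count, safelisted):
    """Hold a comment for review if the author is new and it looks spammy."""
    if safelisted:
        return False  # safe-listed users always post freely
    if author_comment_count >= 10:
        return False  # established users are past the probation window
    if LINK_RE.search(comment_text):
        return True  # new account posting a link
    words = set(comment_text.lower().split())
    if words & SPAM_KEYWORDS or PHONE_RE.search(comment_text):
        return True  # suspicious keyword or phone number
    return False
```

In practice the thresholds and keyword list would need tuning, and the held comment should come with an explanation of why it was held.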

comment by clone of saturn · 2018-04-26T07:55:02.622Z · score: 11 (None votes) · LW · GW

As long as LW isn't high-profile enough to attract custom-written spambots, a possible easier alternative would be to combine a simple test to deter human spammers with an open proxy blacklist like SORBS. This strategy was very effective on a small forum I used to run.
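For reference, DNSBLs like SORBS work by reversing an IP's octets and querying the result as a hostname under the list's zone; a listed IP resolves, an unlisted one gets NXDOMAIN. A rough sketch (the zone name comes from SORBS's public service and should be treated as an assumption):

```python
import socket

def dnsbl_query_name(ip, zone="dnsbl.sorbs.net"):
    """Build the reversed-octet hostname that a DNSBL lookup queries."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone="dnsbl.sorbs.net"):
    """True if the IP resolves in the blacklist zone (needs network access)."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

The check could run once at comment-submission time, so only traffic from listed proxies ever hits the human moderation queue.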

comment by Raemon · 2018-04-29T04:25:14.755Z · score: 8 (None votes) · LW · GW

Using a list like SORBS sounds good. I actually think the test might be more annoying than waiting to get your post approved. (or, maybe less annoying, but causing more of a trivial inconvenience)

comment by Elo · 2018-04-28T22:55:56.444Z · score: 7 (None votes) · LW · GW

Also some of them are businesses. Like plumbers. You could call them up and tell them that they are paying spammers to post in irrelevant places and they should ask for their money back.

comment by Raemon · 2018-02-04T22:51:22.588Z · score: 17 (None votes) · LW · GW

Looking at how Facebook automatically shows particular subcomments in a thread that have a lot of likes/reacts.

And then looking at how LW threads often become huge and unwieldy when there's 100 comments.

At first I was annoyed by that FB mechanic, but it may in fact be a necessary thing for sufficiently large threads, to make it easy to find the good parts.

comment by Raemon · 2018-01-14T22:24:35.733Z · score: 16 (None votes) · LW · GW

Social failure I notice in myself: there'll be people at a party I don't know very well. My default assumption is "talk to them with 'feeler-outer-questions' to figure out what they are interested in talking about". (i.e. "what do you do?"/"what's your thing?"/"what have you been thinking about lately?"/"what's something you value about as much as your right pinky?"/"What excites you?").

But this usually produces awkward, stilted conversation. (of the above, I think "what have you been thinking about lately?" produces the best outcomes most of the time)

Recently, I was having that experience, and ended up talking to a nearby person I knew better about a shared interest (videogames in this case). And then the people nearby who I didn't know as well were able to join in the conversation and it felt much more natural.

Part of the problem is that if there is no person-I-know nearby, I have to take a random guess at a thing to talk about that the person is interested in talking about.

In this case, I had various social cues that suggested video games would be a plausible discussion prompt, but not enough context to guess which sorts of games were interesting, and not enough shared background knowledge to launch into a discussion of a game I thought was interesting without worrying a bunch about "is this too much / wrong sort of conversation for them."

Not sure what lesson to learn, but seemed noteworthy.

comment by Qiaochu_Yuan · 2018-01-18T22:19:00.031Z · score: 19 (None votes) · LW · GW

I really dislike the pinky question for strangers (I think it's fine for people you know, but not ideal). It's an awkward, stilted question and it's not surprising that it produces awkward, stilted responses. Aimed at a stranger it is very clearly "I am trying to start a reasonably interesting conversation" in a way that is not at all targeted to the stranger; that is, it doesn't require you to have seen and understood the stranger at all to say it, which they correctly perceive as alienating.

It works on a very specific kind of person, which is the kind of person who gets so nerdsniped wondering about the question that they ignore the social dynamic, which is sometimes what you want to filter for but presumably not always.

comment by Raemon · 2018-01-18T23:11:45.622Z · score: 12 (None votes) · LW · GW

A noteworthy thing from the FB version of this thread was that people radically varied in which question seemed awkward to them. (My FB friends list is sharply distorted by 'the sort of friends Ray is likely to have', so I'm not sure how much conclusion can be drawn from this, but at the very least it seemed that typical minding abounds all around re: this class of question)

comment by Qiaochu_Yuan · 2018-01-18T23:49:12.958Z · score: 6 (None votes) · LW · GW

Sure, I think all of these questions would be awkward addressed to various kinds of strangers, which is part of my point: it's important to do actual work to figure out what kind of question a person would like to be asked, if any.

comment by Raemon · 2018-01-19T01:02:15.650Z · score: 16 (None votes) · LW · GW

So a reframing of this question is "what do you say/do/act to gain information about what a person would like to be asked without resorting to one of these sorts of questions?"

(With a side-note of "the hard mode for all of this is when you actually do kinda know the person, or have seen them around, so it is in fact 'legitimately awkward' that you haven't managed to get to know them well enough to know what sorts of conversations to have with them.")

comment by gjm · 2018-01-19T00:12:17.019Z · score: 6 (None votes) · LW · GW

I have no idea how (a)typical this is, but I find it difficult to give quick answers for "global summary" type questions. What's the best book you've ever read? What do you spend most of your time doing? What are your two most important values? Etc. Those "feeler-outer questions" have that sort of quality to them, and if the people at those parties are like me I'm not surprised if conversation is sometimes slow to get started.

comment by Raemon · 2019-01-20T22:09:41.975Z · score: 14 (None votes) · LW · GW

My review of the CFAR venue:

There is a song that the LessWrong team listened to a while back, and then formed strong opinions about what was probably happening during the song, if the song had been featured in a movie.

(If you'd like to form your own unspoiled interpretation of the song, you may want to do that now)

...

So, it seemed to us that the song felt like... you (either a single person or small group of people) had been working on an intellectual project.

And people were willing to give the project the benefit of the doubt, a bit, but then you fucked it up in some way, and now nobody believed in you and you were questioning all of your underlying models and maybe also your sanity and worth as a human. (vaguely A Beautiful Mind like)

And then you retreat to your house where you're pretty alone, and it's raining outside. And the house is the sort of house a moderately wealthy but sometimes-alone intellectual might live in, with a fair bit of space inside, white walls, whiteboards, small intellectual toys scattered about. A nice carpet.

And you're pacing around the house re-evaluating everything, and it's raining and the rain dapples on the windows and light scatters in on your old whiteboard diagrams that no longer seem to quite make sense.

And then you notice a small mental click of "maybe, if I applied this idea in this slightly different way, that might be promising". And you clear off a big chunk of whiteboard and start to work again, and then over a several day montage you start to figure out a new version of your idea that somehow works better this time and you get into a flow state and then you're just in this big beautiful empty house in the rain, rebuilding your idea, and this time it's pretty good and maybe will be the key to everything.

So anyway the LW team listened to this song a year+ ago and we now periodically listen to it and refer to it as "Building LessWrong in the Rain."

And, last week we had the LW Team Retreat, which was located at the new(ish) CFAR venue, and... a) it was raining all week, b) we basically all agreed that the interior of the CFAR venue looked almost exactly like how we had all imagined it. (Except, I at least had been imagining it a bit more like a Frank Lloyd Wright house, so that from the outside it looked more rectangular instead of a more traditional house/big-cottage or whatever)

...

The house interior is quite well designed. Every room had a purpose, and I'd be mulling about a given room thinking "gee, I sure wish I had X", and then I'd rotate 30º and then X would be, like, within arm's reach.

Most rooms had some manner of delightful thing, whether that be cute magnet puzzles or a weird glowing flower that looked like if I touched it it'd disappear and then I'd start glowing and either be able to jump higher or spit fireballs (I did not touch it).

Small complaints include:

a) the vacuum was quite big and heavy, which resulted in me switching to using a broom when I was cleaning up,

b) the refrigerator was like 500x more dangerous than any other fridge I ever encountered. Normally the amount of blood a refrigerator draws when I touch it gently is zero. The bottom of this fridge cut me 3 times, twice on my toes, once on my thumb while I was trying to clean it.

c) the first aid kit was in a black toolbox with the red "+" facing away from the visible area which made it a bit more counterintuitive to discover than most of the other things in the house.

comment by Raemon · 2017-12-31T20:57:33.647Z · score: 13 (None votes) · LW · GW

Musings on ideal formatting of posts (prompted by argument with Ben Pace)

My thoughts:

1) Working memory is important.

If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.

2) Less Wrong is for thinking

This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them.

3) You can expand working memory with visual reference

Having larger monitors or notebooks to jot down thoughts makes it easier to think.

The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain).

But regardless of font-size:

4) Optimizing a post for re-skimmability makes it easier to refer to.

This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Sunset at Noon for an example)

Ben's Counter:

Ben Pace noticed this while reviewing an upcoming post I was working on, and his feeling was "all this bold is making me skim the post instead of reading it."

To which all I have to say is "hmm. Yeah, that seems likely."

I am currently unsure of the relative tradeoffs.

comment by Zvi · 2018-01-01T00:59:52.948Z · score: 12 (None votes) · LW · GW

I pushed Oliver for smaller font size when I first saw the LW 2.0 design (I'd prefer something like the comments font), partly for the words-in-mind reason. I agree that bigger words work against complex and deep thinking, and also think that any time you force someone to scroll, you risk disruption (when you have kids you're trying to deal with, being forced to interact with the screen can be a remarkably large negative).

I avoid bold and use italics instead because of the skimming effect. I feel like other words are made to seem less important when things are bolded. Using it not at all is likely a mistake, but I would use it sparingly, and definitely not use it as much as in the comment above.

I do think that using variable font size for section headings and other similar things is almost purely good, and give full permission for admins to edit such things in if I'm being too lazy to do it myself.

comment by habryka (habryka4) · 2018-01-01T01:53:16.262Z · score: 12 (None votes) · LW · GW

The current plan is to allow the authors to choose between a smaller sans-serif that is optimized for skimmability, and a larger serif that is optimized for getting users into a flow of reading. Not confident about that yet though. I am hesitant about having too much variance in font-sizes on the page, and so don't really want to give authors the option to choose their own font-size from a variety of options, but having a conceptual distinction between "wiki-posts" that are optimized for skimmability and "essay-posts" that are optimized for reading things in a flow state seems good to me.

Also not sure about the UI for this yet, input is welcome. I want to keep the post-editor UI as simple as possible.

comment by Raemon · 2019-02-06T00:02:09.621Z · score: 2 (None votes) · LW · GW

FYI it's been a year and I still think this is pretty important

comment by Raemon · 2018-01-01T01:25:29.871Z · score: 7 (None votes) · LW · GW

Hmm. Here's the above post with italics instead, for comparison:

...

Musings on ideal formatting of posts (prompted by argument with Ben Pace)

My thoughts:

1) Working memory is important.

If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.

2) Less Wrong is for thinking

This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them.

3) You can expand working memory with visual reference

Having larger monitors or notebooks to jot down thoughts makes it easier to think.

The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain).

But regardless of font-size:

4) Optimizing a post for re-skimmability makes it easier to refer to.

This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Sunset at Noon for an example)

comment by Raemon · 2018-01-01T01:30:29.465Z · score: 12 (None votes) · LW · GW

I think it works reasonably for the bulleted-number-titles. I don't personally find it working as well for interior-paragraph things.

Using the bold makes the document function essentially as its own outline, whereas italics feels insufficient for that - when I'm actually in skimming/hold-in-working-memory mode, I really want something optimized for that.

The solution might just be to provide actual outlines after-the-fact.

Part of what I liked with my use of bold and headers was that it'd be fairly easy to build a tool that auto-constructs an outline.
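A minimal sketch of such an auto-outliner, assuming posts are stored as Markdown (an assumption about the site's internals; real posts might need HTML parsing instead):

```python
import re

def build_outline(markdown_text):
    """Collect headers and fully-bolded lines as (depth, text) outline entries."""
    outline = []
    for line in markdown_text.splitlines():
        stripped = line.strip()
        header = re.match(r"^(#+)\s+(.*)", stripped)
        if header:
            outline.append((len(header.group(1)), header.group(2)))
        elif re.fullmatch(r"\*\*(.+)\*\*", stripped):
            # a line that is entirely bold acts as an implicit subheading
            outline.append((2, stripped.strip("*")))
    return outline

doc = "# Title\nSome text.\n**Key point one**\nmore text\n## Section\n**Key point two**"
```

Running `build_outline(doc)` yields the document's headers and bolded key points in order, which could then be rendered as a collapsible outline next to the post.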

comment by gjm · 2018-01-19T00:09:37.077Z · score: 10 (None votes) · LW · GW

For what it's worth, my feeling is pretty much the opposite. I'm happy with boldface (and hence feel no need to switch to italics) for structural signposts like headings, but boldface is too prominent, relative to ordinary text, to use for emphasis mid-paragraph unless we actively want readers to read only the boldface text and ignore everything else.

I would probably not feel this way if the boldface text were less outrageously heavy relative to the body text. (At least for me, in the browser I'm using now, on the monitor I'm using now, where the contrast is really extreme.)

comment by Said Achmiz (SaidAchmiz) · 2018-01-19T01:39:40.527Z · score: 21 (None votes) · LW · GW

Some comparisons and analysis:

(1) Using bold for emphasis

[image]

When the font size is small, and the ‘bold’ text has a much heavier weight than the regular text (left-hand version), the eye is drawn to the bold text. This is both because (a) reading the regular text is effortful (due to the small size) and the bold stands out and thus requires greatly reduced effort, and (b) because of the great contrast between the two weights.

But when the font size is larger, and the ‘bold’ text is not so much heavier in weight than the regular text (right-hand version), then the eye does not slide off the regular text, though the emphasized lines retain emphasis. This means that emphasis via bolding does not seriously impact whether a reader will read the full text.

(2) Using italics for emphasis

[image]

Not much to say here, except that how different the italic variant of a font is from the roman variant is critical to how well italicizing works for the purpose of emphasis. It tends to be the case that sans-serif fonts (such as Freight Sans Pro, the font currently used for comments and UI elements on LW) have less distinctive italic variants than serif fonts (such as Charter, the font used in the right-hand part of the image above)—though there are some sans-serif fonts which are exceptions.

(3) Skimmability

[image]

Appropriate typography is one way to increase a post’s navigability/skimmability. A table of contents (perhaps an auto-generated one—see image) is another. (Note that the example post in this image has its own table of contents at the beginning, provided by Raemon, though few other posts do.)

(4) Bold vs. italic for emphasis

[image]

This is a perfect case study of points (1) and (2) above. Warnock Pro (the font you see in the left-hand part of the image above) has a very distinctive italic variant; it’s hard to miss, and works very well for emphasis. Charter (the font you see in the right-hand part of the image) has a somewhat less distinctive italic variant (though still more distinctive than the italic variants of most sans-serif fonts).

Meanwhile, the weight of Warnock Pro used for ‘bold’ text on the left is fairly heavy compared to the regular text weight. That makes the bolding work very well for emphasis, but can also generate the “people only read the bold text” effect. On the other hand, the bold weight of Charter is distinctive, but not distractingly so.

Finally, as in point (1), the larger the font size, the less distracting bold type is.

comment by Said Achmiz (SaidAchmiz) · 2018-01-19T06:07:37.996Z · score: 13 (None votes) · LW · GW

Here, for reference, is a brief list of reasonably readable sans-serif fonts with not-too-heavy boldface and a fairly distinctive italic variant (so as to be suitable for use as a comments text font, in accordance with the desiderata suggested in my previous comment):

(Fonts marked with an asterisk are those I personally am partial to.)

Edit: Added links to screenshots.

comment by Raemon · 2018-01-20T23:50:39.941Z · score: 8 (None votes) · LW · GW

One thing that's worth noting here is there's an actual difference of preference between me and (apparently a few, perhaps most) others.

When I use bold, I'm specifically optimizing for skimmability because I think it's important to reference a lot of concepts at once, and I'm not that worried about people reading every word. (I take on the responsibility of making sure that the parts that are most important not to miss are bolded, and the non-bold stuff is providing clarity and details for people who want them)

So, for my purposes I actually prefer bold that stands out well enough that my eyes easily can see it at a glance.

comment by Raemon · 2018-05-14T16:25:57.100Z · score: 12 (None votes) · LW · GW

A couple links that I wanted to refer to easily:

This post on Overcoming Bias – a really old Less Wrong progress report – is sort of a neat vantage point for seeing what's changed and what's stayed the same.

This particular quote from the comments was helpful orientation to me:

The general rule in groups with reasonably intelligent discussion and community moderation, once a community consensus is reached on a topic, is that
1. Agreement with consensus, well articulated, will be voted up strongly
2. Disagreement with consensus, well articulated, will be voted up and start a lengthy discussion
3. Agreement with consensus, expressed poorly, will be voted up weakly or ignored
4. Disagreement with consensus, expressed poorly, will be voted down viciously
People who complain about groupthink are typically in the habit of doing #4 and then getting upset because they don't get easy validation of their opinions the way people who agree inarticulately do.
As an example on LW, consider Annoyance, who does both #2 and #4 with some regularity and gets wildly varying comment scores because of it.

I was also reading through this old post of gwern's on Wikipedia, which feels like it has some relevance for LessWrong.

comment by Raemon · 2018-05-14T17:08:11.173Z · score: 9 (None votes) · LW · GW

Apparently I'm on a gwern kick now.

His about page has a lot of interesting perspective on the Long Now, and designing Long Content that will remain valuable into the future.

Blog posts might be the answer. But I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again, and are quickly abandoned - and perhaps as Assange says, not a moment too soon. (But isn’t that sad? Isn’t it a terrible ROI for one’s time?) On the other hand, the best blogs always seem to be building something: they are rough drafts - works in progress. So I did not wish to write a blog. Then what? More than just evergreen content, what would constitute Long Content as opposed to the existing culture of Short Content? How does one live in a Long Now sort of way?
My answer is that one uses such a framework to work on projects that are too big to work on normally or too tedious. (Conscientiousness is often lacking online or in volunteer communities and many useful things go undone.) Knowing your site will survive for decades to come gives you the mental wherewithal to tackle long-term tasks like gathering information for years, and such persistence can be useful - if one holds onto every glimmer of genius for years, then even the dullest person may look a bit like a genius himself. (Even experienced professionals can only write at their peak for a few hours a day.) Half the challenge of fighting procrastination is the pain of starting - I find when I actually get into the swing of working on even dull tasks, it’s not so bad.
So this suggests a solution: never start.
Merely have perpetual drafts, which one tweaks from time to time. And the rest takes care of itself.

I think this might be a helpful approach for LW, especially as it crosses the 10-year mark – it's now old enough that some of its content is showing its age.

This ties in with some of my thoughts in Musings on Peer Review [LW · GW], and in particular the notion that it feels "wrong" to update a blogpost after people have commented on it.

I find myself liking the idea of "creating a perpetual draft" rather than a finished product.

comment by Elo · 2018-05-14T21:15:19.090Z · score: 11 (None votes) · LW · GW

We need to encourage edit culture. Maybe bringing old posts to the top of the post list when edited. Or an optional checkbox to do so. Maybe we need a second feed for renewed content.

I will think about the tools needed to help edit culture develop.

comment by Hazard · 2018-07-12T13:30:26.583Z · score: 2 (None votes) · LW · GW

Has any more talk/development happened on this? I'm quite interested to know what you come up with. It's easy for me to imagine what it would be like to write in a wiki/perpetual draft style, I'm much fuzzier on what it might look like to read in that style.

comment by Elo · 2018-07-12T20:48:45.674Z · score: 2 (None votes) · LW · GW

No updates. Gwern writes perpetually in drafts.

comment by Said Achmiz (SaidAchmiz) · 2018-05-14T19:00:33.232Z · score: 9 (None votes) · LW · GW

I agree entirely with this, and (again) would like to suggest that a wiki is, perhaps, the perfect tool for precisely this sort of approach.

comment by Hazard · 2018-07-12T13:29:39.054Z · score: 2 (None votes) · LW · GW

Though I haven't acted on it, I do like the idea of the perpetual draft more than a bunch of discrete posts. I will try to write more in this manner.

comment by Raemon · 2018-05-04T00:20:16.584Z · score: 12 (None votes) · LW · GW

Jargon Quest:

There's a kind of extensive double crux that I want a name for. It was inspired by Sarah's Naming the Nameless [LW · GW] post, where she mentions Double Cruxxing on aesthetics. You might call it "aesthetic double crux" but I think that might lead to miscommunication.

The idea is to resolve deep disagreements that underlie your entire framing (of the sort Duncan touches on in this post on Punch Buggy. That post is also a reasonable stab at an essay-form version of the thing I'm talking about).

There are a few things that are relevant here, not quite the same thing but clustered together:

  • what counts as evidence?
  • what counts as good?
  • what counts as beautiful?

Each of them suggests a different name (epistemic double crux, values double crux, aesthetic double crux). Maybe a good common name is "Deep Double Crux" or "Framing Double Crux".

The main point is that when you hunker down for a deep double crux, you're expecting to spend a long while, and to try to tease some real subtle shit.

I liked the phrase Aesthetic Double Crux, suggested in the Naming the Nameless post, since it pointed at entire ways of thinking that had multiple facets, but seemed to orient most around what felt elegant and right. But the people who followed up on that focused most on the art interpretation, so it seemed ripe for misinterpretation.

(In the course of writing this I think I basically decided I liked Deep Double Crux best, but decided to leave the post up as a demonstration of thought process.)

comment by Hazard · 2018-07-06T02:18:16.247Z · score: 4 (None votes) · LW · GW
The main point is that when you hunker down for a deep double crux, you're expecting to spend a long while, and to try to tease some real subtle shit.

Yes! I feel like a lot of the time, the expectation of putting in such sustained attention is not there. Not to say that you should always be ready to hunker down at the drop of a hat. It seems like the default norm is closer to "giving up if it gets too hard."

comment by Raemon · 2018-01-09T01:31:05.312Z · score: 12 (None votes) · LW · GW

Some Meta Thoughts on Ziz's Schelling Sequence, and "what kind of writing do I want to see on LW?" [note: if it were possible, I'd like to file this under "exploring my own preferences and curious about others' take" rather than "attempting to move the overton window". Such a thing is probably not actually possible though]

I have a fairly consistent reaction to Ziz posts (as well as Michael Vassar posts, and some Brent Dill posts, among others) which is "this sure is interesting but it involves a lot of effort to read and interpret."

I think this is fine. I think a lot of interesting thoughts come out of frameworks that are deliberately living in weird, pseudo-metaphorical-but-not-quite worlds. I think being able to interpret and think about that is a useful skill (in general, and in particular for stepping out of social reality).

I think I have a preference for such posts to live in the community section, rather than front-page, but in my ideal world they'd go through a process of "explore things creatively in comments or community section", followed by "think more critically about what kind of jargon and opaqueness is actually useful and which was just an artifact of low-friction thinking", followed by "turn it into something optimized for public consumption"

comment by Raemon · 2018-01-09T02:28:36.975Z · score: 11 (None votes) · LW · GW

Kinda weird meta note: I find myself judging both my posts, and other people's, via how many comments they get. i.e. how much are people engaged. (Not aiming to maximize comments but for some "reasonable number").

However, on a post of mine, my own comments clearly don't count. And on another person's post, if there's a lot of comments but most of them are the original author's, it feels like some kind of red flag. Like they think their post is more important than other people do. (I'm not sure if I endorse this perception).

So, I have a weird sense of wanting to see a "comment count minus author's comments", for slightly different reasons. I don't think this is actually a good feature to have, but the fact that I want it feels like weird evidence of something.
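The metric described above is simple to state precisely. Here's a minimal sketch in Python — the `comments` data shape and field names are hypothetical, not LW's actual data model:

```python
def engagement_count(comments, post_author):
    """Count comments on a post, excluding those left by the post's own author.

    `comments` is assumed to be a list of dicts with an "author" field;
    this is an illustrative sketch, not the real LW schema.
    """
    return sum(1 for c in comments if c["author"] != post_author)

# Hypothetical example: four comments, two by the post's author.
comments = [
    {"author": "Raemon", "text": "expanding on my own idea"},
    {"author": "SaidAchmiz", "text": "a counterpoint"},
    {"author": "Raemon", "text": "a reply"},
    {"author": "Elo", "text": "another thought"},
]
print(engagement_count(comments, "Raemon"))  # → 2
```

The raw comment count here would be 4; the author-excluded count of 2 is the "how much discussion did this generate beyond the author" signal the comment is gesturing at.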

comment by Said Achmiz (SaidAchmiz) · 2018-05-14T18:56:06.323Z · score: 5 (None votes) · LW · GW

However, on a post of mine, my own comments clearly don’t count. And on another person’s post, if there’s a lot of comments but most of them are the original authors, it feels like some kind of red flag. Like they think their post is more important than other people do. (I’m not sure if I endorse this perception).

There is definitely value to this heuristic, but note that, e.g., I have commented on my own posts with nitpicky counterpoints to my own claims, or elaborations/digressions that are related but don’t really fit into the structure/flow of the post, or updates, etc. It seems like we shouldn’t discourage such things—do you agree?

comment by Raemon · 2018-05-15T00:45:06.667Z · score: 6 (None votes) · LW · GW

So, this isn't an idea I still really endorse (partly because it doesn't seem worth the complexity cost, partly because I just don't think it was that important in the scheme of things), but I said this as someone who _also_ often makes additional comments on my post to expand ideas. And the point wasn't to discourage that at all – just to also showcase which posts are generating discussion _beyond_ the author fleshing out their own ideas.

comment by Raemon · 2018-01-09T01:31:49.558Z · score: 11 (None votes) · LW · GW

(Empirically, I post my meta thoughts here instead of in Meta. I think this might actually be fine, but am not sure)

comment by Raemon · 2018-07-16T03:44:13.009Z · score: 8 (None votes) · LW · GW

I notice that I'm increasingly confused that Against Malaria Foundation isn't just completely funded.

It made sense a few years ago. By now – things like Gates Foundation seem like they should be aware of it, and that it should do well on their metrics.

It makes (reasonable-ish) sense for Good Ventures not to fully fund it themselves. It makes sense for EA folk to either not have enough money to fully fund it, or to end up valuing things more complicated than AMF. But it seems like there should be enough rich people and governments for whom "end malaria" is a priority that the $100 million or so it would take should just have been covered by now.

What's up with that?

comment by VipulNaik · 2018-07-16T05:11:18.628Z · score: 21 (None votes) · LW · GW

My understanding is that Against Malaria Foundation is a relatively small player in the space of ending malaria, and it's not clear the funders who wish to make a significant dent in malaria would choose to donate to AMF.

One of the reasons GiveWell chose AMF is that there's a clear marginal value of small donation amounts in AMF's operational model -- with a few extra million dollars they can finance bednet distribution in another region. It's not necessarily that AMF itself is the most effective charity to donate to, to end malaria -- it's just the one with the best proven cost-effectiveness for donors at the scale of a few million dollars. But it isn't necessarily the best opportunity for somebody with much larger amounts of money who wants to end malaria.

For comparison:

The main difference I can make out between the EA/GiveWell-sphere and the general global health community is that malaria interventions (specifically ITNs) get much more importance in the EA/GiveWell-sphere, whereas in the general global health spending space, AIDS gets more importance. I've written about this before: http://effective-altruism.com/ea/1f9/the_aidsmalaria_puzzle_bleg/

comment by VipulNaik · 2018-07-29T20:13:53.440Z · score: 3 (None votes) · LW · GW

There is some related stuff by Carl Shulman here: https://www.greaterwrong.com/posts/QSHwKqyY4GAXKi9tX/a-personal-history-of-involvement-with-effective-altruism#comment-h9YpvcjaLxpr4hd22 that largely agrees with what I said.

comment by Raemon · 2018-07-16T05:59:18.718Z · score: 2 (None votes) · LW · GW

If Gates Foundation is actually funding constrained I guess that explains most of my confusion, although it still seems a bit weird not to "top it off" since it seems within spitting distance.

comment by Vaniver · 2018-07-16T17:36:36.734Z · score: 18 (None votes) · LW · GW

Check out Gates's April 2018 speech on the subject. Main takeaway: bednets started becoming less effective in 2016, and they're looking at different solutions, including gene drives to wipe out mosquitoes, which is a solution unlikely to require as much maintenance as bed nets.

comment by Raemon · 2018-07-16T03:46:44.465Z · score: 3 (None votes) · LW · GW

Like, I'm actually quite worried that we haven't hit the point where EA folk are weirdly bottlenecked on not having an obviously defensible charity to donate to as a gateway drug.

comment by Raemon · 2018-05-13T22:25:12.966Z · score: 6 (None votes) · LW · GW

In Varieties of Argument [LW · GW], Scott Alexander notes:

Sometimes meta-debate can be good, productive, or necessary.... If you want to maintain discussion norms, sometimes you do have to have discussions about who’s violating them. I even think it can sometimes be helpful to argue about which side is the underdog.
But it’s not the debate, and also it’s much more fun than the debate. It’s an inherently social question, the sort of who’s-high-status and who’s-defecting-against-group-norms questions that we like a little too much. If people have to choose between this and some sort of boring scientific question about when fetuses gain brain function, they’ll choose this every time; given the chance, meta-debate will crowd out everything else.

This is a major thing we're trying to address with LW2. But I notice a bit of a sense-of-doom about it, and just had some thoughts.

I was reading the Effective Altruism forum today, and saw a series of posts on the cost effectiveness of vaccines. It looked like decent original research, and in many senses it seems more important than most of the other stuff getting discussed (on either the EA forum or on LW). Outputting research like that seems like one of the core things EA should actually be trying to do. (More specifically – translating that sort of knowledge into impact.)

But, it's way less fun to talk about – you need to actually be in a position to either offer worthwhile critiques of the information there, or to make use of the information.

(Did I read it myself? No. Lol)

And you can maybe try to fix this by making that sort of research high status – putting it in the curated section, giving out bonus karma, maybe even cash prizes. But I think it'll continue to *feel* less rewarding than something that results in actual comments.

My current thought is that the thing that's missing here is a part of the pipeline that clearly connects research to people who are actually going to do something with it. I'm not sure what to do with that.

comment by Said Achmiz (SaidAchmiz) · 2018-05-14T18:53:27.066Z · score: 11 (None votes) · LW · GW

And you can maybe try to fix this by making that sort of research high status – putting it in the curated section, giving out bonus karma, maybe even cash prizes. But I think it’ll continue to feel less rewarding than something that results in actual comments.

Figure out what sorts of user behavior you wish to incentivize (reading posts people wouldn’t otherwise read? commenting usefully on those posts? making useful posts?), what sorts you wish to limit (posting, in general? snarky comments?), and apply EP/GP.

comment by Raemon · 2019-02-06T00:09:00.233Z · score: 4 (None votes) · LW · GW

Something I've recently updated heavily on is "Discord/Slack style 'reactions' are super important."

Much moreso than Facebook style reacts, actually.

Discord/Slack style reacts allow you to pack a lot of information into a short space. When coordinating with people "I agree/I disagree/I am 'meh'" are quite important things to be able to convey quickly. A full comment or email saying that takes up way too much brain space.

I'm less confident about whether this is good for LW. A lot of the current LW moderation direction is downstream of a belief: "it's harder to have good epistemics at the same time you're doing social coordination, especially for contentious issues." We want to make sure we're doing a good job at being a place for ideas to get discussed, and we've consciously traded for that against LW being a place you can socially coordinate.

I think discord-style reacts might still be relevant for seeing at a glance how people think about ideas. There are at least some classes of reacts like "this seems confused" or "this was especially clear" that *if* you were able to segregate them from social/politics, they'd be quite valuable. But I'm not sure if you can.

comment by romeostevensit · 2019-02-06T00:38:56.184Z · score: 7 (None votes) · LW · GW

I agree that slack is a better interaction modality for multiple people trying to make progress on problems. The main drawback is chaotic channel ontologies leading to too many buckets to check for users (though many obv. find this aspect addictive as well).

comment by Raemon · 2019-02-06T00:48:48.239Z · score: 2 (None votes) · LW · GW

How much of this has to do with "slack sort of deliberately gives you a bunch of lego blocks and lets you build whatever you want out of them, so of course people build differently shaped things out of them?".

I could imagine a middle ground where there's a bit more streamlining of possible interaction ontologies.

(If you meant channels specifically, it's also worth noting that right now I'm thinking about "reactions" specifically. Channels I think are particularly bad, wherein people try to create conversations with names that made sense at the time, but then turned into infinite buckets. Reacts seem to have much less confusion, and when they do, it's because a given org/server needed to establish a convention, and when you visit another org they're using a different convention)

comment by romeostevensit · 2019-02-07T04:38:04.543Z · score: 2 (None votes) · LW · GW

This would likely be solved if Slack had a robust 3-level ontology rather than a two-level one. Threaded conversations don't work very well.

comment by Raemon · 2019-01-21T05:04:19.790Z · score: 4 (None votes) · LW · GW

Beeminder, except instead of paying money if you fail, you pay the money when you create your account, and if you fail at your thingy, you can never use the app again.

comment by Elo · 2019-01-21T05:33:09.771Z · score: 2 (None votes) · LW · GW

That's Beeminder, except BM comes with one freebie.

comment by Raemon · 2019-01-21T05:48:38.627Z · score: 2 (None votes) · LW · GW

I mean, at the very least, it's "Beeminder, except with a different pricing curve, and also every time you fail at anything you need to create a new email address, and recreate all your goals."
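The proposed mechanism is concrete enough to sketch: stake paid once at signup, and a single failure permanently locks the account. This is a toy illustration of the idea, with invented names throughout — not any real Beeminder API:

```python
class PrepaidCommitment:
    """Sketch of the proposed variant: pay the full stake up front;
    any single failure permanently locks the account."""

    def __init__(self, stake):
        self.stake = stake      # paid at account creation, never refunded
        self.locked = False

    def report(self, goal_met):
        """Report whether today's goal was met. Returns True while the
        account is still usable, False once a failure locks it."""
        if self.locked:
            raise RuntimeError("account is locked; start over with a new one")
        if not goal_met:
            self.locked = True  # one failure ends the account forever
        return not self.locked

acct = PrepaidCommitment(stake=50)
print(acct.report(True))   # → True  (goal met, account still live)
print(acct.report(False))  # → False (failed: locked for good)
```

The difference from actual Beeminder is visible in the control flow: the payment happens in `__init__` rather than on failure, and failure flips a one-way `locked` flag instead of charging a pledge.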

comment by Raemon · 2018-06-30T21:22:14.208Z · score: 3 (None votes) · LW · GW

I have a song gestating, about the "Dream Time" concept (in the Robin Hanson sense).

In the aboriginal mythology, the dreamtime is the time-before-time, when heroes walked the earth, doing great deeds with supernatural powers that allowed them to shape the world.

In the Robin Hanson sense, the dreamtime is... well, still that, but *from the perspective* of the far future.

For most of history, people lived on subsistence. They didn't have much ability to think very far ahead, or to deliberately steer their future much. We live right now in a time of abundance, where our capacity to produce significantly outstrips our drive to reproduce, and this gives us (among other things) time and slack to think and plan and do things other than what is the bare minimum for survival.

The song I have in mind is in the early stages before a few pieces click together. (Songwriting is a form of puzzle-solving, for those that don't know)

Constraints of the puzzle so far:

1. I want it to be more of a summer solstice song than winter solstice one, of the sort that you can easily sing while gathered around a campfire, _without_ having lyrics available.

2. Due to the above (and because of which non-lyric-requiring songs I *already* have written), the verses have each line in two parts. The (A) part of each line is new each time; the (B) part is consistent, such that even if you're hearing the song for the first time you can sing along with at least part of the verses (in addition to the chorus).

(#1 and #2 are the core requirements, and if I ended up having to sacrifice the dreamtime concept for the song, I would do so.)

3. Summer Solstice is focused on the present moment (contrasted with winter solstice, which is very distant-past and far-future oriented). The dreamtime concept came to me as something that could be framed from within the far-future perspective, while still having the bulk of the song focusing on the present moment.

4. Aesthetically, my current thought is for the song to be kind of a mirror-image of Bitter Wind Blown:

– the singer is a child, asking her mother to tell stories of the Before Time
– Structurally, fairly similar to Bitter Wind Blown, except the "Little one, little one" equivalent is a bit more complex
– where Bitter Wind Blown is, well, bittersweet, this one dwells more on the positive, and when looking at the negative, does so through a lens of acceptance (not in the sense of "this is okay," but "this is what is, and was.")

However:

As I reflect on what the platonic ideal of the song wants to be, I'm noticing a bit of tension between a few directions. Here we get to the "how do you slide the pieces around and solve the puzzle?" bit (this is at the higher level, before you start _also_ sliding around individual lyrics)

a. The theme of presentness, being mindful of the here and now

b. The subtheme of abundance – right now is the dreamtime because our capacity for production gives us the affordance to thrive, and to think

c. The subtheme of power/heroism – the dreamtime is when heroes walked the earth and shaped the world that will one day become "the normal world."

(a) feels a bit in tension with (b) and (c). I think it's possible to blend them but not sure it'll quite work out.

That's what I got so far. Interested in thoughts.

comment by Raemon · 2019-02-06T00:01:14.968Z · score: 2 (None votes) · LW · GW

I frequently feel a desire to do "medium" upvotes. Specifically, I want tiers of upvote for:

1) minor social approval (equivalent to smiling at a person when they do something I think should receive _some_ signal of reward, in particular if I think they were following a nice incentive gradient, but where I don't think the thing they were doing was especially important).

2) strong social reward (where I want someone to be concretely rewarded for having done something hard, but I still don't think it's actually so important that it should rank highly in other people's attention).

3) "this is worth your time and attention", where the signal is more about other people than the post/comment author.

(It's possible you could split these into two entirely different schemas, but I think that'd result in unnecessary UI complexity without commensurate benefit)

comment by Raemon · 2018-10-31T01:42:25.614Z · score: 2 (None votes) · LW · GW

I notice that I often want to reply to LW posts with a joke, sometimes because it's funny, sometimes just as a way to engage a bit with the post when I liked it but don't otherwise have anything meaningful to say.

I notice that there's some mixed things going on here.

I want LW to be a place for high quality discussion.

I think it's actually pretty bad that comprehensive, high quality posts often get less engagement [LW · GW] because there's not much to add or contradict. I think authors generally are more rewarded by comments than by upvotes.

A potential solution is the "Offtopic" comment section we've been thinking about but haven't implemented yet, where either *I* can opt into marking a comment as "offtopic" (i.e. making less of a claim of other people finding it a good use of their time), or an author can if they don't like jokes.

comment by DanielFilan · 2018-10-31T05:21:12.526Z · score: 1 (None votes) · LW · GW

I think authors generally are more rewarded by comments than by upvotes.

Curious if you've done some sort of survey on this. My own feelings are that I care less about the average comment on one of my posts than 10 karma, and I care less about that than I do about a really very good comment (which might intuitively be worth like 30 karma) (but maybe I'm not provoking the right comments?). In general, I don't have an intuitive sense that comments are all that important except for the info value when reading, and I guess the 'people care about me' value as an incentive to write. I do like the idea of the thing I wrote being woven into the way people think, but I don't feel like comments are the best way for that to happen.

comment by Raemon · 2018-07-01T22:50:42.735Z · score: 2 (None votes) · LW · GW

Lately I've come to believe in the 3% rate of return rule.

Sometimes, you can self-improve a lot by using some simple hacks, or learning a new thing you didn't know before. You should be on the look out for such hacks.

But, once you've consumed all the low-hanging fruit, most of what there is to learn involves... just... putting in the work day-in-and-day-out. And you improve so slowly you barely notice. And only when you periodically look back do you realize how far you've come.

It's good to be aware of this, to set expectations.

I've noticed this re: habits, gratitude and exercise, after looking back on how I was 4 years ago.

But I hadn't noticed until recently that I'd made similar improvements at *improvising music on the spot*.

A few years ago I tried things in the genre of rap-battling, or making up songs on the fly, and it was quite hard and I felt bad when I did.

But a) recently I've noticed myself having an easier time doing this (to the extent that others are at least somewhat impressed)

And b), I encountered masters of the art. A friend-of-friend shared a podcast where they improvise *an entire musical* in realtime.

https://www.earwolf.com/show/off-book/

And it's *good*. They have the skill to make up rhymes on the fly *and* make up stories on the fly *and* have evolving characters undergoing emotional arcs on the fly, all at once.

And it's all quite silly, but it still, like, fits together.

After listening to it, my housemates immediately gave it a try... and it actually basically _worked_. It was obviously way less good than the podcast, but it was good enough that we felt good about it, and I could see the gears of how to get better at it.

I think most of my own progress here came from practicing making NON-improvised songs. The skill still transferred in terms of finding good rhymes and structure.

If you do _deliberate_ practice, I'm sure you can progress much faster.