Raemon's Shortform

post by Raemon · 2017-12-30T21:09:29.890Z · LW · GW · 556 comments

This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:

  1. don't feel ready to be written up as a full post
  2. I think might be made worse by the process of writing them up (i.e. made longer than they need to be)

I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.

556 comments

Comments sorted by top scores.

comment by Raemon · 2024-09-10T21:56:55.027Z · LW(p) · GW(p)

Reading through Backdoors as an analogy for deceptive alignment [LW · GW] prompted me to think about a LW feature I might be interested in. I don't have much math background, and have always found it very effortful to parse math-heavy posts. I expect there are other people in a similar boat.

In modern programming IDEs it's common to have hoverovers for functions and variables, and I think it's sort of crazy that we don't have that for math. So, I'm considering a LessWrong feature that:

  • takes in a post (i.e. when you save or go to publish a draft)
  • identifies the LaTeX terms in the post
  • creates a glossary for what each term means. (This should probably require confirmation by the author)
  • makes a hoverover for each term, so that when you mouse over it you're reminded what it means (a rough sketch of what this pipeline could look like is below).
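
To make the shape of this concrete, here's a rough sketch of what such a pipeline could look like. This is purely illustrative, not LessWrong's actual implementation; `draftDefinition` is a placeholder standing in for whichever LLM integration would be used, and the regex only handles the simplest inline-math case.

```typescript
// Illustrative sketch: pull LaTeX terms out of a post's source, draft a
// definition for each, and hold them for author confirmation before they
// are rendered as hoverovers.

type GlossaryEntry = {
  term: string;       // raw LaTeX, e.g. "\\mathcal{F}"
  definition: string; // LLM-drafted explanation of what the term means in this post
  confirmed: boolean; // flipped to true once the author signs off
};

// Extract inline LaTeX spans delimited by $...$ (a simplification; a real
// parser would also need to handle \( ... \) and display math).
function extractLatexTerms(postSource: string): string[] {
  const matches = postSource.match(/\$[^$]+\$/g) ?? [];
  // Deduplicate so each distinct term gets exactly one glossary entry.
  return [...new Set(matches.map((m) => m.slice(1, -1).trim()))];
}

// Placeholder for the LLM call; the real version would pass the post as
// context and ask what the term means there.
async function draftDefinition(term: string, postSource: string): Promise<string> {
  return `Definition of ${term}, drafted by an LLM using the post as context`;
}

async function buildGlossary(postSource: string): Promise<GlossaryEntry[]> {
  const terms = extractLatexTerms(postSource);
  return Promise.all(
    terms.map(async (term) => ({
      term,
      definition: await draftDefinition(term, postSource),
      confirmed: false, // the author reviews each entry before publishing
    })),
  );
}
```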

On "Backdoors", I asked the LessWrong-integrated LLM: "what do the Latex terms here mean"?

It replied :

The LaTeX symbols in this passage represent mathematical notations. Let me explain each of them:

  1. $\mathcal{F}$: This represents a class of functions. The curly F denotes that it's a set or collection of functions.
  2. $f \in \mathcal{F}$: This means that $f$ is a function that belongs to (is an element of) the class $\mathcal{F}$.
  3. $f^*$: The asterisk superscript typically denotes a modified or special version of something. Here, it represents the backdoored function.
  4. $\epsilon$: This Greek letter (epsilon) is often used to denote a small quantity or tolerance.
  5. $x^*$: Similar to $f^*$, this represents a special input, in this case, the backdoor trigger.

These symbols are used to formally describe the game between the attacker and defender:

  • $\mathcal{F}$ is the set of all possible functions the attacker can choose from.
  • $f$ is the original function chosen by the attacker.
  • $f^*$ is the backdoored function created by the attacker.
  • $\epsilon$ is the tolerance for how similar $f$ and $f^*$ must be on most inputs.
  • $x^*$ is the backdoor trigger input, on which $f$ and $f^*$ behave differently.

This looks at least pretty close to the intended meaning, but curious if it feels like it's wrong or missing nuance in any places @Jacob_Hilton [LW · GW] @Mark Xu [LW · GW]. Also curious whether this would be helpful to people.

Replies from: alexander-gietelink-oldenziel, Jacob_Hilton, habryka4, RavenclawPrefect, Zolmeister, Thane Ruthenis
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-09-10T22:40:08.676Z · LW(p) · GW(p)

Wow! This would be awesome!

I didn't look at this case in particular. In my experience, LLMs are quite good at regurgitating definitions from long math texts - which otherwise can take some effort to find by hand. 

comment by Jacob_Hilton · 2024-09-11T03:50:25.836Z · LW(p) · GW(p)

The LLM output looks correct to me.

comment by habryka (habryka4) · 2024-09-10T23:15:48.638Z · LW(p) · GW(p)

This does seem pretty cool. I also think it wouldn't be too hard to personalize the LLM to whatever level of math background you do have. So the things that you wrote make sense with a basic undergraduate background, but people with more formal math experience might want something that is more tailored to their higher-level understanding. 

This does make confirmation by the author harder, which I do think seems kind of important.

Replies from: Raemon
comment by Raemon · 2024-09-10T23:21:03.434Z · LW(p) · GW(p)

In the examples above, there are two pieces: "here's what this terminology typically means" (which seems less useful if you're already quite familiar), and "here's what it represents in this context". Would this be as simple as letting mathematicians toggle the first part off?

comment by Drake Thomas (RavenclawPrefect) · 2024-09-13T20:00:15.826Z · LW(p) · GW(p)

I've fantasized about a good version of this feature for math textbooks since college - would be excited to beta test or provide feedback about any such things that get explored! (I have a couple math-heavy posts I'd be down to try annotating in this way.)

comment by Zolmeister · 2024-09-11T01:52:29.361Z · LW(p) · GW(p)

Along the same lines, I found this analogy by concrete example [LW(p) · GW(p)] exceptionally elucidative.

comment by Thane Ruthenis · 2024-09-15T19:18:00.227Z · LW(p) · GW(p)

That seems like it'd be very helpful, yes!

Other related features that'd be easy to incorporate into this are John's ideas from here [? · GW]:

  • Imagine a tool in which I write out mathematical equations on the left side, and an AI produces prototypical examples, visuals, or stories on the right, similar to what a human mathematician might do if we were to ask what the mathematician were picturing when looking at the math. (Presumably the interface would need a few iterations to figure out a good way to adjust the AI's visualization to better match the user's.)
  • Imagine a similar tool in which I write on the left, and on the right an AI produces pictures of "what it's imagining when reading the text". Or predicted emotional reactions to the text, or engagement level, or objections, etc.
  • Debugger functionality in some IDEs shows variable-values next to the variables in the code. Imagine that, except with more intelligent efforts to provide useful summary information about the variable-values. E.g. instead of showing all the values in a big tensor, it might show the dimensions. Or it might show Fermi estimates of runtime of different chunks of the code.
  • Similarly, in an environment for writing mathematics, we could imagine automated annotation with asymptotic behavior, units, or example values. Or a sidebar with an auto-generated stack trace showing how the current piece connects to everything else I'm working on.

I think those would also be pretty useful, including for people writing the math-heavy posts.

comment by Raemon · 2024-08-14T23:41:43.953Z · LW(p) · GW(p)

The “prompt shut down” clause seemed like one of the more important clauses in the SB 1047 bill. I was surprised other people I talked to didn't seem to think it mattered that much, and wanted to argue/hear-arguments about it.

The clause says AI developers, and compute-cluster operators, are required to have a plan for promptly shutting down large AI models.

People's objections were usually:

"It's not actually that hard to turn off an AI – it's maybe a few hours of running around pulling plugs out of server racks, and it's not like we're that likely to be in the sort of hard takeoff scenario where the differences in a couple hours of manually turning it off will make the difference."

I'm not sure if this is actually true, but, assuming it's true, it still seems to me like the shutdown clause is one of the more uncomplicatedly-good parts of the bill.

Some reasons:

1. I think the ultimate end game for AI governance will require being able to quickly notice and shut down rogue AIs. That's what it means for the acute risk period to end. 

2. In the nearer term, I expect the situation where we need to stop running an AI to be fairly murky. Shutting down an AI is going to be very costly. People don't like doing costly things. People also don't like doing things that involve lots of undocumented, complex manual decisions that are going to be a pain. If a company (or compute cluster) doesn't have an explicit plan for how to shut down an AI, I think they're a lot less likely to do it. In particular if it'd be a big economic loss, and it's not entirely obvious they have to.

If a government is trying to impose this cost from the outside, and a company doesn't want to, they'll probably make a bunch of arguments about how unreasonable and/or impossible the request is.

3. I also think "shut it all down" is something that might be important to do, and while not currently in the overton window, might be in the overton window later. 

I think making "prompt shutdown" a concrete task that companies and governments are thinking about makes it significantly more likely to happen. And I think/hope it'll serve as a building-block scaffold, such that later both governments and companies will have an easier time considering plans that include "prompt shutdown" as a component.

More "straightforwardly good."

There's a lot in the bill that I think is probably good, but it does depend on how things get enforced. For example, I think it's good to require companies to have a plan to reasonably-assure that their AIs are good. But, I've heard some people be concerned "aren't basically all SSP-like plans basically fake? is this going to cement some random bureaucratic bullshit rather than actual good plans?" And yeah, that does seem plausible.

I'd take the risk of that on current margins. But "if you're running a big model, you need to have the capacity to turn it off quickly" seems like just a pretty reasonable, necessary piece of legislation?

Replies from: aysja, akash-wasil, michael-roe
comment by aysja · 2024-08-16T05:56:07.312Z · LW(p) · GW(p)

Largely agree with everything here. 

But, I've heard some people be concerned "aren't basically all SSP-like plans basically fake? is this going to cement some random bureaucratic bullshit rather than actual good plans?" And yeah, that does seem plausible.

I do think that all SSP-like plans are basically fake, and I’m opposed to them becoming the bedrock of AI regulation. But I worry that people take the premise “the government will inevitably botch this” and conclude something like “so it’s best to let the labs figure out what to do before cementing anything.” This seems alarming to me. Afaict, the current world we’re in is basically the worst case scenario—labs are racing to build AGI, and their safety approach is ~“don’t worry, we’ll figure it out as we go.” But this process doesn’t seem very likely to result in good safety plans either; charging ahead as is doesn’t necessarily beget better policies. So while I certainly agree that SSP-shaped things are woefully inadequate, it seems important, when discussing this, to keep in mind what the counterfactual is. Because the status quo is not, imo, a remotely acceptable alternative either.

Replies from: ricraz, Raemon
comment by Richard_Ngo (ricraz) · 2024-08-16T21:30:41.272Z · LW(p) · GW(p)

Afaict, the current world we’re in is basically the worst case scenario

the status quo is not, imo, a remotely acceptable alternative either

Both of these quotes display types of thinking which are typically dangerous and counterproductive, because they rule out the possibility that your actions can make things worse.

The current world is very far from the worst-case scenario (even if you have very high P(doom), it's far away in log-odds) and I don't think it would be that hard to accidentally make things considerably worse.

comment by Raemon · 2024-08-16T18:31:18.147Z · LW(p) · GW(p)

I think one alternative here that isn't just "trust AI companies" is "wait until we have a good Danger Eval, and then get another bit of legislation that specifically focuses on that, rather than hoping that the bureaucratic/political process shakes out with a good set of SSP industry standards."

I don't know that that's the right call, but I don't think it's a crazy position from a safety perspective.

comment by Akash (akash-wasil) · 2024-08-15T17:01:05.989Z · LW(p) · GW(p)

I largely agree that the "full shutdown" provisions are great. I also like that the bill requires developers to specify circumstances under which they would enact a shutdown:

(I) Describes in detail the conditions under which a developer would enact a full shutdown.

In general, I think it's great to help governments understand what kinds of scenarios would require a shutdown, make it easy for governments and companies to enact a shutdown, and give governments the knowledge/tools to verify that a shutdown has been achieved.

comment by Michael Roe (michael-roe) · 2024-08-15T12:44:00.922Z · LW(p) · GW(p)

If your AI is doing something that's causing harm to third parties that you are legally liable for... chances are, whatever it is doing, it is doing it at Internet speeds, and even small delays are going to be very, very expensive.

I am imagining that all the people who got harmed after the first minute or so after the AI went rogue are going to be pointing at SB 1047 to argue that you are negligent, and therefore liable for whatever bad thing it did.

Replies from: michael-roe
comment by Michael Roe (michael-roe) · 2024-08-15T12:55:31.569Z · LW(p) · GW(p)

With a nod to the recent CrowdStrike incident... if your AI is sending out packets to other people's Windows systems, and bricking them about as fast as it can send packets through its ethernet interface, your liability may be expanding rapidly. An additional billion dollars for each hour you don't shut it down sounds possible.

comment by Raemon · 2024-11-27T20:22:34.592Z · LW(p) · GW(p)

Motif coming up for me: a lot of skill ceilings are much higher than you might think, and worth investing in.

Some skills that you can be way better at:

  • Listening to people, and hearing what they're actually trying to say, and gaining value from it
  • Noticing subtle things that are important. You can learn to notice like 5 different things happening inside you or around you, that occurred in <1 second.
  • Being concrete, in ways that help you resolve confusion and gain momentum on solving problems.
  • Each stage of OODA Looping is quite deep
    • (i.e. "Observe", "Orient", "Decide", and "Act" each have a lot of deep subskills. The depth of "Noticing" is a subset of the overall set of "Observation" skills")
Replies from: Raemon, tommy-nguyen-1, kh, andrei-alexandru-parfeni
comment by Raemon · 2024-11-28T23:12:39.983Z · LW(p) · GW(p)

For people asking about ‘noticing 5 things happening within a second or so’, you can see at least one (fictional but representative) example in Scaffolding for "Noticing Metacognition" [LW · GW].

comment by t14n (tommy-nguyen-1) · 2024-11-28T15:24:29.663Z · LW(p) · GW(p)

Skill ceilings across humanity are quite high. I think of super genius chess players, Terry Tao, etc.

A particular individual's skill ceiling is relatively low (compared to these maximally gifted individuals). Sure, everyone can be better at listening, but there's a high non-zero chance you have some sort of condition or life experience that makes it more difficult to develop it (hearing disability, physical/mental illness, trauma, an environment of people who are actually not great at communicating themselves, etc).

I'm reminded of what Samo Burja calls "completeness hypothesis":

> It is the idea that having all of the important contributing pieces makes a given effect much, much larger than having most of the pieces. Having 100% of the pieces of a car produces a very different effect than having 90% of the pieces. The four important pieces for producing mastery in a domain are good feedback mechanisms, extreme motivation, the right equipment, and sufficient time. According to the Completeness Hypothesis, people that stably have all four of these pieces will have orders-of-magnitude greater skill than people that have only two or three of the components.

This is not a fatalistic recommendation to NOT invest in skill development. Quite the opposite.

I recommend Dan Luu's "95%-ile isn't that good".

Most people do not approach anywhere near their individual skill ceiling because they lack the four things that Burja lists. As Luu points out, most people don't care that much to develop their skills. People do not care to find good feedback loops, cultivate the motivation, or carve out sufficient time to develop skills. Certain skills may be limited by resources (equipment), but there are hacks that can lead to skill development at a sub-optimal rate (e.g. calisthenics for muscle mass development vs weighted training. Maybe you can't afford a gym membership but push-ups are free).

As @sunwillrise mentioned, there are diminishing returns for developing a skill. The gap from 0th % -> 80th % is actually quite narrow. 80th % -> 98% requires work but is doable for most people, and you probably start to experience diminishing returns around this range.

98%+ results are reserved for those who can have long-term stable environments to cultivate the skill, or the extremely talented. 

Replies from: CstineSublime
comment by CstineSublime · 2024-11-29T04:41:11.311Z · LW(p) · GW(p)

Feedback loops I think are the principal bottleneck in my skill development, aside from the fact that if you're a novice you don't even know what you should be noticing (even if you have enough awareness to be cognizant of all signs and outputs of an act).

To give an example, I'm currently trying to learn how to generate client leads through video content for Instagram. Unless someone actually tells me about a video they liked and what they liked about it, figuring out how to please the algorithm to generate more engagement is hard. The only thing that "works" - tagging other people. Nothing about the type of content, the framing of the shots, the subject matter, the audio... nope... just whether or not one or more other Instagram accounts are tagged in it. (Of course since the end objective is - 'get commissioned' perhaps optimizing for Instagram engagement is not even the thing I should be optimizing at all... how would I know?)

Feedback loops are hard. A desirable metaskill to have would be developing tight feedback loops.

comment by Kaarel (kh) · 2024-11-28T16:13:26.522Z · LW(p) · GW(p)

there's imo probably not any (even-nearly-implementable) ceiling for basically any rich (thinking-)skill at all[1] — no cognitive system will ever be well-thought-of as getting close to a ceiling at such a skill — it's always possible to do any rich skill very much better (I mean these things for finite minds in general, but also when restricting the scope to current humans)

(that said, (1) of course, it is common for people to become better at particular skills up to some time and to become worse later, but i think this has nothing to do with having reached some principled ceiling; (2) also, we could perhaps eg try to talk about 'the artifact that takes at most bits to specify (in some specification-language) which figures out units of math the quickest (for some sufficiently large compared to )', but even if we could make sense of that, it wouldn't be right to think of it as being at some math skill ceiling to begin with, because it will probably very quickly change very much about its thinking (i.e. reprogram itself, imo plausibly indefinitely many times, including indefinitely many times in important ways, until the heat death of the universe or whatever); (3) i admit that there can be some purposes for which there is an appropriate way to measure goodness at some rich skill with a score in , and for such a purpose potential goodness at even a rich skill is of course appropriate to consider bounded and optimal performance might be rightly said to be approachable, but this somehow feels not-that-relevant in the present context)


  1. i'll try to get away with not being very clear about what i mean by a 'rich (thinking-)skill' except that it has to do with having a rich domain (the domain either effectively presenting any sufficiently rich set of mathematical questions as problems or relating richly to humans, or in particular just to yourself, usually suffices) and i would include all the examples you give ↩︎

comment by sunwillrise (andrei-alexandru-parfeni) · 2024-11-28T03:35:00.193Z · LW(p) · GW(p)

a lot of skill ceilings are much higher than you might think, and worth investing in

The former doesn't necessarily imply the latter in general, because even if we are systematically underestimating the realistic upper bound for our skill level in these areas, we would still have to deal with diminishing marginal returns to investing in any particular one. As a result, I am much more confident of the former claim being correct for the average LW reader than of the latter. In practice, my experience tells me that you often have "phase changes" of sorts, where there's a rather binary instead of continuous response to a skill level increase: either you've hit the activation energy level, and thus unlock the self-reinforcing loop of benefits that flow from the skill (once you can apply it properly and iterate on it or use it recursively), or you haven't, in which case any measurable improvement is minimal. It's thus often more important to get past the critical point than to make marginal improvements either before or after hitting it.

On the other hand, many of the skills you mentioned afterwards in your comment seem relatively general-purpose, so I could totally be off-base in these specific cases.

Replies from: Raemon, TsviBT
comment by Raemon · 2024-11-28T20:21:26.431Z · LW(p) · GW(p)

The "you need to hit a particular activation level" seems right to me. Generally when I'm trying to teach people skills, I try to get them to fluency-escape-velocity, where it is net-positive to apply the skill to their day-job.

There are additional important bits about how hitting particular thresholds allows you to build engines out of multiple skills (I'll probably reply more to t14n's comment [LW · GW] about that).

comment by TsviBT · 2024-11-28T07:57:53.132Z · LW(p) · GW(p)

The former doesn't necessarily imply the latter in general, because even if we are systematically underestimating the realistic upper bound for our skill level in these areas, we would still have to deal with diminishing marginal returns to investing in any particular one.

On the other hand, even if what you say is true, skill headroom may still imply that it's worth building shared arts around such skills. Shareability and build-on-ability changes the marginal returns a lot.

comment by Raemon · 2019-07-07T22:22:08.913Z · LW(p) · GW(p)

There was a particular mistake I made over in this thread [LW(p) · GW(p)]. Noticing the mistake didn't change my overall position (and also my overall position was even weirder than I think people thought it was). But, seemed worth noting somewhere.

I think most folk morality (or at least my own folk morality), generally has the following crimes in ascending order of badness:

  • Lying
  • Stealing
  • Killing
  • Torturing people to death (I'm not sure if torture-without-death is generally considered better/worse/about-the-same-as killing)

But this is the conflation of a few different things. One axis I was ignoring was "morality as coordination tool" vs "morality as 'doing the right thing because I think it's right'." And these are actually quite different. And, importantly, you don't get to spend many resources on morality-as-doing-the-right-thing unless you have a solid foundation of the morality-as-coordination-tool.

There's actually a 4x3 matrix you can plot lying/stealing/killing/torture-killing into, where the other axis is:

  • harming the ingroup
  • harming the outgroup (who you may benefit from trading with)
  • harming powerless people who don't have the ability to trade or collaborate with you

And you basically need to tackle these in this order. If you live in a world where even people in your tribe backstab each other all the time, you won't have spare resources to spend on the outgroup or the powerless until your tribe has gotten its basic shit together and figured out that lying/stealing/killing each other sucks.

If your tribe has its basic shit together, then maybe you have the slack to ask the question: "hey, that outgroup over there, who we regularly raid and steal their sheep and stuff, maybe it'd be better if we traded with them instead of stealing their sheep?" and then begin to develop cosmopolitan norms.

If you eventually become a powerful empire (or similar), eventually you may notice that you're going around exploiting or conquering and... maybe you just don't actually want to do that anymore? Or maybe, within your empire, there's an underclass of people who are slaves or slave-like instead of being formally traded with. And maybe this is locally beneficial. But... you just don't want to do that anymore, because empathy or because you've come to believe in principles that say not to, or something. Sometimes this is because the powerless people would actually be more productive if they were free builders/traders, but sometimes it just seems like the right thing to do.

Avoiding harming the ingroup and productive outgroup are things that you're locally incentivized to do because cooperation is very valuable. In an iterated strategy game, these are things you're incentivized to do all the way along.

Avoiding harming the powerless is something that you are limited in your ability to do until the point where it starts making sense to cash in your victory points.

I think this is all pretty non-explicit in most discussions of morality/ethics/what-people-should-do, and conflation of "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are" is common.

Replies from: Benquo, Benquo
comment by Benquo · 2019-07-08T12:34:49.811Z · LW(p) · GW(p)

On the object level, the three levels you described are extremely important:

  • harming the ingroup
  • harming the outgroup (who you may benefit from trading with)
  • harming powerless people who don't have the ability to trade or collaborate with you

I'm basically never talking about the third thing when I talk about morality or anything like that, because I don't think we've done a decent job at the first thing. I think there's a lot of misinformation out there about how well we've done the first thing, and I think that in practice utilitarian ethical discourse tends to raise the message length of making that distinction, by implicitly denying that there's an outgroup.

I don't think ingroups should be arbitrary affiliation groups. Or, more precisely, "ingroups are arbitrary affiliation groups" is one natural supergroup which I think is doing a lot of harm, and there are other natural supergroups following different strategies, of which "righteousness/justice" is one that I think is especially important. But pretending there's no outgroup is worse than honestly trying to treat foreigners decently as foreigners who can't be counted on to trust us with arbitrary power or share our preferences or standards.

Sometimes we should be thinking about what internal norms to coordinate around (which is part of how the ingroup is defined), and sometimes we should be thinking about conflicts with other perspectives or strategies (how we treat outgroups). The Humility Argument for Honesty [LW · GW] and Against Neglectedness Considerations are examples of an idea about what kinds of norms constitute a beneficial-to-many supergroup, while Should Effective Altruism be at war with North Korea? [LW · GW] was an attempt to raise the visibility of the existence of outgroups, so we could think strategically about them.

Replies from: eukaryote, Raemon
comment by eukaryote · 2019-07-08T17:50:15.494Z · LW(p) · GW(p)

I'm basically never talking about the third thing when I talk about morality or anything like that, because I don't think we've done a decent job at the first thing.

Wait, why do you think these have to be done in order?

Replies from: Raemon, Benquo
comment by Raemon · 2019-07-08T20:09:49.921Z · LW(p) · GW(p)

Some beliefs of mine, I assume different from Ben's but I think still relevant to this question are:

At the very least, your ability to accomplish anything re: helping the outgroup or helping the powerless is dependent on having spare resources to do so.

There are many clusters of actions which might locally benefit the ingroup and leave the outgroup or powerless in the cold, but which then give future generations of the ingroup more ability to take useful actions to help them. i.e. if you're a tribe in the wilderness, I'd much rather you invent capitalism and build supermarkets than that you try to help the poor. The helping of the poor is nice but barely matters in the grand scheme of things.

I don't personally think you need to halt *all* helping of the powerless until you've solidified your treatment of the ingroup/outgroup. But I could imagine future me changing my mind about that.

A major suspicion/confusion I have here is that the two frames:

  • "Help the ingroup, so that the ingroup eventually has the bandwidth and slack to help the outgroup and the powerless", and
  • "Help the ingroup, because it's convenient and they're the ingroup"

Look very similar.

Or, alternately: Optimizing even for the welfare of the ingroup, vs the longterm production power of the ingroup are fairly different things. For example, say that income inequality leads to less welfare (because what people really care about is relative status). But, capitalism longterm yields way more resources, using mechanisms that specifically depend on income inequality.

An argument someone once made to me [I'm not sure if the actual facts here check out but the thought experiment was sufficient to change my outlook] was "look, 100 years ago Mexico made choices that optimized for more equality at the expense of 1% economic growth. Trading 1% economic growth for a lot of equality might sound like a good trade, but it means that 100 years later people in Mexico are literally dying to try to get into the US."

(This fits into the ingroup/outgroup/powerless schema if you think of the "trade 1% growth for equality" as a choice that elites (rich/wealthy/well-connected/intelligentsia) might make, as a pseudo-ingroup, in order to help the less fortunate in their own country, who are a pseudo-relative-outgroup.)

comment by Benquo · 2019-07-09T01:18:19.140Z · LW(p) · GW(p)

Attention is scarce and there are lots of optimization processes going on, so if you think the future is big relative to the present, interventions that increase the optimization power serving your values are going to outperform direct interventions. This doesn't imply that we should just do infinite meta, but it does imply that the value of direct object-level improvements will nearly always be via how they affect different optimizing processes.

comment by Raemon · 2019-07-08T19:58:44.371Z · LW(p) · GW(p)

A lot of this makes sense. Some of it feels like I haven't quite understood the frame you're using (and unfortunately can't specify further which parts those are because it's a bit confusing)

One thing that seems relevant: My preference to "declare staghunts first and get explicit buy-in before trying to do anything cooperatively-challenging" feels quite related to the "ambiguity over who is in the ingroup causes problems" thing.

comment by Benquo · 2019-07-08T12:30:03.400Z · LW(p) · GW(p)

This feels like the most direct engagement I've seen from you with what I've been trying to say. Thanks! I'm not sure how to describe the metric on which this is obviously to-the-point and trying-to-be-pin-down-able, but I want to at least flag an example where it seems like you're doing the thing.

comment by Raemon · 2018-06-25T23:39:29.991Z · LW(p) · GW(p)

Periodically I describe a particular problem with the rationalsphere with the programmer metaphor of:

"For several years, CFAR took the main LW Sequences Git Repo and forked it into a private branch, then layered all sorts of new commits, ran with some assumptions, and tweaked around some of the legacy code a bit. This was all done in private organizations, or in-person conversation, or at best, on hard-to-follow-and-link-to-threads on Facebook.

"And now, there's a massive series of git-merge conflicts, as important concepts from CFAR attempt to get merged back into the original LessWrong branch. And people are going, like 'what the hell is focusing and circling?'"

And this points towards an important thing about _why_ I think it's important to keep people actually writing down and publishing their longform thoughts (esp the people who are working in private organizations)

And I'm not sure how to actually really convey it properly _without_ the programming metaphor. (Or, I suppose I just could. Maybe if I simply remove the first sentence the description still works. But I feel like the first sentence does a lot of important work in communicating it clearly)

We have enough programmers that I can basically get away with it anyway, but it'd be nice to not have to rely on that.

comment by Raemon · 2024-04-17T19:12:17.760Z · LW(p) · GW(p)

There's a skill of "quickly operationalizing a prediction, about a question that is cruxy for your decisionmaking."

And, it's dramatically better to be very fluent at this skill, rather than "merely pretty okay at it."

Fluency means you can actually use it day-to-day to help with whatever work is important to you. Day-to-day usage means you can actually get calibrated re: predictions in whatever domains you care about. Calibration means that your intuitions will be good, and _you'll know they're good_.

Fluency means you can do it _while you're in the middle of your thought process_, and then return to your thought process, rather than awkwardly bolting it on at the end.

I find this useful at multiple levels-of-strategy. i.e. for big picture 6 month planning, as well as for "what do I do in the next hour."

I'm working on this as a full blogpost but figured I would start getting pieces of it out here for now.

A lot of this skill is building off of CFAR's "inner simulator" framing. Andrew Critch recently framed this to me as "using your System 2 (conscious, deliberate intelligence) to generate questions for your System 1 (fast intuition) to answer." (Whereas previously, he'd known System 1 was good at answering some types of questions, but he thought of it as responsible for both "asking" and "answering" those questions)

But, I feel like combining this with "quickly operationalize cruxy Fatebook predictions" makes it more of a power tool for me. (Also, now that I have this mindset, even when I can't be bothered to make a Fatebook prediction, I have a better overall handle on how to quickly query my intuitions)

I've been working on this skill for years and it only really clicked together last week. It required a bunch of interlocking pieces that all require separate fluency:

1. Having three different formats for Fatebook (the main website, the slack integration, and the chrome extension), so, pretty much wherever I'm thinking-in-text, I'll be able to quickly use it.

2. The skill of "generating lots of 'plans'", such that I always have at least two plausibly good ideas on what to do next.

3. Identifying an actual crux for what would make me switch to one of my backup plans.

4. Operationalizing an observation I could make that'd convince me of one of these cruxes.

Replies from: Viliam, romeostevensit
comment by Viliam · 2024-04-18T07:57:04.479Z · LW(p) · GW(p)

I'm working on this as a full blogpost but figured I would start getting pieces of it out here for now.

Looking forward to specific examples, pretty please.

Replies from: romeostevensit
comment by romeostevensit · 2024-04-19T05:07:15.319Z · LW(p) · GW(p)

Tracing out the chain of uncertainty. Let's say that I'm thinking about my business and come up with an idea. I'm uncertain how much to prioritize the idea vs the other swirling thoughts. If I thought it might cause my business to 2x revenue I'd obviously drop a lot and pursue it. Ok, how likely is that based on prior ideas? What reference class is the idea in? Under what world model is the business revenue particularly sensitive to the outputs of this idea? What's the most uncertain part of that model? How would I quickly test it? Who would already know the answer? etc.

comment by romeostevensit · 2024-04-19T05:04:16.612Z · LW(p) · GW(p)

My shorthand has been 'decision leverage.' But that might not hit the center of what you're aiming at here.

comment by Raemon · 2018-07-15T15:05:12.763Z · LW(p) · GW(p)

I disagree with this particular theunitofcaring post "what would you do with 20 billion dollars?", and I think this is possibly the only area where I disagree with theunitofcaring's overall philosophy, so it seemed worth mentioning. (This crops up occasionally in her other posts but it is most clear cut here).

I think if you got 20 billion dollars and didn't want to think too hard about what to do with it, donating to the Open Philanthropy Project is a pretty decent fallback option.

But my overall take on how to handle the EA funding landscape has changed a bit in the past few years. Some things that theunitofcaring doesn't mention here, which seem at least warrant thinking about:

[Each of these has a bit of a citation-needed, that I recall hearing or reading in reliable sounding places, but correct me if I'm wrong or out of date]

1) OpenPhil has (at least? I can't find more recent data) 8 billion dollars, and makes something like 500 million a year in investment returns. They are currently able to give 100 million away a year.

They're working on building more capacity so they can give more. But for the foreseeable future, they _can't_ actually spend more money than they are making.

2) OpenPhil doesn't want to be responsible for more than 50% of an org's budget, because being fully dependent on a single foundation creates a distorted relationship (where the org feels somewhat constrained by the foundation's own goals/whims/models).

If you have a diverse funding base, you can just do the thing you think is best. If you have a small funder base, if you aren't perfectly aligned with the funder, there is pressure to shift towards projects that they think are best (even if the funder is trying _not_ to put such pressure on you)

I'm not sure how big a concern this _should_ be, but AFAICT it's currently their policy.

This means there's a fair bit of value, if you had $20 billion, to setting up an alternative foundation to OpenPhil, just from the perspective of making sure the best orgs can _actually_ get fully funded.

3) OpenPhil has high standards for where to donate.

This is good.

But the fact that they have 8 billion, make another 500 million a year and spend down only around 100 million, means that the funding niche that actually needs filling right now is not more-of-OpenPhils-strategy.

There's a weird situation in the current landscape where it feels like money is unlimited... but there are still EA-relevant projects that need money. Typically ones that are younger, where data is more scarce.

Figuring out which of those actually deserve money is hard (esp. without creating weird incentives down the line, where _anyone_ with a half-baked project can show up and get your money). But this seems more like the domain where another major funder would be valuable.

...

Now, this is all hypothetical (theunitofcaring doesn't have 20 billion and neither do I). But this does point at an important shift on how to think about donating, if you're a small-to-medium sized donor.

A while ago I wrote "Earning to Give is Costly Signalling". Power laws mean that the richest people dwarf the donations of smaller-time donors. Therefore, most of the value of EA donors is convincing rich people to give (and think) in an EA fashion.

Now I think it's weirder than that.

Given that EA collectively has access to billions of dollars (plus smaller-but-still-larger-than-we-know-what-to-do-with donor pools from a few other multi-millionaires)...

If you're a small-to-medium donor, roles that make sense to play are:

  • Provide a costly signal for _new_ charities that aren't already on OpenPhil, BERI et al's radar.
  • Help seed-fund new projects that you have a lot of local information on, that you think make a credible case for being high impact
  • Donate to existing orgs, to help fill out the 50% funding gap (this is still partly about making sure they get funded, and also a sort of continued costly signal of the org's value to the larger funders). Many orgs also have tax-relevant status where it matters what proportion of their budget comes from private vs public donations, so making sure they have a diverse donor base is helpful.

This last option is basically EA business as usual, which is still important, but it's only one of several possible strategies that should be under consideration.

I also think it's important to consider using the money for your own personal development, or the development of people you know who you think could do well. Hire a personal trainer, or a tutor. Save up runway so that you can afford to take time off to think, and plan, or start a project.

comment by Raemon · 2024-03-04T00:52:36.371Z · LW(p) · GW(p)

A major goal I had for the LessWrong Review was to be "the intermediate metric that let me know if LW was accomplishing important things", which helped me steer.

I think it hasn't super succeeded at this.

I think one problem is that it just... feels like it generates stuff people liked reading, which is different from "stuff that turned out to be genuinely important."

I'm now wondering "what if I built a power-tool that is designed for a single user to decide which posts seem to have mattered the most (according to them), and, then, figure out which intermediate posts played into them." What would the lightweight version of that look like?

Another thing is, like, I want to see what particular other individuals thought mattered, as opposed to a generated aggregate that doesn't have any theory underlying it. Making the voting public veers towards some kind of "what did the cool people think?" contest, so I feel anxious about that, but, I do think the info is just pretty useful. But like, what if the output of the review is a series of individual takes on what-mattered-and-why, collectively, rather than an aggregate vote?

Replies from: 1a3orn, pktechgirl, ryan_greenblatt
comment by 1a3orn · 2024-03-04T15:02:26.004Z · LW(p) · GW(p)

So Alasdair MacIntyre, says that all enquiry into truth and practical rationality takes place within a tradition, sometimes capital-t Tradition, that provides standards for things like "What is a good argument" and "What things can I take for granted" and so on. You never zoom all the way back to simple self-evident truths or raw-sense data --- it's just too far to go. (I don't know if I'd actually recommend MacIntyre to you, he's probably not sufficiently dense / interesting for your projects, he's like a weird blend of Aquinas and Kuhn and Lakatos, but he is interesting at least, if you have a tolerance for.... the kind of thing he is.)

What struck me with a fair number of reviews, at this point, was that they seemed... kinda resigned to a LW Tradition, if it ever existed, no longer really being a single thing? Like we don't have shared standards any more for what is a good argument or what things can be taken for granted (maybe we never did, and I'm golden-age fallacying). There were some reviews saying "idk if this is true, but it did influence people" and others being like "well I think this is kinda dumb, but seems important" and I know I wrote one being like "well these are at least pretty representative arguments of the kind of things people say to each other in these contexts."

Anyhow what I'm saying is that -- if we operate in a MacIntyrean frame -- it makes sense to be like "this is the best work we have" within a Tradition, but humans start to spit out NaNs / operation not defined if you try to ask them "is this the best work we have" across Traditions. I don't know if this is true of ideal reasoners but it does seem to be true of... um, any reasoners we've ever seen, which is more relevant.

comment by Elizabeth (pktechgirl) · 2024-03-04T03:50:48.905Z · LW(p) · GW(p)

I wonder if dramatically shrinking the review's winners' circle would help? Right now it feels huge to me. 

Replies from: Raemon
comment by Raemon · 2024-03-04T04:18:33.151Z · LW(p) · GW(p)

What do you mean by winner's circle? Like top 10 instead of top 50, or something else?

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2024-03-04T23:09:46.680Z · LW(p) · GW(p)

yeah, top 10 or even just top 5. 

comment by ryan_greenblatt · 2024-03-04T02:08:30.525Z · LW(p) · GW(p)

Skimming the review posts for 2022, I think about 5/50 taught me something reasonably substantial and useful. I think another 10/50 provide a useful short idea and a label/pointer for that idea, but don't really provide a large valuable lesson. Perhaps 20/50 are posts I might end up referring to at some point or recommending someone read.

Overall, I think I tend to learn way more in person talking to people than from LW posts, but I think LW posts are useful to reference reasonably often.

Replies from: Raemon
comment by Raemon · 2024-03-04T04:29:29.536Z · LW(p) · GW(p)

Those numbers sound reasonable to me (i.e. I might give similar numbers, although I'd probably list different posts than you)

Another angle I've had here: in my preferred world, the "Best of LessWrong" page makes it explicit that, in some sense, very few (possibly zero?) posts actually meet the bar we'd ideally aspire to. The Best of LessWrong page highlights the best stuff so far, but I think it'd be cool if there was a deliberately empty, aspirational section.

But, then I feel a bit stuck on "what counts for that tier?"

Here's another idea:

Open Problems 

(and: when voting on Best of LessWrong, you can 'bet' that a post will contribute to solving an Open Problem)

Open Problems could be a LessWrong feature which is basically a post describing an important, unsolved problem. They'd each be owned by a particular author or small group, who get to declare when they consider the problem "solved." (If you want people to trust/care about the outcome of particular Open Problem, you might choose two co-owners who are sort of adversarial collaborators, and they have to both agree it was solved)

Two use-cases for Open Problems could be:

  • As a research target for an individual researcher (or team), i.e. setting the target they're ultimately aiming for.
  • As a sort of X-Prize, for others to attempt to contribute to.

So we'd end up with problem statements like:

  • "AI Alignment for superintelligences is solved" (maybe Eliezer and Paul cosign a problem statement on that)
  • You (Ryan) and Buck could formulate some kind of Open Problem on AI Control
  • I'd like there to be some kind of "we have a rationality training program that seems to demonstrably work"

And then there's a page that highlights "these are the open problems people on LessWrong have upvoted the most as 'important'", and "here are the posts that people are betting will turn out to be relevant to the final solution." (maybe this is operationalized as, like, a manifold market bet about whether the problem-author will eventually declare a given post to be an important contribution)
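
To make that concrete, here's a minimal sketch of how the underlying data could be structured. All type and field names here are assumptions for illustration; this isn't an actual LessWrong feature or schema.

```typescript
// Illustrative data model for the "Open Problems" idea: problems with owners
// who declare resolution, plus bets that a given post will contribute to the
// eventual solution.

type OpenProblem = {
  title: string;
  statement: string;   // the post describing the unsolved problem
  ownerIds: string[];  // e.g. two adversarial co-owners who must both agree it's solved
  solved: boolean;
};

type ContributionBet = {
  problemId: string;
  postId: string;      // the post being bet on as part of the eventual solution
  bettorId: string;
  credence: number;    // bettor's probability that the post ends up mattering
};

// Rank the posts most bet on as contributing to a given open problem.
function topContenders(bets: ContributionBet[], problemId: string): string[] {
  const totals = new Map<string, number>();
  for (const bet of bets) {
    if (bet.problemId !== problemId) continue;
    totals.set(bet.postId, (totals.get(bet.postId) ?? 0) + bet.credence);
  }
  return [...totals.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([postId]) => postId);
}
```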

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-03-04T05:10:20.892Z · LW(p) · GW(p)

I don't think that a solution to open problems being posted on LW would indicate that LW (the website and org, not the surrounding community) was accomplishing something useful.

E.g., imagine using the same metric for arXiv. (This case is more extreme, but I think it corresponds somewhat.)

Awkwardly, I think the existence of good posts is unlikely to track LW's contribution. This seems especially true for posts about solutions to technical problems. The marginal contribution of LW is more in making it more likely that better posts are read and in making various conversations happen (with a variety of other diffuse potential advantages).

I don't know what a good metric for LW is.

Replies from: Raemon
comment by Raemon · 2024-03-04T05:24:28.328Z · LW(p) · GW(p)

I'm not 100% sure I got your point. 

I think (but am unsure) that what I care about is more like a metric for "is useful intellectual progress getting made" (whether or not LessWrong-the-website was causal in that progress). 

The point here is not to evaluate the Lightcone team's work, but for the community to have a better benchmark for its collective progress (which then hopefully, like, improves credit-assignment, which then hopefully improves our ability to collectively focus on useful stuff as the community scales)

This point does seem interesting though and maybe a different frame than I had previously been thinking in:

The marginal contribution of LW is more in making it more likely that better posts are read and in making various conversations happen (with a variety of other diffuse potential advantages).

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-03-04T05:34:03.683Z · LW(p) · GW(p)

I think (but am unsure) that what I care about is more like a metric for "is useful intellectual progress getting made" (whether or not LessWrong-the-website was causal in that progress).

Seems reasonable. From my perspective LW review is very bad for measuring overall (human) progress on achieving good things, though plausibly better than any other specific review or ranking process that has a considerable amount of buy in.

Replies from: Raemon
comment by Raemon · 2024-03-04T18:22:46.932Z · LW(p) · GW(p)

I wasn't quite sure from your phrasings: 

Do you think replacing (or at least combining) LW Review with the Open Problems frame would be an improvement on that axis?

Also: does it seem useful to you to measure overall progress on [the cluster of good things that the rationality and/or alignment community are pointed at?]?

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-03-04T21:35:19.069Z · LW(p) · GW(p)

Do you think replacing (or at least combining) LW Review with the Open Problems frame would be an improvement on that axis?

Uh, maybe for combining? I think my main complaint with LW review as a metric is more just that I disagree with the preferences of other people and think that a bunch of work is happening in places other than LW. I don't really think Open Problems helps much with this from my perspective. (In many cases I can't name a clear and operationalized open problem and more just think "more progress here would be good.")

comment by Raemon · 2017-12-31T08:02:54.527Z · LW(p) · GW(p)

Something struck me recently, as I watched Kubo, and Coco - two animated movies that both deal with death, and highlight music and storytelling as mechanisms by which we can preserve people after they die.

Kubo begins "Don't blink - if you blink for even an instant, if you miss a single thing, our hero will perish." This is not because there is something "important" that happens quickly that you might miss. Maybe there is, but it's not the point. The point is that Kubo is telling a story about people. Those people are now dead. And insofar as those people are able to be kept alive, it is by preserving as much of their personhood as possible - by remembering as much as possible from their life.

This is generally how I think about death.

Cryonics is an attempt at the ultimate form of preserving someone's pattern forever, but in a world pre-cryonics, the best you can reasonably hope for is for people to preserve you so thoroughly in story that a young person from the next generation can hear the story, and palpably feel the underlying character, rich with inner life. Can see the person so clearly that he or she comes to live inside them.

Realistically, this means a person degrades with each generation. Their pattern is gradually distorted. Eventually it is forgotten.

Maybe this is horrendously unsatisfying - it should be. Stories are not a very high-fidelity storage device. Most of what made the person an agent is gone.

But not necessarily - if you choose to not just remember humorous anecdotes about a person, but to remember what they cared about, you can be a channel by which that person continues to act upon the world. Someone recently pointed this out as a concrete reason to respect the wishes of the dead - as long as there are people enacting that person's will, there is some small way in which they meaningfully still exist.

This is part of how I chose to handle the Solstices that I lead myself: Little Echo, Origin of Stories, and Endless Lights are stories/songs touching on this theme. They don't work for everyone but they work for me. It's an unsatisfying concept but it's what we have.

This is what struck me:

I know no stories of my great great grandparents.

I do know stories of ancient generals and philosophers and artists and other famous people - people who lived such a captivating life that people wrote biographies about them.

I know stories about my grandmothers. I know stories about my great grandmothers. But one step beyond that... nothing. I never knew my great great grandparents, never had reason to ask about them. And I think it is probably too late - I think I could perhaps collect some stories of great-great-grandparents on my father's side. On my mother's side... it's possible I could track them down but I doubt it.

And as things go, this isn't hugely upsetting to me. These are people I never met, and in all honesty it seems less pressing to preserve them than to cultivate the relationships I have in the near and now, and to save what lives I can who have yet to die in the first, physical fashion.

But, these are people who are dead forever. When fades at last the last lit sun, there will not be anyone to remember them.

And that's sad.

Replies from: weft
comment by weft · 2017-12-31T17:02:31.565Z · LW(p) · GW(p)

One of the things that makes Realistically Probably Not Having Kids sad is that I'm pretty much the last of the line on my Dad's side. And I DO know stories (not much, but some) of my great-great-grandparents. Sure, I can write them down, so they exist SOMEWHERE. But in reality, when I die, that line and those stories die with me.

Replies from: Raemon
comment by Raemon · 2017-12-31T17:51:40.038Z · LW(p) · GW(p)

I wanted to just reply something like "<3" and then became self-conscious of whether that was appropriate for LW.

Replies from: habryka4
comment by habryka (habryka4) · 2018-01-01T01:51:15.312Z · LW(p) · GW(p)

Seems good to me.

Replies from: Raemon
comment by Raemon · 2018-01-01T01:52:47.293Z · LW(p) · GW(p)

In particular, I think if we make the front-page comments section filtered by "curated/frontpage/community" (i.e. you only see community-blog comments on the frontpage if your frontpage is set to community), then I'd feel more comfortable posting comments like "<3", which feels correct to me.

comment by Raemon · 2024-04-28T23:50:09.868Z · LW(p) · GW(p)

Yesterday I was at a "cultivating curiosity" workshop beta-test. One concept was "there are different mental postures you can adopt, that affect how easy it is to notice and cultivate curiosities."

It wasn't exactly the point of the workshop, but I ended up with several different "curiosity-postures", that were useful to try on while trying to lean into "curiosity" re: topics that I feel annoyed or frustrated or demoralized about.

The default stances I end up with when I Try To Do Curiosity On Purpose are something like:

1. Dutiful Curiosity (which is kinda fake, although capable of being dissociatedly autistic and noticing lots of details that exist and questions I could ask)

2. Performatively Friendly Curiosity (also kinda fake, but does shake me out of my default way of relating to things. In this, I imagine saying to whatever thing I'm bored/frustrated with "hullo!" and try to acknowledge it and give it at least some chance of telling me things)

But some other stances to try on, that came up, were:

3. Curiosity like "a predator." "I wonder what that mouse is gonna do?"

4. Earnestly playful curiosity. "oh that [frustrating thing] is so neat, I wonder how it works! what's it gonna do next?"

5. Curiosity like "a lover". "What's it like to be that you? What do you want? How can I help us grow together?"

6. Curiosity like "a mother" or "father" (these feel slightly different to me, but each is treating [my relationship with a frustrating thing] like a small child who is bit scared, who I want to help, who I am generally more competent than but still want to respect the autonomy of."

7. Curiosity like "a competent but unemotional robot", who just algorithmically notices "okay what are all the object level things going on here, when I ignore my usual abstractions?"... and then "okay, what are some questions that seem notable?" and "what are my beliefs about how I can interact with this thing?" and "what can I learn about this thing that'd be useful for my goals?"

comment by Raemon · 2019-07-31T01:46:37.665Z · LW(p) · GW(p)

I started writing this a few weeks ago. By now I have other posts that make these points more cleanly in the works, and I'm in the process of thinking through some new thoughts that might revise bits of this.

But I think it's going to be awhile before I can articulate all that. So meanwhile, here's a quick summary of the overall thesis I'm building towards (with the "Rationalization" and "Sitting Bolt Upright in Alarm" [LW · GW] post, and other posts and conversations that have been in the works).

(By now I've had fairly extensive chats with Jessicata and Benquo and I don't expect this to add anything that I didn't discuss there, so this is more for other people who're interested in staying up to speed. I'm separately working on a summary of my current epistemic state after those chats)

  • The rationalsphere isn't great at applying rationality to its own internal politics
    • We don't seem to do much better than average. This seems like something that's at least pretty sad, even if it's a true brute fact about the world.
    • There have been some efforts to fix this fact, but most of it has seemed (to me) to be missing key facts about game theory, common knowledge, theory of mind, and some other topics that I see as necessary to solve the problem.
  • Billions of dollars are at stake, which creates important distortions that need addressing
    • The rationality and EA communities are valuable, in large part, because there is an opportunity for important ideas to influence the world-stage, moving millions or billions of dollars (or causing millions of dollars worth of stuff to happen). But, when billions of dollars are at stake, you start to attract world-class opportunists trying to coopt you (and community members start feeling pressure to conform to social reality on the world-stage), which demands world-class ability to handle subtle political pressures to preserve that value.
      • [epistemic status: I'm not sure whether I endorse the rhetoric here. Maybe you don't need to be world class, but you probably need to be at least 75th percentile, and/or become more illegible to the forces that would try to coopt you]
  • By default, we don't seem very good at allocating attention re: these issues. But the attempts I've seen to address this seem doomy.
    • One of the default failure modes I've seen is that, when people don't pay attention to a given call-for-clarity about "hey, we seem to be acting in ways that distort truth in predictable ways", the response is to jump all the way to statements like "EA has a lying problem," which I think is both untrue and anti-helpful for preserving a truthseeking space.
      • In that case Sarah later wrote up a followup post that was more reasonable and Benquo wrote up a post that articulated the problem more clearly. [Can't find the links offhand]. But it was a giant red flag for me that getting people to pay attention required sensationalizing the problem. It seemed to me that this was following an incentive gradient identical to political news. This seemed roughly as bad for truthseeking as the original problem Sarah was trying to address was.
    • The "Rationalization/Sitting-bolt-upright" post was intended to provide an outlet for that sort of impulse that was at less counterproductive (in the interim before figuring out a more robust solution).
    • A first guess at a "robust solution" is something like "develop clear norms for good, fair practices to critiquing organizations." If you meet those norms, posts on LessWrong that deal with local politics can get curated.
  • By default, people use language for both truthseeking and for politics. It takes special effort to keep things truth-focused
    • A primary lesson I learned from the sequences is that most people's beliefs and statements are not about truth at all ("Science as attire [LW · GW]", "Fable of Science and Politics [LW · GW]", etc). Most of the places where the rationalsphere seems most truth-tracking are where it sidesteps this issue, rather than really solving it. Attempting to directly jump towards "well we just use words for truth, not politics" sounds to me about as promising as writing the word 'cold' on a fridge [LW · GW].
    • Relatedly, I think people struggle to stay in a truthseeking frame when they are feeling defensive. One person being defensive makes it 2-30x harder to remain truth-oriented. Multiple people being defensive adds up that difficulty at least linearly, and potentially compounds it in weirder ways. I think this is challenging enough that it requires joint effort to avoid.
      • A truthseeking space that can actually discuss politics sanely needs both individuals who are putting special effort to avoid being defensive, and conversation partners that practice avoiding unnecessarily* provoking defensiveness.
        • *where by "unnecessary" I mean: "if your subject matter is inherently difficult to hear, you shouldn't avoid saying it. But you should avoid saying it with rhetoric that is especially liable to inflame the conversation." (e.g. "I think your project is net-harmful" is fine. "I think your project is stupid and I can't believe you wasted our time on it" is making the conversation 20x harder, unnecessarily.)
          • Yes, this is hard and doesn't come naturally to everyone. But I think it's at least approximately as hard as learning to avoid getting defensive is (and I would guess the low-hanging fruit is actually comparatively easy). I think if a truthseeking space doesn't ask people to at least pick up the low-hanging fruit here, it will be less effective as a truthseeking space.
      • I don't think this is necessary for all conversations, but it's escalatingly important the less the participants trust each other and the higher the stakes.
      • If conversation participants are not making efforts to pass each other's Ideological Turing Test, my default assumption is no progress will get made
  • Communicating between frames/aesthetics/ontologies is very hard
    • Common knowledge of 'Double Crux [LW · GW]' has made it somewhat easier to resolve gnarly disagreements, but I still frequently observe rationalists (myself included) just completely talking past each other, not noticing, and then either getting really frustrated, or assuming bad faith when the actual problem is significantly different world models.
    • There's something of a skill to identifying what framework you are working under, which is somewhat separate from the usual doublecrux process.
    • I also think there's a skill to figuring out how to backpropagate facts into your frame/taste/and-or/ontology, which I think is helpful for resolving major frame disagreements. (But dramatically more helpful if both parties are doing it)
  • Difficulty communicating between frames exacerbates the difficulty of discussing local politics sanely
    • Different frames have quite different takes on which rocks and hard places [LW · GW] are more scary. By default, when the frames bump into each other, they see each other as trampling all over each other's obvious needs.
    • Meanwhile, someone communicating in a different frame from you will seem to be missing the point, or subtly off, in a way that is hard to put your finger on, which makes the whole communicating process feel like moving through molasses.
    • I think having more people with the taste/ontology doublecrux skill would enable more trust that conversations across frames are happening in good faith
  • Counterfactual revolutions are super important. Real revolutions are mostly bad.
    • Despite all the above, we're unlikely to reach a state where everyone can easily communicate across frames. Even if we did, it wouldn't guarantee that people actually were talking in good faith – sometimes people actually are actively-deceptive, or stuck in a harmful pattern that they can't escape from. This is particularly problematic when they're in power.
    • I think we're in a rare, precious world where it's actually achievable for the major power centers in the EA space to communicate sanely to each other.
    • So, it's important to have a gameboard with rules that everyone respects, but simultaneously, it's important that there be a real threat of people kicking the gameboard over if the game seems rigged.
Replies from: Zack_M_Davis, Dagon
comment by Zack_M_Davis · 2019-07-31T02:40:28.846Z · LW(p) · GW(p)

In that case Sarah later wrote up a followup post that was more reasonable and Benquo wrote up a post that articulated the problem more clearly. [Can't find the links offhand].

"Reply to Criticism on my EA Post", "Between Honesty and Perjury"

Replies from: Raemon
comment by Raemon · 2019-07-31T04:05:18.931Z · LW(p) · GW(p)

Thanks! I do still pretty* much endorse "Between Honesty and Perjury."

*avoiding making a stronger claim here since I only briefly re-read it and haven't re-thought-through each particular section and claim. But the overall spirit it's pointing to is quite important.

[Edit: Ah, well, in the comments there I apparently expressed some specific agreements and disagreements that seems... similar in shape to my current agreement and disagreement with Ben. But I think in the intervening years I've updated a bit towards "EA's epistemic standards should be closer to Ben's standards than I thought in 2017."]

comment by Dagon · 2019-07-31T16:13:51.776Z · LW(p) · GW(p)

Thank you for the effort and clarity of thought you're putting into this. One thing you may already be considering, but I haven't seen it addressed directly:

Hobbyists vs fanatics vs professionals (or core/periphery, or founders/followers/exploiters, or any other acknowledgement of different individual capabilities and motives). What parts of "the community" are you talking about when you address various issues? You hint at this in the money/distortion topic, but you're in danger of abstracting "motivation" way too far, and missing the important details of individual variation.

Also, it's possible that you're overestimating the need for legibility of reasoning over correctness of action (in the rational sense, of furthering one's true goals). I very much dispute "We don't seem to do much better than average", unless you're seriously cherry-picking your reference set. We do _WAY_ better than average both in terms of impact and in terms of transparency of reasoning. I'd love to explore some benchmarks (and copy some behaviors) if you can identify groups with similar composition and similar difficult-to-quantify goals, that are far more effective.

comment by Raemon · 2018-01-27T07:34:15.539Z · LW(p) · GW(p)

Conversation with Andrew Critch today, in light of a lot of the nonprofit legal work he's been involved with lately. I thought it was worth writing up:

"I've gained a lot of respect for the law in the last few years. Like, a lot of laws make a lot more sense than you'd think. I actually think looking into the IRS codes would actually be instructive in designing systems to align potentially unfriendly agents."

I said "Huh. How surprised are you by this? And curious if your brain was doing one particular pattern a few years ago that you can now see as wrong?"

"I think mostly the laws that were promoted to my attention were especially stupid, because that's what was worth telling outrage stories about. Also, in middle school I developed this general hatred for stupid rules that didn't make any sense and generalized this to 'people in power make stupid rules', or something. But, actually, maybe middle school teachers are just particularly bad at making rules. Most of the IRS tax code has seemed pretty reasonable to me."

Replies from: Jiro
comment by Jiro · 2019-07-23T16:01:48.593Z · LW(p) · GW(p)

I think there's a difference between "Most of the IRS tax code is reasonable" and "Most of the instances where the IRS tax code does something are instances where it does reasonable things." Not all parts of the tax code are used equally often. Furthermore, most unreasonable instances of a lot of things will be rare as a percentage of the whole because there is a large set of uncontroversial background uses. For instance, consider a completely corrupt politician who takes bribes--he's not going to be taking a bribe for every decision he makes and most of the ones he does make will be uncontroversial things like "approve $X for this thing which everyone thinks should be approved anyway".

comment by Raemon · 2019-12-06T02:37:14.283Z · LW(p) · GW(p)

Over in this thread, Said asked [LW(p) · GW(p)] the reasonable question "who exactly is the target audience with this Best of 2018 book?"

By compiling the list, we are saying: “here is the best work done on Less Wrong in [time period]”. But to whom are we saying this? To ourselves, so to speak? Is this for internal consumption—as a guideline for future work, collectively decided on, and meant to be considered as a standard or bar to meet, by us, and anyone who joins us in the future? 

Or, is this meant for external consumption—a way of saying to others, “see what we have accomplished, and be impressed”, and also “here are the fruits of our labors; take them and make use of them”? Or something else? Or some combination of the above?

I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, what aspects of it are most valuable. 

But, a quick "best guess" answer for now.

I see the overall review process as having two "major phases":

  • Phase 1: Nomination/Review/Voting/Post-that-summarizes-the-voting
  • Phase 2: Compilation and Publication

I think the first phase should be oriented entirely around "internal consumption" – figuring out what epistemic standard to hold ourselves to, and how, so that we can do better in the future. (As well as figuring out what ideas we've developed that should be further built upon). Any other benefits are incidental.

The final book/sequence is at least somewhat externally facing. I do expect it to be some people's first introduction to LessWrong, and other people's "one thing they read from LW this year". And at least some consideration should be given to those people's reading experience (which will be lacking a lot of context). But my guess is that should come more in the form of context-setting editor commentary than in decisions about what to include.

I think “here are the fruits of our labors; take them and make use of them” is more of what I was aiming for. (Although "what standards are we internally holding ourselves to, and what work should we build towards?" is still an important function of the finished product). It'd be nice if people were impressed, but a better frame for that goal is "Outsiders looking in can get an accurate picture of how productive our community is, and what sort of things we do", and maybe they are impressed by that or maybe not. 

(I realize this comment doesn't really address the broader questions underlying the discussion, but wanted to at least get this out there. My preference is to spend the next few days actually attempting to write a variety of reviews, so that next week I have more examples to point to when writing up a more comprehensive post about what sorts of reviews I think are most useful and why)

Replies from: SaidAchmiz, None
comment by Said Achmiz (SaidAchmiz) · 2019-12-06T03:39:42.670Z · LW(p) · GW(p)

Thank you, this is a useful answer.

comment by [deleted] · 2019-12-06T21:57:10.410Z · LW(p) · GW(p)

I'm looking forward to a bookshelf with LW review books in my living room. If nothing else, the very least this will give us is legitimacy, and legitimacy can lead to many good things.

Replies from: Hazard
comment by Hazard · 2019-12-06T23:08:48.578Z · LW(p) · GW(p)

+1 excitement about bookshelves :)

comment by Raemon · 2019-08-17T03:34:04.860Z · LW(p) · GW(p)

I've posted this on Facebook a couple times but seems perhaps worth mentioning once on LW: A couple weeks ago I registered the domain LessLong.com and redirected it to LessWrong.com/shortform. :P

comment by Raemon · 2021-06-17T06:43:21.292Z · LW(p) · GW(p)

A thing I might have maybe changed my mind about:

I used to think a primary job of a meetup/community organizer was to train their successor, and develop longterm sustainability of leadership.

I still hold out for that dream. But, it seems like a pattern is:

1) community organizer with passion and vision founds a community

2) they eventually move on, and pass it on to one successor who's pretty closely aligned and competent

3) then the First Successor has to move on too, and then... there isn't anyone obvious to take the reins, but if no one does the community dies, so some people reluctantly step up. And....

...then forever after it's a pale shadow of its original self.

For semi-branded communities (such as EA, or Rationality), this also means that if someone new with energy/vision shows up in the area, they'll see a meetup, they'll show up, they'll feel like the meetup isn't all that good, and then move on. Whereas they (maybe??) might have founded a new one that they got to shape the direction of more.

I think this also applies to non-community organizations (i.e. founder hands the reins to a new CEO who hands the reins to a new CEO who doesn't quite know what to do)

So... I'm kinda wondering if second-generation successors should just... err on the side of shutting the thing down when they leave, rather than trying desperately to find a replacement.

The answer isn't obvious. There is value that continues to be created by the third+ generation. I think I've mostly gone from "having a firm opinion that you should be proactively training your successor" to "man, I dunno, finding a suitable successor is actually pretty hard, mrrr?"

Replies from: Pattern, mikkel-wilson, Dagon, mikkel-wilson
comment by Pattern · 2021-06-18T21:04:06.893Z · LW(p) · GW(p)

What if the replacement isn't a replacement? If only a different person/people with a different vision/s can be found then...why not that?

Or, what does the leader do, that can't be carried on?

comment by MikkW (mikkel-wilson) · 2021-06-17T18:37:04.634Z · LW(p) · GW(p)

Reading this makes me think of organizations which manage to successfully have several generations of  competent leadership. Something that has struck me for a while is the contrast in long-term competence between republics (not direct democracies) and hereditary monarchies.

Reading through history, hereditary monarchies always seem to fall into the problem you describe, of incompetent and (physically and mentally) weak monarchs being placed at the head of a nation, leading to a lot of problems. Republics, in contrast, almost always have competent leaders - one might disagree with their goals, and they are too often appointed after their prime, when their health is declining [1], but the leaders of republics are almost always very competent people.

This makes life much better for the people in the republic, and may be in part responsible for the recent proliferation of republics (though it does raise the question of why that hasn't happened sooner. Maybe the robust safeguards implemented by the Founding Fathers of the USA in their constitution were a sufficiently non-obvious, but important, social technology, to be able to make republics viable on the world stage? [2]).

A key difference between monarchies and republics is that each successive generation of leadership in a republic must win an intense competition to secure their position, unlike the heirs of a monarchy. Not only this, but the competitions are usually held quite often (for example, every 4 years in Denmark, every 3 years in New Zealand), which ensures that the competitive nature of the office is kept in the public mind very frequently, making it hard to become a de facto hereditary position. By holding a competition to fill the office, one ensures that, even if the leader doesn't share the same vision as the original founder, they still have to be very competent to be appointed to the position.

I contend that the usual way of appointing successors to small organizations (appointment by the previous leader) and to corporations (elected, but by a small body in a usually non-competitive fashion that is more similar to being appointed on a personal basis) is insufficiently competitive, and so is more similar to a hereditary monarchy than a republic, in this way.

--- 

[1] - This (The fact that the leaders of republics are often elected when their health is in decline) makes me think it may be a good idea to have a constitutional maximum age, after which individuals cannot be elected to certain important offices, to ensure that only people who are in their prime (and hence likely sufficiently healthy) can lead the nation.

[2] - The existence of elective monarchies is also suggestive that the theory may be meaningful, but it again raises the question of why elective monarchies weren't more prominent. Maybe in practice elective monarchies were so likely to become effectively hereditary monarchies in all but name (cf. the Hungarian kingdom and the Holy Roman Empire) that they didn't distinguish themselves enough to have a competitive advantage.

Replies from: Pattern
comment by Pattern · 2021-06-18T21:09:09.911Z · LW(p) · GW(p)

[1]

Does this demonstrate:

  • a lack of younger leaders
  • older people have better shown themselves (more time in which to do so, accumulate trust, etc.)
  • ?
  • Elections (by means of voters) intentionally choose old leaders because that limits how long they can hold the position, or forces them to find a successor or delegate?

[2]

George Washington's whole "only twice" thing almost seems more deliberate here. Wonder what would have happened if a similar check had been placed on political parties.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-18T23:53:39.356Z · LW(p) · GW(p)

Regarding [1], people tend to vote for candidates they know, and politicians start out with 0 name recognition, which increases monotonically with age, always increasing but never decreasing, inherently biasing the process towards older candidates.

The two-term limit was actually not intended by Washington to become a tradition, he retired after his second term because he was declining in health. It was only later that it became expected for presidents not to serve more than 2 terms. I do think the term limit on the presidency is an important guard in maintaining the competitive and representative nature of the office, and I think it's good to wonder if extending term limits to other things can be beneficial, though I am also aware of arguments pushing in the opposite direction

Replies from: Raemon
comment by Raemon · 2021-06-19T07:23:05.748Z · LW(p) · GW(p)

The two-term limit was actually not intended by Washington to become a tradition, he retired after his second term because he was declining in health.

Citation? (I've only really read American Propaganda about this so not very surprised if this is the case, but hadn't heard it before)

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-19T08:18:20.669Z · LW(p) · GW(p)

From Wikipedia: George Washington, which cites Korzi, Michael J. (2011). Presidential Term Limits in American History: Power, Principles, and Politics page 43, -and- Peabody, Bruce G. (September 1, 2001). "George Washington, Presidential Term Limits, and the Problem of Reluctant Political Leadership". Presidential Studies Quarterly. 31 (3): 439–453:

At the end of his second term, Washington retired for personal and political reasons, dismayed with personal attacks, and to ensure that a truly contested presidential election could be held. He did not feel bound to a two-term limit, but his retirement set a significant precedent. Washington is often credited with setting the principle of a two-term presidency, but it was Thomas Jefferson who first refused to run for a third term on political grounds.

A note on the part that says "to ensure that a truly contested presidential election could be held": at this time, Washington's health was failing, and he indeed died during what would have been his 3rd term if he had run for a 3rd term. If he had died in office, he would have been immediately succeeded by the Vice President, which would set an unfortunate precedent of presidents serving until they die, then being followed by an appointed heir until that heir dies, blurring the distinction between the republic and a monarchy.

Replies from: Raemon
comment by Raemon · 2021-06-19T11:09:14.342Z · LW(p) · GW(p)

Thanks!

comment by Dagon · 2021-06-17T16:29:21.257Z · LW(p) · GW(p)

What's different for the organizer and first successor, in terms of their ability to do the primary job of finding their successor?  I also note the pattern you mention (one handoff mostly succeeds, community degrades rapidly around the time the first successor leaves with no great second successor).  But I also have seen a lot of cases where the founder fails to hand off in the first place, and some where it's handed off to a committee or formal governance structure, and then eventually dies for reasons that don't seem caused by succession.

I wonder if you've got the causality wrong - communities have a growth/maintenance/decline curve, which varies greatly in the parameters, but not so much in the shape.  It seems likely to me that the leaders/organizers REACT to changes in the community by joining, changing their involvement, or leaving, rather than causing those changes.

Replies from: lincolnquirk
comment by lincolnquirk · 2021-06-17T16:45:46.370Z · LW(p) · GW(p)

I'm not Ray, but I'll take a stab --

The founder has a complete vision for the community/meetup/company/etc. They were able to design a thing that (as long as they continue putting in energy) is engaging, and they instinctively know how to change it so that it continues being great for participants.

The first successor has an incomplete, operational/keep-things-running-the-way-they-were type vision. They cargo-cult whatever the founder was doing. They don't have enough vision to understand the 'why' behind all the decisions. But putting your finger on their precise blind spot is quite hard. It's their "fault" (to the extent that we can blame anyone) that things go off the rails, but their bad decision-making doesn't actually have short term impacts that anyone can see. Instead, the impacts come all at once, once they disappear, and it becomes common knowledge that it was a house of cards the whole time.

(or something. my models are fairly imprecise on this.)

Anyway, why did the founder get fooled into anointing the first successor even though they don't have the skills to continue the thing? My guess is that there's a fairly strong selection effect for founders combined with "market fit" -- founders who fail to reach this resonant frequency don't pick successors, they just fail. Whatever made them great at building this particular community doesn't translate into skills at picking a successor, and that resonance may not happen to exist in any other person. Another founder-quality person would not necessarily have resonated with the existing community's frequency, so there could also be an anti-selection effect there.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-06-17T17:42:05.358Z · LW(p) · GW(p)

My model differs from yours. In my view, the first successor isn't the source of most problems. The first successor usually has enough interaction and knowledge transfer from the founder, that they are able to keep things working more-or-less perfectly fine during their tenure, but they aren't able to innovate and create substantial new value, since they lack the creativity and vision of the founder. In your terms, they are cargo-culting, but they are able to cargo-cult sufficiently well to keep the organization running smoothly; but when the second (and nth) successor comes in, they haven't interacted much directly with the original founder, but instead are basing their decisions based, at most, on a vague notion of what the founder was like (though are often better served when they don't even try to follow in the footsteps of the founder), and so are unable to keep things working according to the original vision. They are cargo-culting a cargo-cult, which isn't enough to keep things working the way they need to work, at which point the organization stops being worth keeping around.

During the reign of the founder, the slope of the value created over time is positive, during the reign of the first successor, the slope is approximately zero, but once the second successor and beyond take over, the slope will be negative.

comment by MikkW (mikkel-wilson) · 2021-06-17T17:47:08.045Z · LW(p) · GW(p)

My read on this is that it's still obviously worthwhile to train a successor, but to consider giving them clear instructions to shut down the group when it's time for them to move on, to avoid the problems that come with 3rd-generational leadership.

comment by Raemon · 2022-09-06T23:42:26.753Z · LW(p) · GW(p)

Posts I vaguely want to have been written so I can link them to certain types of new users:

  • "Why you can chill out about the basilisk and acausal blackmail." (The current Roko's Basilisk [? · GW] kinda tries to be this, but there's a type of person who shows up on LessWrong regularly who's caught in an anxious loop that keeps generating more concerns, and I think the ideal article here is more trying to break them out of the anxious loop than comprehensively explain the game theory.)
  • "FAQ: Why you can chill out about quantum immortality and everything adds up to normality." (Similar, except the sort of person who gets worked up about this is usually having a depressive spiral and worried about being trapped in an infinite hellscape)
comment by Raemon · 2019-08-21T03:38:26.081Z · LW(p) · GW(p)

Crossposted from my Facebook timeline (and, in turn, crossposted there from vaguely secret, dank corners of the rationalsphere)

“So Ray, is LessLong ready to completely replace Facebook? Can I start posting my cat pictures and political rants there?”

Well, um, hmm....

So here’s the deal. I do hope someday someone builds an actual pure social platform that’s just actually good, that’s not out-to-get you, with reasonably good discourse. I even think the LessWrong architecture might be good for that (and if a team wanted to fork the codebase, they’d be welcome to try)

But LessWrong shortform *is* trying to do a bit of a more nuanced thing than that.

Shortform is for writing up early stage ideas, brainstorming, or just writing stuff where you aren’t quite sure how good it is or how much attention to claim for it.

For it to succeed there, it’s really important that it be a place where people don’t have to self-censor or stress about how their writing comes across. I think intellectual progress depends on earnest curiosity, exploring ideas, sometimes down dead ends.

I even think it involves clever jokes sometimes.

But... I dunno, if I looked ahead 5 years and saw that the Future People were using shortform mostly for cat videos and political rants, I’d have a sense that we fucked up somehow.

Hopefully the karma system will naturally reward shortform that’s thought provoking and interesting. If we get deluged in politics and cats and worse things I can’t predict, we might add some features that affect what shortform content gets most highly promoted.

But the bottom line is the whole thing is an experiment. The important bits to know for now are:

1) use some good judgment

2) don’t stress too much about it – if it turns out to be a problem we’ll most likely try to solve it via positive-selection tech

3) Shortform is one piece of the overall intellectual pipeline. We’re also looking into things we can do to improve LessWrong’s reward for the upper end of the rigor spectrum

comment by Raemon · 2019-07-14T18:36:44.494Z · LW(p) · GW(p)

Just spent a weekend at the Internet Intellectual Infrastructure Retreat. One thing I came away with was a slightly better sense of forecasting and prediction markets, and how they might be expected to unfold as an institution.

I initially had a sense that forecasting, and predictions in particular, was sort of "looking at the easy-to-measure/think-about stuff, which isn't necessarily connected to the stuff that matters most."

Tournaments over Prediction Markets

Prediction markets are often illegal or sketchily legal. But prediction tournaments are not, so this is how most forecasting is done.

The Good Judgment Project

Held an open tournament, the winners of which became "Superforecasters". Those people now... I think basically work as professional forecasters, who rent out their services to companies, NGOs and governments that have a concrete use for knowing how likely a given country is to go to war, or something. (I think they'd been hired sometimes by Open Phil?)

Vague impression that they mostly focus on geopolitics stuff?

High Volume and Metaforecasting

Ozzie described a vision where lots of forecasters are predicting things all the time, which establishes how calibrated they are. This lets you do things like "have one good forecaster with a good track record make lots of predictions. Have another meta-forecaster evaluate a small sample of their predictions to sanity check that they are actually making good predictions", which could get you a lot of predictive power for less work than you'd expect.
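To make that spot-checking mechanism concrete, here's a minimal sketch in Python (this is my own illustration, not anything Ozzie specified – the data format, function names, and pass/fail threshold are all made up): a meta-forecaster grades only a random sample of a forecaster's resolved predictions, using a Brier score as the calibration check.

```python
import random

def brier_score(graded):
    """Mean squared error between stated probability and the 0/1 outcome (lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in graded) / len(graded)

def spot_check(resolved_predictions, sample_size=20, threshold=0.2, seed=0):
    """Meta-forecaster workflow: grade a random sample of someone's resolved
    predictions instead of re-judging all of them."""
    rng = random.Random(seed)
    sample = rng.sample(resolved_predictions, min(sample_size, len(resolved_predictions)))
    score = brier_score(sample)
    # 0.25 is the score you'd get by always saying 50%, so a decent forecaster
    # should come in well under that; the threshold here is arbitrary.
    return score, score <= threshold

# Hypothetical data: (stated probability, actual outcome as 0 or 1)
history = [(0.8, 1), (0.3, 0), (0.9, 1), (0.6, 0), (0.7, 1)] * 10
score, passes = spot_check(history)
print(f"Brier score on sample: {score:.3f}; passes sanity check: {passes}")
```

The point of the sketch is just that the sanity check scales with the sample size rather than with the forecaster's total output, which is where the "a lot of predictive power for less work than you'd expect" comes from.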

This seemed interesting, but I still had some sense of "But how do you get all these people making all these predictions? The prediction markets I've seen don't seem to accomplish very interesting things, for reasons Zvi discussed here." Plus I'd heard that sites like Metaculus end up being more about gaming the operationalization rules than about actually predicting things accurately.

Automation

One thing I hadn't considered is that Machine Learning is already something like high volume forecasting, in very narrow domains (i.e. lots of bots predicting which video you'll click on next). One of Ozzie's expectations is that over time, as ML improves, it'll expand the range of things that bots can predict. So some of the high volume can come from automated forecasters.

Neural nets and the like might also be able to assist in handling the tricky "operationalization bits", where you take a vague prediction like "will country X go to war against country Y" and turn that into the concrete observations that would count for such a thing. Currently this takes a fair amount of overhead on Metaculus. But maybe at some point this could get partly automated.

(there wasn't a clear case for how this would happen AFAICT, just 'i dunno neural net magic might be able to help.' I don't expect neural-net magic to help here in the next 10 years but I could see it helping in the next 20 or 30. I'm not sure if it happens much farther in advance than "actual AGI" though)

I [think] part of the claim was that for both the automated-forecasting and automated-operationalization, it's worth laying out tools, infrastructure and/or experiments now that'll set up our ability to take advantage of them later.

Sweeping Visions vs Near-Term Practicality, and Overly Narrow Ontologies

An aesthetic disagreement I had with Ozzie was:

My impression is that Ozzie is starting with lots of excitement for forecasting as a whole, and imagining entire ecosystems built out of it. And... I think there's something important and good about people being deeply excited for things, exploring them thoroughly, and then bringing the best bits of their exploration back to the "rest of the world."

But when I look at the current forecasting ecosystem, it looks like the best bits of it aren't built out of sweeping infrastructural changes; they're built by small internal teams making tools that work for them, or by consulting firms of professionals who hire themselves out. (Good Judgment project being one, and the How To Measure Anything guy being another)

The problem with large infrastructural ecosystems is this general problem you also find on Debate-Mapping sites – humans don't actually think in clean boxes that are easy to fit into database tables. They think in confused thought patterns that often need to meander, explore special cases, and don't necessarily fit whatever tool you built for them to think in.

Relatedly: every large company I've worked at has built internal tools of some sort, even for domains that seem like they sure ought to be able to be automated and sold at scale. Whenever I've seen someone try to purchase enterprise software for managing a product map, it's either been a mistake, or the enterprise software has required a lot of customization before it fit the idiosyncratic needs of the company.

Google sheets is really hard to beat as a coordination tool (but a given google sheet is hard to scale)

So for the immediate future I'm more excited by hiring forecasters and building internal forecasting teams than ecosystem-type websites.

comment by Raemon · 2018-01-18T20:30:00.312Z · LW(p) · GW(p)

More in neat/scary things Ray noticed about himself.

I set aside this week to learn about Machine Learning, because it seemed like an important thing to understand. One thing I knew, going in, is that I had a self-image as a "non technical person." (Or at least, non-technical relative to rationality-folk). I'm the community/ritual guy, who happens to have specialized in web development as my day job but that's something I did out of necessity rather than a deep love.

So part of the point of this week was to "get over myself, and start being the sort of person who can learn technical things in domains I'm not already familiar with."

And that went pretty fine.

As it turned out, after talking to some folk I ended up deciding that re-learning Calculus was the right thing to do this week. I'd learned it in college, but not in a way that connected to anything or gave me a sense of its usefulness.

And it turned out I had a separate image of myself as a "person who doesn't know Calculus", in addition to "not a technical person". This was fairly easy to overcome since I had already given myself a bunch of space to explore and change this week, and I'd spent the past few months transitioning into being ready for it. But if this had been at an earlier stage of my life and if I hadn't carved out a week for it, it would have been harder to overcome.

Man. Identities. Keep that shit small yo.

Replies from: Zvi, pamelafox
comment by Zvi · 2019-07-03T12:19:59.920Z · LW(p) · GW(p)

Also important to note that "learn Calculus this week" is a thing a person can do fairly easily without being some sort of math savant.

(Presumably not the full 'know how to do all the particular integrals and be able to ace the final' perhaps, but definitely 'grok what the hell this is about and know how to do most problems that one encounters in the wild, and where to look if you find one that's harder than that.' To ace the final you'll need two weeks.)

Replies from: Raemon
comment by Raemon · 2019-07-03T18:20:08.880Z · LW(p) · GW(p)

Very confused about why this was downvoted.

Replies from: habryka4, Elo
comment by habryka (habryka4) · 2019-07-03T20:17:17.137Z · LW(p) · GW(p)

Maybe someone thinks that the meme of "everyone can learn calculus" is a really bad one? I remember you being similarly frustrated at the "everyone can be a programmer" meme.

Replies from: SatvikBeri
comment by SatvikBeri · 2019-07-03T22:16:35.871Z · LW(p) · GW(p)

I didn't downvote, but I agree that this is a suboptimal meme – though the prevailing mindset of "almost nobody can learn Calculus" is much worse.

As a datapoint, it took me about two weeks of obsessive, 15 hour/day study to learn Calculus to a point where I tested out of the first two courses when I was 16. And I think it's fair to say I was unusually talented and unusually motivated. I would not expect the vast majority of people to be able to grok Calculus within a week, though obviously people on this site are not a representative sample.

Replies from: Raemon
comment by Raemon · 2019-07-03T22:24:04.235Z · LW(p) · GW(p)

Quite fair. I had read Zvi as speaking to typical LessWrong readership. Also, the standard you seem to be describing here is much higher than the standard Zvi was describing.

comment by Elo · 2019-07-03T21:41:34.555Z · LW(p) · GW(p)

That's not fair.

I don't believe you as a moderator, who can see who's voted, should ever have the right to make the comment that solicits a user to justify their voting behaviour in the way you've done.

Let alone on your own short form feed. Seems a bit selfish, with asymmetric information here.

What's it like for you to be very confused? How's that for you? How did the (confusion) comment add to the discussion?

Replies from: Raemon
comment by Raemon · 2019-07-03T21:46:30.833Z · LW(p) · GW(p)

I can't see who's voted.

(Admins have the power to look at who's voted, but it happens very rarely, and typically* only after checking with another team member that the situation is important enough to warrant it [the most common case being 'someone looks like they're probably a Eugine_Nier sockpuppet'])

I think it's bad form for a person who wrote a post to complain about it getting downvoted. It seems less obviously bad to me for a different person to express confusion about it.

*when I say "typically" I mean "we talked about this being the norm, and everyone agreed to it. Later we onboarded a new person and forgot to initially talk to them about that norm, so they may have looked at some of the votes, but we have since talked about the norm with them. So I can't promise it happens never but it's definitely not a thing that casually happens by default.

Replies from: Ruby, Elo
comment by Ruby · 2019-07-04T02:21:15.864Z · LW(p) · GW(p)

If Ray's talking about me as the newly onboarded member, I can say I didn't examine any individual votes outside of due process. (I recall one such case of due process where multiple users were reporting losing karma on multiple posts and comments - we traced it back to a specific cause.)

I do a lot of the analytics, so when I first joined I was delving into the data, but mostly at the aggregate metrics level. Since I was creating new ways to query the data, Ray correctly initiated a conversation to determine our data handling norms. I believe this was last September.

For further reassurance, I can say that vote data is stored only with inscrutable ID numbers for comments, posts, and users. We have to do multiple lookups/queries if we want to figure who voted on something, which is more than enough friction to ensure we don't ever accidentally see individual votes.

We do look at aggregate vote data that isn't about a user voting on a specific thing, e.g. overall number of votes, whether a post is getting a very large proportion of downvotes (anyone can approximately infer this by comparing karma and number of votes via the hover-over).

comment by Elo · 2019-07-03T22:22:47.592Z · LW(p) · GW(p)

I'd appreciate this information (about looking at votes) being published in meta.

The difference between "confusion" and "complain" is a grey area. I've heard people exclaim, "I'm so confused. This is exciting!" and other times people exclaim, "I'm so confused, this is frustrating".

I suspect you weren't sharing your confusion because you had a fun and jolly sentiment behind it. But being text, it's very hard to tell. (hence the follow up question, "how was that confusion for you?" - which I assume you weren't taking seriously and weren't going to answer, particularly because I put you on the defensive about mod culture and powers)

Two separate comments here:

  1. If users knew more about what the mods were or were not doing, there would be less to bring up in my original comment.
  2. Unclear about why you shared your confusion. What are your motives and in having those motives from a mod-power position, how does that shape the culture around here?
Replies from: Raemon
comment by Raemon · 2019-07-03T22:50:18.397Z · LW(p) · GW(p)

My intent was "I'd be interested in knowing what the reasoning was, but also it's important for downvoters to not feel obligated to share their reasoning if they don't feel like it." That's a bit of a handful to type out every single time I experience it.

I updated the FAQ. But an important note about how I think about all of this is it's *not* actually possible or tractable for everyone to have read everything there is to know about the LW moderation team, nor is it possible/tractable for the LW team to keep everyone on the site fully informed about all of our philosophical and ethical positions.

We've been trying recently to publicly post our most important positions, promises, deliberate-not-promises, etc. But we can't cover everything.

comment by Pamela Fox (pamelafox) · 2018-06-30T22:12:24.411Z · LW(p) · GW(p)

I went on a 4-month Buddhist retreat, and one week covered "Self-images". We received homework that week to journal our self-images - all of them. Every time I felt some sense of self, like "The self that prides itself on being clean" or "The self that's playful and giggly", I'd write it down in my journal. I ended up filling 20 pages over a month period, and learning so much about the many selves my mind/body were trying to convey to the world. I also discovered how often two self-images would compete with each other. Observing the self-images helped them to be less strongly attached.

It sounds like you discovered that yourself this week. You might find such an exercise useful for discovering more of that.

comment by Raemon · 2023-08-14T02:20:53.591Z · LW(p) · GW(p)

High Stakes Value and the Epistemic Commons

I've had this in my drafts for a year. I don't feel like the current version of it is saying something either novel or crisp enough to quite make sense as a top-level post, but wanted to get it out at least as a shortform for now.

There's a really tough situation I think about a lot, from my perspective as a LessWrong moderator. These are my personal thoughts on it.

The problem, in short: 

Sometimes a problem is epistemically confusing and probably has political ramifications, such that the most qualified people to debate it are also in conflict, with billions of dollars on the line. And the situation is really high stakes (i.e. the extinction of humanity), such that it really matters that we get the question right.

Political conflict + epistemic murkiness means that it's not clear what "thinking and communicating sanely" about the problem looks like, and people have (possibly legitimate) reasons to be suspicious of each other's reasoning.

High Stakes means that we can't ignore the problem.

I don't feel like our current rationalist discourse patterns are sufficient for this combo of high stakes, political conflict, and epistemic murkiness. 

Spelling out some concrete examples

Interventions that help with AI extinction risk are often hard to evaluate. Reasonable people can disagree whether a project ranges from "highly net positive" to "highly net negative". Smart people I know have fairly different strategic perspectives on how humanity can survive the 21st century.

Sometimes these disagreements are political – is pivotal acts [LW · GW] a helpful frame or a harmful one [LW · GW]? How suspicious should we be of safetywashing [LW · GW]? 

Sometimes these disagreements are more technical. How will differential technology play out? I've heard some arguments that improving alignment techniques on current-generation ML systems may be negative, because a) it won't actually help align powerful AI systems past the sharp left turn, and meanwhile b) makes it easier and more profitable to deploy AI in ways that could start to escalate beyond our control (killing us in slightly more [LW · GW] mundane ways than the fast takeoff scenarios).

I've heard arguments that even interpretability, which you'd think is a purely positive source of information, is also helpful for capabilities (in particular if the interpretability is actually any good). And maybe you actually need a lot of interpretability before the alignment benefits outweigh the capability gains.

Some disagreements are the intersection of political, technical, and psychological. 

You might argue that people in AGI companies are motivated by excitement over AI, or making money, and are only paying lip service to safety. Your beliefs about this might include "their technical agenda doesn't make any sense to you" as well as "you have a strong guess about what else might be motivating their agenda." [LW(p) · GW(p)] 

You might think AI Risk advocates are advocating pivotal acts because of a mix of trauma [LW · GW], finding politics distasteful [LW · GW], and technical mistakes regarding the intersection of boundaries and game theory [LW · GW].

This is all pretty gnarly, because 

  • The stakes here matter a lot. We're talking about the end of the world and/or the cosmic endowment.
  • Common professional politeness norms typically paper over conflict rather than leaning into it. There aren't (as many) consensus "professional" norms for dealing directly with high stakes conflict in ways that preserve epistemics.
  • A lot of conversation has been going on for a long time, but not everyone's participated in the same arguments, so some people feel like "We've had the object-level arguments and I don't really know where to go other than to say 'it sure looks to me like you're psychologically motivated here'", and others are like "why are you jumping to assumptions about me when AFAICT we haven't even hashed out the object level?"
  • People disagree about what counts as good concrete technical arguments, and I think at least some (though IMO not all) of that disagreement is for fairly reasonable reasons.

I have some guesses about how to think about this, but I feel some confusion about them. And it feels pedagogically bad for this to end with "and therefore, [Insert some specific policy or idea]" rather than "okay, what is the space of considerations and desiderata here?" a la Hold Off On Proposing Solutions [LW · GW].

Replies from: 1a3orn, Chris_Leong
comment by 1a3orn · 2023-08-17T01:21:23.494Z · LW(p) · GW(p)

This intersects sharply with your prior post about feedback loops, I think.

As it is really hard / maybe impossible (???) for individuals to reason well in situations where you do not have a feedback loop, it is really hard / maybe impossible to make a community reason well in a situation without feedback loops.

Like at some point, in a community, you need to be able to point to (1) canonical works that form the foundation of further thought, (2) examples of good reasoning to be imitated by everyone. If you don't have those, you have a sort of glob of memes and ideas and shit that people can talk about to signal that they "get it," but it's all kinda arbitrary and conversation cannot move on because nothing is ever established for sure.

And like -- if you never have clear feedback, I think it's hard to have canonical works / examples of good reasoning other than by convention and social proof. There are works in LW which you have to have read in order to continue various conversations, but whether these works are good or not is highly disputed.

I of course have some proposed ideas for how to fix the situation -- this -- but my proposed ideas would clean out the methods of reasoning and argument with which I disagree, which is indeed the problem.

Replies from: Raemon
comment by Raemon · 2023-08-17T01:35:10.830Z · LW(p) · GW(p)

I of course have some proposed ideas for how to fix the situation -- this -- but my proposed ideas would clean out the methods of reasoning and argument with which I disagree, which is indeed the problem.

I don't have a super strong memory of this, did you have a link? (not sure how directly relevant but was interested)

Replies from: 1a3orn
comment by 1a3orn · 2023-08-17T12:55:18.294Z · LW(p) · GW(p)

Your memory is fine, I was writing badly -- I meant the ideas I would propose rather than the ideas I have proposed by "proposed ideas." The flavor would be something super-empiricist like this [LW · GW], not that I endorse that as perfect. I do think ideas without empirical restraint loom too large in the collective.

comment by Chris_Leong · 2023-08-14T09:16:51.336Z · LW(p) · GW(p)

Have you considered hosting a discussion on this topic? I'm sure you've already had some discussions on this topic, but a public conversation could help surface additional ideas and/or perspectives that could help you make sense of this.

comment by Raemon · 2021-01-18T22:27:14.923Z · LW(p) · GW(p)

Seems like different AI alignment perspectives sometimes are about "which thing seems least impossible."

Straw MIRI researchers: "building AGI out of modern machine learning is automatically too messy and doomed. Much less impossible to try to build a robust theory of agency first."

Straw Paul Christiano: "trying to get a robust theory of agency that matters in time is doomed, timelines are too short. Much less impossible to try to build AGI that listens reasonably to me out of current-gen stuff."

(Not sure if either of these are fair, or if other camps fit this)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-01-19T14:54:51.643Z · LW(p) · GW(p)

'Straw MIRI researchers' seems basically right to me. Though if I were trying to capture all MIRI research I'd probably replace "try to build a robust theory of agency" with "try to get deconfused about powerful general-purpose intelligence/optimization" or "try to ensure that the future developers of AGI aren't flying blind; less like the black boxes of current ML, more like how NASA has to deal with some chaotic wind and weather patterns but the principles and parts of the rocket are fundamentally well-understood".

'Straw Paul Christiano' doesn't sound right to me, but I'm not sure how to fix it. Some things that felt off to me (though maybe I'm wrong about this too):

  • Disagreements about whether MIRI's approach is doomed or too-hard seem smaller and less cruxy to me than disagreements about whether prosaic AGI alignment is doomed.
  • "Timelines are too short" doesn't sound like a crux I've heard before.
  • A better example of a thing I think Paul thinks is pretty doomed is "trying to align AGI in hard-takeoff scenarios". I could see takeoff speed/continuity being a crux: either disagreement about the likelihood of hard takeoff, or disagreement about the feasibility of alignment given hard takeoff.
Replies from: Scott Garrabrant, Raemon
comment by Scott Garrabrant · 2021-01-19T21:27:35.123Z · LW(p) · GW(p)

(I got nerd-sniped by trying to develop a short description of what I do. The following is my stream of thought)

+1 to replacing "build a robust theory" with "get deconfused," and with replacing "agency" with "intelligence/optimization," although I think it is even better with all three. I don't think "powerful" or "general-purpose" do very much for the tagline.

When I say what I do to someone (e.g. at a reunion) I say something like "I work in AI safety, by doing math/philosophy to try to become less confused about agency/intelligence/optimization." (I don't think I actually have said this sentence, but I have said things close.)

I specifically say it with the slashes and not "and," because I feel like it better conveys that there is only one thing that is hard to translate, but could be translated as "agency," "intelligence," or "optimization."

I think it is probably better to also replace the word "about" with the word "around" for the same reason.

I wish I had a better word for "do." "Study" is wrong. "Invent" and "discover" both seem wrong, because it is more like "invent/discover", but that feels like it is overusing the slashes. Maybe "develop"? I think I like "invent" best. (Note that not knowing whether to say "invent" or "discover" is an example of being confused around agency/intelligence/optimization).

I also think I'll replace "try to become" with "make myself."

So, that leads me to "I invent math/philosophy to make myself less confused around agency/intelligence/optimization."

I have no idea what to do with the first part. The first part feels political. In practice, I often say something like "I work in AI safety (so like trying to prevent the robot apocalypse) by...", and I often try to make it boring and just say "AI safety," depending on whether the audience is such that I want them to get the takeaway "Scott has a weird and mathy job that may or may not be about saving the world" vs I want them to bite on the agency part and talk to me about it. 

I also think I jump sometimes between saying alignment, sometimes saying safety, and sometimes saying X-risk, and I am not sure why. I should probably pick one. For some reason I feel much less invested in getting the first half right. Maybe that is just because it is fun to say the robot apocalypse thing, and if I think too hard about it I will realize that is a bad idea.

comment by Raemon · 2021-01-19T17:56:45.800Z · LW(p) · GW(p)

The thing the "timelines are too short" was trying to get at was "it has to be competitive with mainstream AI in order to work" (pretty sure Paul has explicitly said this), with, what I thought was basically a followup assumption of "and timelines are too short to have time to get a competitive thing based off the kind of deconfusion work that MIRI does."

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-01-19T18:36:36.020Z · LW(p) · GW(p)

I'd have thought the Paul-argument is less timeline-dependent than that -- more like 'even if timelines are long, there's no reason to expect any totally new unexplored research direction to pay off so spectacularly that it can compete with the state of the art n years from now; and prosaic alignment seems like it may work, so we should focus more on that until we're confident it's a dead end'.

The base rate of new ideas paying off in a big way, even if they're very promising-seeming at the outset, is super low. It may be useful for some people to pursue ideas like this, but (on my possibly-flawed Paul-model) the bulk of the field's attention should be on AI techniques that already have a proven track record of competitiveness, until we know this is unworkable.

Whereas if you're already confident that scaled-up deep learning in the vein of current ML is unalignable, then base rates are a bit of a moot point; we have to find new approaches one way or another, even if it's hard-in-expectation. So "are scaled-up deep nets a complete dead end in terms of alignability?" seems like an especially key crux to me.

Replies from: RobbBB, Raemon
comment by Rob Bensinger (RobbBB) · 2021-01-19T18:53:34.318Z · LW(p) · GW(p)

Caveat: I didn't run the above comments by MIRI researchers, and MIRI researchers aren't a monolith in any case. E.g., I could imagine people's probabilities in "scaled-up deep nets are a complete dead end in terms of alignability" looking like "Eliezer ≈ Benya ≈ Nate >> Scott >> Abram > Evan >> Paul", or something?

comment by Raemon · 2021-01-19T18:51:15.888Z · LW(p) · GW(p)

Okay, that is compatible with the rest of my Paul model. Does still seem to fit into the ‘what’s least impossible’ frame.

comment by Raemon · 2024-10-11T00:24:31.429Z · LW(p) · GW(p)

Using "cruxiness" instead of operationalization for predictions.

One problem with making predictions is "operationalization." A simple-seeming prediction can have endless edge cases.

For personal predictions, I often think it's basically not worth worrying about it. Write something rough down, and then say "I know what I meant." But, sometimes this is actually unclear, and you may be tempted to interpret a prediction in a favorable light. And at the very least it's a bit unsatisfying for people who just aren't actually sure what they meant.

One advantage of cruxy predictions (aside from "they're actually particularly useful in the first place") is that if you know what decision a prediction was a crux for, you can judge ambiguous resolution based on "would this actually have changed my mind about the decision?"

("Cruxiness instead of operationalization" is a bit overly click-baity. Realistically, you need at least some operationalization, to clarify for yourself what a prediction even means in the first place. But, I think maybe you can get away with more marginal fuzziness if you're clear on how the prediction was supposed to inform your decisionmaking)

⚖ A year from now, in the three months prior, will I have used "cruxiness-as-operationalization" on a prediction, and found it helpful. (Raymond Arnold: 50%)

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-13T18:30:25.852Z · LW(p) · GW(p)

I would phrase this another way, which is that when making a prediction, you need to satisfice operationalization, but should seek to maximize cruxiness. Operationalization just needs to be good enough for the readers (including your future self) to get a good grasp of what you mean. Cruxiness is what makes the prediction worth thinking about. 

comment by Raemon · 2022-09-12T07:55:40.426Z · LW(p) · GW(p)

My personal religion involves two* gods – the god of humanity (who I sometimes call "Humo") and the god of the robot utilitarians (who I sometimes call "Robutil"). 

When I'm facing a moral crisis, I query my shoulder-Humo and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there's no real crisis. For example, some naive young EAs try to be utility monks, donate all their money, never take breaks, only do productive things... but Robutil and Humo both agree that quality intellectual work requires slack and psychological health. (Both to handle crises [LW · GW] and to notice subtle things [LW · GW], which you might need, even in emergencies [LW · GW])

If you're an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on. (i.e. get to the middle point of Tyler Alterman's story here [EA · GW]).

But Humo and Robutil in fact disagree on some things, and disagree on emphasis. 

They disagree on how much effort you should spend to avoid accidentally recruiting people you don't have much use for [EA · GW].

They disagree on how many high schoolers it's acceptable to accidentally fuck up psychologically, while you experiment with a new program to get them into.

They disagree on how hard to push yourself to grow better/stronger/wiser/faster, and how much you should sacrifice to do so.

Humo and Robutil each struggle to understand things differently. Robutil eventually acknowledges you need Slack, but it didn't occur to him initially. His understanding was born in the burnout and tunnel-vision of thousands of young idealists, and in Humo eventually (patiently, kindly) saying "I told you so." (Robutil responds "but you didn't provide any arguments about how that maximized utility!". Humo responds "but I said it was obviously unhealthy!" Robutil says "wtf does 'unhealthy' even mean?")

It took Robutil longer still to consider that perhaps you not only need to prioritize your own wellbeing and your friendships, but you need to prioritize them for their own sake, not just as part of a utilitarian calculus.

Humo struggles to acknowledge that if you spend all your time making sure to uphold deontological commitments to avoid harming the people in your care, then this is in fact measured in real human beings who suffer and die because you took longer to scale up your program. 

In my headcanon, Humo and Robutil are gods who are old and wise, and they got over their naive struggles long ago. They respect each other as brothers. They understand that each of their perspectives is relevant to the overall project of human flourishing. They don't disagree as much as you'd naively expect, but they speak different languages and emphasize things differently. 

Humo might acknowledge that I can't take care of everyone, or even respond compassionately to all the people who show up in my life I don't have time to help. But he says so with a warm, mournful [LW · GW] compassion, whereas Robutil says it with brief, efficient ruthlessness [LW(p) · GW(p)].

I find it useful to query them independently, and to imagine the wise version of each of them as best I can – even if my imagining is but a crude shadow of their idealized platonic selves.

Replies from: Dagon
comment by Dagon · 2022-09-12T20:36:56.718Z · LW(p) · GW(p)

prioritize your own wellbeing and your friendships, but you need to prioritize them for their own sake, not just as part of a utilitarian calculus

Hmm. Does this fully deny utilitarianism? Are these values sacred (more important than calculable tradeoffs), in some way?

I'm not utilitarian for other reasons (I don't believe in comparability of utility, and I don't value all moral patients equally, or fairly, or objectively), but I think you COULD fit those priorities into a utilitarian framework, not by prioritizing them for their own sake, but acknowledging the illegibility of the values and taking a guess at how to calculate with them, and then adjusting as circumstances change.

comment by Raemon · 2020-12-14T02:00:06.528Z · LW(p) · GW(p)

I’ve noticed myself using “I’m curious” as a softening phrase without actually feeling “curious”. In the past 2 weeks I’ve been trying to purge that from my vocabulary. It often feels like I'm cheating, trying to pretend like I'm being a friend when actually I'm trying to get someone to do something. (Usually this is a person I'm working with, and it's not quite adversarial; we're on the same team, but it feels like it degrades the signal of true open curiosity)

Replies from: mr-hire, Zack_M_Davis
comment by Matt Goldenberg (mr-hire) · 2020-12-14T18:55:06.766Z · LW(p) · GW(p)

Have you tried becoming curious each time you feel the urge to say it? Seems strictly better than not being curious.

Replies from: Raemon
comment by Raemon · 2020-12-14T19:39:55.092Z · LW(p) · GW(p)

Dunno about that. On one hand, being curious seems nice on the margin. But, the whole deal here is when I have some kind of agenda I'm trying to accomplish. I do care about accomplishing the agenda in a friendly way. I don't obviously care about doing it in a curious way – the reason I generated the "I'm curious" phrase is because it was an easy hack for sounding less threatening, not because curiosity was important. I think optimizing for curiosity here is more likely to fuck up my curiosity than to help with anything.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-14T19:49:32.192Z · LW(p) · GW(p)

I went through something similar with phrases like "I'm curious if you'd be willing to help me move." While I really meant "I hope that you'll help me move."

My personal experience was that shifting this hope/expectation to a real sense of curiosity ("Hmm, does this person want to help me move?") made it more pleasant for both of us. I became genuinely curious about their answer, and there was less pressure both internally and externally.

comment by Zack_M_Davis · 2020-12-14T02:03:40.434Z · LW(p) · GW(p)

The direct approach: "I'm curious [if/why ...]" → "Tell me [if/why ...]"

Replies from: Raemon
comment by Raemon · 2020-12-14T02:09:23.165Z · LW(p) · GW(p)

I do still feel flinchy about that because it does come across less friendly / overly commanding to me. (For the past few weeks I've been often just deciding the take the hit of being less friendly, but am on the lookout for phrases that feel reasonable on all dimensions)

Replies from: DanielFilan, deluks917
comment by DanielFilan · 2020-12-14T06:29:44.807Z · LW(p) · GW(p)

"Can you tell me [if/why]..."?

comment by sapphire (deluks917) · 2020-12-14T04:53:03.199Z · LW(p) · GW(p)

It basically is a command. So maybe it's a feature that the phrase feels commanding. Though it is a sort of 'soft command' in that you would accept a good excuse to not answer (like 'I am too busy, I will explain later').

Replies from: Raemon
comment by Raemon · 2020-12-14T05:18:23.510Z · LW(p) · GW(p)

I think it's not the case that I really want it to be a command, I want it to be "reveal culture", where, it is a fact that I want to know this thing, and that I think it'd be useful if you told me. But, it's also the case that we are friends and if you didn't want to tell me for whatever reason I'd find a way to work with that.

(the line is blurry sometimes, there's a range of modes I'm in when I make this sort of phrase, some more commandlike than others. But, I definitely frequently want to issue a non-command. The main thing I want to fix is that "I'm curious" in particular is basically a lie, or at least has misleading connotations)

comment by Raemon · 2020-03-04T22:24:55.089Z · LW(p) · GW(p)

Hmm, sure seems like we should deploy "tagging" right about now, mostly so you at least have the option of the frontpage not being All Coronavirus All The Time.

comment by Raemon · 2018-01-07T05:12:24.188Z · LW(p) · GW(p)

So there was a drought of content during Christmas break, and now... abruptly... I actually feel like there's too much content on LW. I find myself skimming down past the "new posts" section because it's hard to tell what's good and what's not and it's a bit of an investment to click and find out.

Instead I just read the comments, to find out where interesting discussion is.

Now, part of that is because the front page makes it easier to read comments than posts. And that's fixable. But I think, ultimately, the deeper issue is with the main unit-of-contribution being The Essay.

A few months ago, mr-hire said (on writing that provokes comments)

Ideas should become comments, comments should become conversations, conversations should become blog posts, blog posts should become books. Test your ideas at every stage to make sure you're writing something that will have an impact.

This seems basically right to me.

In addition to comments working as an early proving ground for an idea's merit, comments make it easier to focus on the idea, instead of getting wrapped up in writing something Good™.

I notice essays on the front page starting with flowery words and generally trying to justify themselves as an essay, when all they actually needed was to be a couple short paragraphs. Sometimes even a sentence.

So I think it might be better if the default way of contributing to LW was via comments (maybe using something shaped sort of like this feed), which then appears on the front page, and if you end up writing a comment that's basically an essay, then you can turn it into an essay later if you want.

Replies from: Raemon, Hazard
comment by Raemon · 2018-01-07T06:18:23.782Z · LW(p) · GW(p)

Relatedly, though, I kinda want aspiring writers on LW to read this Scott Alexander Post on Nonfiction Writing.

comment by Hazard · 2018-02-04T14:32:44.449Z · LW(p) · GW(p)

I ended up back here because I just wrote a short post that was an idea, and then went, "Hmmm, didn't Raemon do a Short Form feed thing? How did that go?"

It might be nice if one could pin their short form feed to their profile.

Replies from: Raemon
comment by Raemon · 2018-02-04T22:49:36.388Z · LW(p) · GW(p)

Yeah, I'm hoping in the not-too-distant future we can just make shortform feeds an official part of less wrong. (Although, I suppose we may also want users to be able to sticky their own posts on their profile page, for various reasons, and this would also enable anyone who wants such a feed to create one, while also being able to create other things like "important things you know about me if you're going to read my posts" or whatever.)

Replies from: Raemon
comment by Raemon · 2019-07-04T05:42:09.206Z · LW(p) · GW(p)

(It's now the distant future, and... maybe we'll finally be getting around to this!)

comment by Raemon · 2019-08-07T05:41:04.249Z · LW(p) · GW(p)

Is... there a compelling difference between Stockholm syndrome and just, like, being born into a family?

Replies from: ChristianKl, leggi, Pattern
comment by ChristianKl · 2019-08-08T09:39:15.028Z · LW(p) · GW(p)

There's little evidence for the Stockholm syndrome effect in general. I wonder whether there's evidence that being born in a family does something.

comment by leggi · 2019-08-07T14:11:45.434Z · LW(p) · GW(p)

That made me laugh! Can't think of much difference in the early years.

comment by Pattern · 2019-08-07T17:19:21.797Z · LW(p) · GW(p)

Perhaps degree of investment. Consider the amount of time it takes for someone to grow up, and the effort involved in teaching them (how to talk, read, etc.). (And before that, pregnancy.)

There is at least one book that plays with this - the protagonist finds out they were stolen from 'their family' as a baby (or really small child), and the people who stole them raised them, and up to that point they had no idea. I don't remember the title.

comment by Raemon · 2020-11-18T03:20:11.307Z · LW(p) · GW(p)

I notice that academic papers have stupidly long, hard-to-read abstracts. My understanding is that this is because there is some kind of norm about papers having the abstract be one paragraph, while the word-count limit tends to be... much longer than a paragraph (250 - 500 words).

Can... can we just fix this? Can we either say "your abstract needs to be a goddamn paragraph, which is like 100 words", or "the abstract is a cover letter that should be about one page long, and it can have multiple linebreaks and it's fine."

(My guess is that the best equilibrium is "People keep doing the thing currently-called-abstracts, and start treating them as 'has to fit on one page', with paragraph breaks, and then also people start writing a 2-3 sentence thing that's more like 'the single actual-paragraph that you'd read if you were skimming through a list of papers.'")

Replies from: avturchin, adamShimi, DanielFilan
comment by avturchin · 2020-11-18T10:47:09.796Z · LW(p) · GW(p)

Some journals, like Futures, require 5 short phrases as highlights summarising key ideas in addition to the abstract. See e.g. here: https://www.sciencedirect.com/science/article/pii/S0016328719303507?via%3Dihub

"Highlights

The stable climate of the Holocene made agriculture and civilization possible. The unstable Pleistocene climate made it impossible before then.

Human societies after agriculture were characterized by overshoot and collapse. Climate change frequently drove these collapses.

Business-as-usual estimates indicate that the climate will warm by 3–4°C by 2100 and by as much as 8–10°C after that.

Future climate change will return planet Earth to the unstable climatic conditions of the Pleistocene and agriculture will be impossible.

Human society will once again be characterized by hunting and gathering."

comment by adamShimi · 2020-11-18T14:56:06.752Z · LW(p) · GW(p)

Another reason is that you're not supposed to put references in the abstract. So if you want people outside your narrow subfield to have a chance at understanding the abstract, you need to reexplain the basic ideas behind the whole research approach. That takes space, and is usually very weird. 

comment by DanielFilan · 2020-11-19T04:41:46.355Z · LW(p) · GW(p)

My sense is that they are not that hard to read for people in the relevant discipline, and there's absolutely no pressure for the papers to be legible to people outside the relevant discipline.

Replies from: Raemon
comment by Raemon · 2020-11-19T04:44:22.211Z · LW(p) · GW(p)

I feel like paragraph breaks in a 400 word document seem straightforwardly valuable for legibility, however well versed you are in a field. If someone posts a wall of text on LW I tell them to break it up even if it's my field.

Replies from: Raemon
comment by Raemon · 2020-11-19T04:53:18.322Z · LW(p) · GW(p)

Okay it looks like for the particular thing I most recently was annoyed by, it's 150 words.

This thing:

Although much recent attention has focused on identifying domain-specific taxonomic   differences   in   cognition,   little   effort   has   been   directed   towards   investigating whether domain-general differences also exist. We therefore conducted a   meta-analysis   of   published   nonhuman   primate   cognition   studies,   testing   the   prediction that some taxa outperform others across a range of testing situations. First, within each of nine experimental paradigms with interspecific variation, we grouped studies by their procedures and the characteristics of their study subjects. Then, using Bayesian   latent   variable   methods,   we   tested   whether   taxonomic   differences   consistently  held  within  or across  paradigms.  No  genus  performed  especially  well  within particular paradigms, but genera differed significantly in overall performance.  In addition, there was evidence of variation at higher taxonomic levels; most notably, great  apes  significantly  outperformed  other  lineages. These  results  cannot  be  readily  explained by perceptual biases or any other contextual confound and instead suggest that primate taxa differ in some kind of domain-general ability

Really seems to me like it's supposed to be this thing:

Although much recent attention has focused on identifying domain-specific taxonomic   differences   in   cognition, little   effort   has   been   directed   towards   investigating whether domain-general differences also exist. We therefore conducted a   meta-analysis   of   published   nonhuman   primate   cognition studies, testing the prediction that some taxa outperform others across a range of testing situations. 

First, within each of nine experimental paradigms with interspecific variation, we grouped studies by their procedures and the characteristics of their study subjects. Then, using Bayesian latent   variable   methods, we tested whether taxonomic differences   consistently  held  within  or  across paradigms.  

No  genus  performed  especially  well  within particular paradigms, but genera differed significantly in overall performance.  In addition, there was evidence of variation at higher taxonomic levels; most notably, great  apes  significantly  outperformed  other  lineages. These  results  cannot  be  readily explained by perceptual biases or any other contextual confound and instead suggest that primate taxa differ in some kind of domain-general ability

Replies from: DanielFilan, DanielFilan
comment by DanielFilan · 2020-11-19T05:49:20.218Z · LW(p) · GW(p)

RIP the concept of copy-pasting from a PDF.

comment by DanielFilan · 2020-11-19T05:51:08.738Z · LW(p) · GW(p)

I admit that that is a little more legible to me, although I'm not a researcher in the field of primatology.

Replies from: Raemon
comment by Raemon · 2020-11-19T06:08:06.022Z · LW(p) · GW(p)

I do think, like, man, I wanted to know about primatology, and it seems pretty silly to assume that science should only be relevant to specialists in a field. Especially when the solution is literally just inserting two paragraph breaks.

(I might also make claims that academic papers should be doing more effortful things to be legible, but this just seemed like a fairly straightforward thing that was more of an obviously-bad-equilibrium than a "there's a big effortful thing I think other people should do for other-other-people's benefit.")

comment by Raemon · 2020-05-28T06:48:06.402Z · LW(p) · GW(p)

I had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!).

Disclaimer: I am not making much effort to not ramble in this post.

A couple takeaways:

1. Working Memory Limits

One key problem is that I introduce so many points, subpoints, and subthreads, that I overwhelm people's working memory (where the human working memory limit is roughly "4-7 chunks").

It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress.

So, one new habit I have is "whenever I've introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4."

(Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I'm going to make, or why I'm talking about the things I'm talking about)

...

2. I just don't finish sente

I frequently don't finish sentences, whether out loud in person or in text (like emails). I've known this for a while, although I kinda forgot recently. I switch abruptly to a new sentence when I realize the current sentence isn't going to accomplish the thing I want, and I have a Much Shinier Sentence Over Here that seems much more promising.

But, people don't understand why I'm making the leap from one half-finished thought to another.

So, another simple habit is "make sure to finish my god damn sentences, even if I become disappointed in them halfway through"

...

3. Use Mindful Cognition Tuning to train on *what is easy for people to follow*, as well as to improve the creativity/usefulness of my thoughts.

I've always been rambly. But a thing that I think has made me EVEN MORE rambly in the past 2 years is a mindful-thinking-technique, where you notice all of your thoughts on the less-than-a-second level, so that you can notice which thought patterns are useful or anti-useful.

This has been really powerful for improving my thought-quality. I'm fairly confident that I've become a better programmer and better thinker because of it.

But, it introduces even more meta-thoughts for me to notice while I'm articulating a sentence, which distract me from the sentence itself.

What I realized last weekend was: I can use Mindful Cognition to notice what types of thoughts/sentences are useful for *other people's comprehension of me*, not just how useful my original thought processes are.

The whole point of the technique is to improve your feedback loop (both speed and awareness), which makes it easier to deliberate practice. I think if I just apply that towards Being More Comprehensible, it'll change from being a liability in rambliness to an asset.

Replies from: mtrazzi, Alaric
comment by Michaël Trazzi (mtrazzi) · 2020-05-28T21:13:09.951Z · LW(p) · GW(p)

re working memory: never thought of it during conversations, interesting. it seems that we sometimes hold the nodes of the conversation tree to go back to them afterward. and maybe if you're introducing new concepts while you're talking, people need to hold those definitions in working memory as well.

comment by Alaric · 2020-06-17T12:44:45.931Z · LW(p) · GW(p)

Could you explain (or give a link) what is "Mindful Cognition Tuning"?

Replies from: Raemon
comment by Raemon · 2020-03-11T21:36:43.970Z · LW(p) · GW(p)

[not trying to be comprehensible to people that don't already have some conception of Kegan stuff. I acknowledge that I don't currently have a good link that justifies Kegan stuff within the LW paradigm very well]

Last year someone claimed to me that a problem with Kegan is that there really are at least 6 levels. The fact that people keep finding themselves self-declaring as "4.5" should be a clue that 4.5 is really a distinct level. (the fact that there are at least two common ways to be 4.5 also is a clue that the paradigm needs clarification)

My garbled summary of this person's conception is:

  • Level 4: (you have a system of principles you are subject to, that lets you take level 3 [social reality??] as object)
  • Level 5: Dialectic. You have the ability to earnestly dialogue between a small number of systems (usually 2 at a time), and either step between them, or work out new systems that reconcile elements from the two of them.
  • Level 6: The thing Kegan originally meant by "level 5" – able to fluidly take different systems as object.

Previously, I had felt something like "I basically understand level 5 fine AFAICT, but maybe don't have the skills to do so fluidly. I can imagine there being some special sauce that I don't currently have but it doesn't feel very mysterious. It's not obvious what would be different about me that made me level 5."

Once I heard this description, I thought "oh, yeah this new 4.5/5 makes a lot of sense, and clearly describes where I'm currently at. Like, the Noticing Frames subsequence (and past year of my life) was basically me gaining the skill of doing Dialectic reasonably well. And from that vantage point, it makes more sense that there's a step beyond that that's something like "the process of noticing frames/systems and dialoguing between them eventually becomes a simpler object in my suite of available actions, such that I can do it seamlessly."

Replies from: romeostevensit, gworley
comment by romeostevensit · 2020-03-12T06:31:53.127Z · LW(p) · GW(p)

I think the 4.5 thing splits based on whether you mostly skipped 3 or 4.

Replies from: Raemon, gworley
comment by Raemon · 2020-03-12T16:11:09.554Z · LW(p) · GW(p)

Which is which?

Replies from: romeostevensit
comment by romeostevensit · 2020-03-12T18:30:49.063Z · LW(p) · GW(p)

I don't know how others are splitting 4.5, so I don't know the mapping.

comment by Gordon Seidoh Worley (gworley) · 2020-03-13T01:48:46.139Z · LW(p) · GW(p)

I'm not sure what you have in mind by "skipping" here, since the Kegan and other developmental models explicitly are based on the idea that there can be no skipping because each higher level is built out of new ways of combining abstractions from the lower levels.

I have noticed ways in which people can have lumpy integration of the key skills of a level (and have noticed this in various ways in myself); is that the sort of thing you have in mind by "skipping", like making it to 4 without ever having fully integrated the level 3 insights?

Replies from: mr-hire, Kaj_Sotala
comment by Matt Goldenberg (mr-hire) · 2020-03-13T02:03:57.051Z · LW(p) · GW(p)

I generally think that mindspace is pretty vast, and am predisposed to be skeptical of the claim that there's only one path to a certain way of thinking. I buy that most people follow a certain path, but wouldn't be surprised if, for instance, there's a person in history who went directly from Kegan 3 to 4.5 by never finding a value system that could stand up to their chaotic environment.

comment by Kaj_Sotala · 2021-03-28T09:41:46.362Z · LW(p) · GW(p)

David Chapman says that achieving a particular level means that the skills associated with it become logically possible for you, which is distinct from actually mastering those skills; and that it's possible for you to e.g. get to stage 4 while only having poor mastery of the skills associated with stage 3. So I would interpret "skipped stage N" as shorthand for "got to stage N+X without developing any significant mastery of stage N skills".

comment by Gordon Seidoh Worley (gworley) · 2020-03-12T00:09:10.197Z · LW(p) · GW(p)

I think this is right, although I stand by the existing numbering convention. My reasoning is that the 4.5 space is really best understood in the paradigm where the thing that marks a level transition is gaining a kind of naturalness with that level, and 4.5 is a place of seeing intellectually that something other than what feels natural is possible, but the higher level isn't yet the "native" way of thinking. This is not to diminish the in between states because they are important to making the transition, but also to acknowledge that they are not the core thing as originally framed.

For what it's worth I think Michael Commons's approach is probably a bit better in many ways, especially in that Kegan is right for reasons that are significantly askew of the gears in the brain that make his categories natural. Luckily there's a natural and straightforward mapping between different developmental models (see Integral Psychology and Ken Wilber's work for one explication of this mapping between these different models), so you can basically use whichever is most useful to you in a particular context without missing out on pointing at the general feature of reality these models are all convergent to.

Also perhaps interestingly, there's a model in Zen called the five ranks that has an interpretation that could be understood as a developmental model of psychology, but it also suggests an in-between level, although between what we might call Kegan 5 and a hypothetical Kegan 6 if Kegan had described such a level. I don't think there's much to read into this, though, as the five ranks is a polymorphic model that explains multiple things in different ways using the same structure, so this is as likely an artifact as some deep truth that there is something special about the 5 to 6 transition. But it is there, so it suggests others have similarly noticed that it's worth pointing out cases where there are levels between the "real" levels.

Similarly it's clear from Commons's model that Kegan's model is woefully under-describing the pre-3 territory, and it's possible that due to lack of data all models are failing to describe all the meaningful transition states between the higher levels. As I recall David Chapman wrote something once laying out 10 sublevels between each level, although I'm not sure how much I would endorse that approach.

comment by Raemon · 2019-08-30T21:39:30.866Z · LW(p) · GW(p)

After a recent 'doublecrux meetup' (I wasn't running it but observed a bit), I was reflecting on why it's hard to get people to sufficiently disagree on things in order to properly practice doublecrux.

As mentioned recently, it's hard to really learn doublecrux unless you're actually building a product that has stakes. If you just sorta disagree with someone... I dunno you can do the doublecrux loop but there's a sense where it just obviously doesn't matter.

But, it still sure is handy to have practiced doublecruxing before needing to do it in an important situation. What to do?

Two options that occur to me are

  • Singlecruxing
  • First try to develop a plan for building an actual product together, THEN find a thing to disagree about organically through that process.

[note: I haven't actually talked much with the people whose major focus is teaching doublecrux, not sure how much of this is old hat, or if there's a totally different approach that sort of invalidates it]

SingleCruxing

One challenge about doublecrux practice is that you have to find something you have strong opinions about and also someone else has strong opinions about. So... just sidestep that problem by only worrying about something that you have strong opinions about.

Pick a belief that is actually relevant to your plans (such as where you're planning to go to college, or what kind of career to go into, or ideally a project you're actually working on that you're excited about).

What beliefs are you confident in, that are underpinning your entire approach? (i.e. "going to college in the first place is the right move" or "A job in this industry will make me happier than this other industry" or "this project is a good idea because people will buy the product I'm building.")

Instead of practicing discussing this with someone else, you can just ask yourself, with no one else around you, why you believe what you believe, and what would change your mind about it.

Having considered this, I think I like it a lot as a "doublecruxing 101" skill.

One problem with learning doublecrux is that doing it properly takes a while, and in my experience starts with a phase that's more about model-sharing, before moving to the "actually find your own cruxes and figure out what would change your mind." But, the first part isn't actually all that different from regular debate, or discussion. And it's not quite clear when to transition to the second part (or, it naturally interweaves with the first part. See postformal doublecrux).

This makes it hard to notice and train the specific skills that are unique to doublecrux.

I like the notion that "first you learn singlecrux, then doublecrux" because a) it's just generally a useful skill to ask why you actually believe the things you do and what would change your mind, and b) I think it's much easier to focus on the active, unique ingredients when the topic isn't getting blurred with various other conversational skills, and/or struggling to find a thing that's worth disagreeing about in the first place.

It'd also have the advantage that you can think about something that's actually slightly triggering/uncomfortable for you to consider (which I think is pretty valuable for actually learning to do the skill "for real"), but where you only have to worry about how you feel about it, rather than also have to figure out how to relate to someone else who might also feel strongly.

I think this'd be particularly good for local meetups that don't have the benefit of instructors with a lot of practice helping people learn the doublecrux skill.

(I might still have people pair up, but only after thinking privately about it for 5 minutes, and the pairs of people would not be disagreeing with each other, just articulating their thought processes to each other. At any given time, you'd have one "active participant" talking through why they believe what they believe and how they might realistically change their mind about it, and another partner acting more as a facilitator to notice if they're stuck in weird thought pattern loops or something)

Finding a product you both might actually want to build, then disagreeing about it

But, still, sooner or later you might want to practice doublecruxing. What then? How do you reliably find things you disagree about?

The usual method I've observed is have people write down statements that they believe in confidently, that they think other people might disagree with, and then pair up based on shared disagreement. This varies in how well it produces disagreements that feel really 'alive' and meaningful.

But if doublecrux actually is mostly for building products, a possible solution might be instead to pair up based on shared interests in the sorts of projects you might want to build. You (probably?) won't actually build the product, but it seems important that you be able to talk about it as realistically as possible.

Then, you start forming a plan about how to go about building it, while looking for points of disagreement. Lean into that disagreement when you notice it, and explore the surrounding concept-space.

At last night's meetup, I paired with someone and suggested this idea to them. We ended up with the (somewhat meta) actual shared product of "how to improve the Berkeley rationality community." We discussed that for 20 minutes, and eventually found disagreement about whether communities require costly signals of membership to be any good, or whether they could instead be built off other human psychological quirks.

This was not a disagreement I think we would have come up with if we "listed a bunch of things we felt strongly about." And it felt a lot more real.

(I do think there's a risk of most pairs of people ending up with "the local community" or something similarly meta as their 'product', but the actual disagreements I expect to still be fairly unique and dependent on the people in question)

Replies from: mr-hire, matthew-barnett
comment by Matt Goldenberg (mr-hire) · 2019-08-30T22:23:35.502Z · LW(p) · GW(p)

Another useful skill you can practice is *actually understanding people's models*. Like, find something someone else believes, guess what their model is, then ask them "so your model is this?", then repeat until they agree that you understand their model. This sort of active listening around models is definitely a prerequisite doublecrux skill and can be practiced without needing someone else to agree to doublecrux with you.

Replies from: Raemon
comment by Raemon · 2019-08-30T22:33:34.617Z · LW(p) · GW(p)

Nod. I haven't actually been to CFAR recently, not sure how they go about it there. But I think for local meetups, practice that breaks it down into subskills seems pretty useful, and I agree with active listening being another key one.

comment by Matthew Barnett (matthew-barnett) · 2019-08-30T22:52:37.001Z · LW(p) · GW(p)

As someone who may or may not have been part of the motivation for this shortform, I just want to say that it was my first time doing double crux and so I'm not sure whether I actually understood it.

Replies from: Raemon, Raemon
comment by Raemon · 2019-08-30T22:54:18.590Z · LW(p) · GW(p)

Heh, you were not the motivating person, and more generally this problem has persisted on most doublecrux meetups I've been to. (There were at least 3 people having this issue yesterday)

comment by Raemon · 2019-08-31T02:37:18.512Z · LW(p) · GW(p)

I'm also curious, as a first-time-doublecruxer, what ended up being particular either confusions or takeaways or anything like that.

comment by Raemon · 2024-12-20T20:33:59.711Z · LW(p) · GW(p)

I’d like to hire cognitive assistants and tutors more often. This could (potentially) be you, or people you know. Please let me know if you’re interested or have recommendations.

By “cognitive assistant” I mean a range of things, but the core thing is “sit next to me, and notice when I seem like I’m not doing the optimal thing, and check in with me.” I’m interested in advanced versions who have particular skills (like coding, or Applied Quantitivity, or good writing, or research taste) who can also be tutoring me as we go.

I’d like a large rolodex of such people, both for me, and other people I know who could use help. Let me know if you’re interested.

I was originally thinking "people who live in Berkeley" but upon reflection this could maybe be a remote role.

Replies from: Viliam
comment by Viliam · 2024-12-22T16:19:57.890Z · LW(p) · GW(p)

Sounds like pair programming, except the programming part is optional.

I’d like a large rolodex of such people, both for me, and other people I know who could use help.

Maybe different people need different assistants.

Seems to me that being a good assistant has two components: good communication skills (patience, clarity of explaining, adjusting the advice to target's current skills and knowledge), and skills in the specific thing you want to assist with. With the communication skills, different people may prefer different styles, but there probably would be a general consensus on what is better. With the task-specific skills, it depends on what you already know. Someone could provide useful advice to beginners, but have nothing useful to say to an expert.

I guess, if you make a list for other people, it should make clear at what skill level the assistant will still be useful to you. There is nothing wrong with only being useful to beginners, if there are beginners who will use the list; and in a large group there will probably be more beginners than experts on any specific topic.

comment by Raemon · 2024-08-07T01:50:21.894Z · LW(p) · GW(p)

I notice some people go around tagging posts with every plausible tag that possibly seems like it could fit. I don't think this is a good practice – it results in an extremely overwhelming and cluttered tag-list, which you can't quickly skim to figure out "what is this post actually about?", and I roll to disbelieve on "stretch-tagging" actually helping people who are searching tag pages.

Replies from: Josephm
comment by Joseph Miller (Josephm) · 2024-08-07T06:53:20.079Z · LW(p) · GW(p)

There should probably be guidance on this when you go to add a tag. When I write a post I just randomly put some tags and have never previously considered that it might be prosocial to put more or less tags on my post.

Replies from: Viliam
comment by Viliam · 2024-08-08T13:03:08.468Z · LW(p) · GW(p)

I think people vote on tags, so if more people agree that the tag is relevant, the article gets higher in the list. So extra tags (that people won't vote for) do create some noise, but only at the bottom of the list.

This is how I think this works; I may be wrong.

comment by Raemon · 2022-04-13T00:30:02.495Z · LW(p) · GW(p)

I just briefly thought you could put a bunch of AI researchers on a spaceship, and accelerate it real fast, and then they get time dilation effects that increase their effective rate of research.

Then I remembered that time dilation works the other way 'round – they'd get even less time.
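
(For anyone who wants the one-line version of why the direction flips – this is just the textbook special-relativity time-dilation relation, nothing extra assumed: a clock moving at speed $v$ relative to you accumulates less proper time,

$$\Delta\tau = \Delta t \sqrt{1 - v^2/c^2} = \frac{\Delta t}{\gamma} < \Delta t$$

so the researchers on the fast spaceship get less subjective research time per unit of Earth time, not more.)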

This suggested a much less promising plan of "build narrowly aligned STEM AI, have it figure out how to efficiently accelerate the Earth real fast and... leave behind a teeny moon base of AI researchers who figure out the alignment problem."

Replies from: gwern, Dagon
comment by gwern · 2022-04-13T00:30:50.264Z · LW(p) · GW(p)

More or less the plot of https://en.wikipedia.org/wiki/Orthogonal_(series) incidentally.

comment by Dagon · 2022-04-13T20:14:39.299Z · LW(p) · GW(p)

+1 for thinking of unusual solutions.  If it's feasible to build long-term very-fast-relative-to-earth habitats without so much AI support that we lose before it launches, we should do that for random groups of humans.  Whether you call them colonies or backups doesn't matter.  We don't have to save all people on earth, just enough of humanity that we can expand across the universe fast enough to rescue the remaining victims of unaligned AI sometime.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-04-17T15:28:59.923Z · LW(p) · GW(p)

I think an unaligned AI would have a large enough strategic advantage that such an attempt is hopeless without aligned AI. So these backup teams would need to contain alignment researchers. But we don't have enough researchers to crew a bunch of space missions, all of which need to have a reasonable chance of solving alignment.

comment by Raemon · 2022-02-13T21:29:55.493Z · LW(p) · GW(p)

Man, I watched The Fox and The Hound a few weeks ago. I cried a bit.

While watching the movie, a friend commented "so... they know that foxes are *also* predators, right?" and, yes. They do. This is not a movie that was supposed to be about predation except it didn't notice all the ramifications of its lesson. This movie just isn't taking a stand about predation.

This is a movie about... kinda classic de-facto tribal morality. Where you have your family and your tribe and a few specific neighbors/travelers that you welcomed into your home. Those are your people, and the rest of the world... it's not exactly that they aren't *people*, but, they aren't in your circle of concern. Maybe you eat them sometimes. That's life.

Copper the hound dog's ingroup isn't even very nice to him. His owner, Amos, leaves him out in a crate on a rope. His older dog friend is sort of mean. Amos takes him out on a hunting trip and teaches him how to hunt, conveying his role in life. Copper enthusiastically learns. He's a dog. He's bred to love his owner and be part of the pack no matter what.

My dad once commented that this was a movie that... seemed remarkably realistic about what you can expect from animals. Unlike a lot of other disney movies it didn't require suspending disbelief much.

A baby fox and hound might totally play together because they haven't figured out yet that they're supposed to be enemies.

If that hound then went away for 6 months to learn to hunt, and came back, it might initially be hesitant to hunt down its fox friend, out of a vague/confused memory.

But interspecies friendship isn't *that* strong, and doing-what-your-species/tribe does is often stronger, and yeah later on the hound is like "okay I guess we're hunting this fox now. It's what master wants. I do what master says, that's who I am."

And then...

...well, and then the hound gets attacked by a bear. And the fox comes back to save him. And on one hand, I bet 99+% of foxes would not do that. But, having seen a bunch of youtubes of animals doing impressive things, I'm willing to buy that the occasional hero foxes exist, and I'm willing to buy a fox who remembers enough of his interspecies friend to intervene bravely.

(weakly held, if someone who knew a lot more about foxes than me was like "nope, this is outside the space of what foxes do", I'd believe them)

And what gets me is... man, this is what my morality is built out of. People who were mostly doing what nature or society incentivized, who are still on the verge of eating their friends, but there's little sparks of friendship/compassion/abstract-reasoning that build up along the way, combined with the abundance necessary for them to grow. And that's where my morality comes from. Man.

comment by Raemon · 2021-03-25T20:57:17.646Z · LW(p) · GW(p)

Sometimes the subject of Kegan Levels comes up and it actually matters a) that a developmental framework called "kegan levels" exists and is meaningful, b) that it applies somehow to The Situation You're In.

But, almost always when it comes up in my circles, the thing under discussion is something like "does a person have the ability to take their systems as object, move between frames, etc." And AFAICT this doesn't really need to invoke developmental frameworks at all. You can just ask if a person has the "move between frames" skill.*

This still suffers a bit from the problem where, if you're having an argument with someone, and you think the problem is that they're lacking a cognitive skill, it's a dicey social move to say "hey, your problem is that you lack a cognitive skill." But, this seems a lot easier to navigate than "you are a Level 4 Person in this 5 Level Scale".

(I have some vague sense that Kegan 5 is supposed to mean something more than "take systems as object", but no one has made a great case for this yet, and in any case it hasn't been the thing I'm personally running into)

Replies from: Richard_Kennaway, Viliam, gworley
comment by Richard_Kennaway · 2021-03-27T14:36:30.165Z · LW(p) · GW(p)

Kegan levels lend themselves to being used like one of those irregular verbs, like "I am strong minded, you are stubborn, he is a pig-headed fool."

"I am Kegan level 5, you are stuck on Kegan level 4, and all those dreadful normies and muggles around us are Kegan 3 or worse."

comment by Viliam · 2021-03-26T23:14:36.730Z · LW(p) · GW(p)

Seems to me that the main problem with linear systems where you put yourself at the top (because, who doesn't?), is that the only choice it gives everyone else is either to be the same as you, or to be inferior. Disagreeing with the system probably makes one inferior, too.

Feels a bit ironic, if this is considered to be a pinnacle of emotional development...

But of course now I am constructing a frame where I am at the top and those people who like the Kegan scale are silly, so... I guess this is simply what humans do: invent classifications that put them on the top. ;)

And it doesn't even mean that those frames are wrong; if there is a way to put people on a linear scale, then technically, someone has to be on the top. And if the scale is related to understanding, then your understanding of the scale itself probably should correlate with your position on it.

So, yes, it is better to not talk about the system itself, and just tell people where specifically they made a mistake.

comment by Gordon Seidoh Worley (gworley) · 2021-03-26T01:36:07.549Z · LW(p) · GW(p)

The original formulation definitely mixes in a bunch of stuff along with it, the systems as object thing is meant to be characteristic, but it's not all of the expected stuff. Most people don't push the hard version that taking systems as object is not just characteristic but causally important (I say this even though I do push this version of the theory).

It is actually kinda rude to psychologize other people, especially if you miss the mark, and especially especially if you hit the mark and they don't like it, so it's probably best to just keep your assessment of their Kegan level to yourself unless it's explicitly relevant since bringing it up will probably work against you even if in a high-trust environment it wouldn't (and you are unlikely to be in a high-trust enough environment for it to work even if you think you are).

As for asking people if they have the skill, I don't expect that to work since it's easy to delude yourself that you do because you can imagine doing it or can do it in an intellectual way, which is better than not being able to do it at all but is also not the real deal and will fall apart the moment anything overloads global memory or otherwise overtaxes the brain.

Replies from: Raemon
comment by Raemon · 2021-03-26T04:34:52.745Z · LW(p) · GW(p)

As for asking people if they have the skill,

I actually was not expecting the process to be "ask if they have the skill", I was expecting the sequence to be:

  1. get into an argument
  2. notice it feels stuck
  3. notice that your conversation partner seems stuck in a system
  4. make some effort to convey that you're trying to talk about a different system
  5. say (some version of) "hey man, it looks like you don't have the 'step outside your current frame' skill, and I don't think the argument is worth having until you do."

(well, that's probably an unproductive way to go about it, but, I'm assuming the 'notice they don't have the skill' part comes from observations while arguing rather than something you ask them and they tell you about.)

Replies from: Viliam, gworley
comment by Viliam · 2021-03-26T21:24:06.885Z · LW(p) · GW(p)

say (some version of) "hey man, it looks like you don't have the 'step outside your current frame' skill, and I don't think the argument is worth having until you do."

Maybe a more diplomatic way could be: "hey man, for the sake of thought experiment, could we for a moment consider this thing from a different frame?" They may agree or refuse, but probably won't feel offended.

comment by Gordon Seidoh Worley (gworley) · 2021-03-26T20:44:03.089Z · LW(p) · GW(p)

Something about this feels like what I used to do but don't do now, and I realized what it is.

If they're stuck I don't see it as their problem, I see it as my problem that I can't find a way to take my thing and make it sensible to them within their system, or at least find an entry point, since all systems are brittle and you just have to find the right thread to pull if you want to untangle it so they can move towards seeing things in ways beyond what their current worldview permits.

But maybe my response looks the same if I can't figure it out and/or don't feel like putting in the energy to do that, which is some version of "hey, looks like we just disagree in some fundamental way here I'm not interested in trying to resolve, sorry", which I regret is kinda rude still and wish I could find a way to be less rude about.

Replies from: Raemon
comment by Raemon · 2021-03-26T21:47:02.438Z · LW(p) · GW(p)

I think I don't feel too bad about "hey, looks like we just disagree in some fundamental way here I'm not interested in trying to resolve, sorry". It might be rude in some circles but I think I'm willing to bite the bullet on "it's pretty necessary for that to be an okay-move to pull on LW and in rationalist spaces."

I think "we disagree in a fundamental way" isn't quite accurate, and there's a better version that's something like "I think we're thinking in pretty different frames/paradigms and I don't think it makes sense to bridge that disconnect."

A thing making it tricky (also relevant to Viliam's comment) is that up until recently there wasn't even a consensus that different-frames were a thing, that you might need to translate between.

comment by Raemon · 2020-05-24T04:03:20.285Z · LW(p) · GW(p)

There's a problem at parties where there'll be a good, high-context conversation happening, and then one-too-many-people join, and then the conversation suddenly dies.

Sometimes this is fine, but other times it's quite sad.

Things I think might help:

  • If you're an existing conversation participant:
    • Actively try to keep the conversation small. The upper limit is 5, 3-4 is better. If someone looks like they want to join, smile warmly and say "hey, sorry we're kinda in a high context conversation right now. Listening is fine but probably don't join."
    • If you do want to let a newcomer join in, don't try to get them up to speed (I don't know if I've ever seen that actually work). Instead, say "this is high context so we're not gonna repeat the earlier bits, maybe wait to join in until you've listened enough to understand the overall context", and then quickly get back to the conversation before you lose the Flow.
  • If you want to join a conversation:
    • If there are already 5 people, sorry, it's probably too late. Listen if you find it interesting, but if you actively join you'll probably just kill the conversation.
    • Give them the opportunity to gracefully keep the conversation small if they choose. (say something like "hey can I join? It sounds like maybe a high context conversation, no worries if you wanna keep it small.")
    • Listen for longer before joining. Don't just wait till you understand the current topic – try to understand the overall vibe, and what previous topics might be informing the current one. Try to get a sense of what each current participant is getting out the conversation. When you do join, do so in a small way that gives them affordance to shift back to an earlier topic if your new contribution turned out to be not-actually-on-topic.
Replies from: Dagon, mr-hire
comment by Dagon · 2020-05-24T15:53:43.732Z · LW(p) · GW(p)

+lots. Some techniques:

  • physically separate the group. Go into another room or at least corner. Signal that you're not seeking additional participants.
  • When you notice this, make it explicit - "I'm really enjoying the depth of this conversation, should we move into the lounge for a brandy and a little more quiet?"
  • Admit (to yourself) that others may feel excluded, because they are. At many gatherings, such discussions/situations are time-bound and really can't last more than 10-45 minutes. The only solution is to have more frequent, smaller gatherings.
  • Get good at involved listening - it's different than 1:1 active listening, but has similar goals: don't inject any ideas, but do give signals that you're following and supporting. This is at least 80% as enjoyable as active participation, and doesn't break the flow when you join a clique in progress.

I wonder what analogs there are to online conversations. I suspect there's a lot of similarity for synchronous chats - too many people make it impossible to follow. For threaded, async discussions, the limits are probably much larger.

Replies from: clearthis, Raemon
comment by Tobias H (clearthis) · 2020-05-24T16:16:05.327Z · LW(p) · GW(p)

[EDIT, was intended as a response to Raemon, not Dagon.]

Maybe it's the way you phrase the responses. But as described, I get the impression that this norm would mainly work for relatively extroverted persons with low rejection sensitivity.

I'd be much less likely to ever try to join a discussion (and would tend to not attend events with such a norm). But maybe there's a way to avoid this, both from "my side" and "yours".

Replies from: Raemon
comment by Raemon · 2020-05-24T18:40:09.484Z · LW(p) · GW(p)

Hmm, seems like important feedback. I had specifically been trying to phrase the responses in a way that addressed this specific problem. Sounds like it didn't work.

There is some intrinsic rejection going on here, which probably no amount of kind wording can alleviate for a rejection-sensitive person.

For my "sorry, we're keeping the convo small" bit, I suggested:

smile warmly and say "hey, sorry we're kinda in a high context conversation right now. Listening is fine but probably don't join."

The Smile Warmly part was meant to be a pretty active ingredient, helping to reassure them it isn't personal. 

Another thing that seems pretty important, is that this applies to all newcomers, even your friends and High Status People. (i.e. hopefully if Anxious Alex gets turned away, but later sees High Status Bob also get turned away, they get reassured a bit that this wasn't about them)

comment by Raemon · 2020-05-24T18:50:46.241Z · LW(p) · GW(p)

I wonder what analogs there are to online conversations. I suspect there's a lot of similarity for synchronous chats - too many people make it impossible to follow. For threaded, async discussions, the limits are probably much larger.

FYI, the actual motivating example here was at a party in gather.town, (formerly online.town, formerly town.siempre), which has much more typical "party" dynamics. (i.e people can wander around an online world and video chat with people nearby).

In this case there were actually some additional complexities – I had joined a conversation relatively late, I did lurk for quite a while, and wait for the current set of topics to die down completely before introducing a new one. And then the conversation took a turn that I was really excited by, and at least 1-2 other people were interested in, but it wasn't obvious to me that it was interesting to everyone else (I think ~5 people involved total?)

And then a new person came in, and asked what we were talking about and someone filled them in... ...and then immediately the conversation ended. And in this case I don't know if the issue was more like "the newcomer killed the conversation" or "the convo actually had roughly reached its natural end, and/or other people weren't that interested in the first place."

But, from my own perspective, the conversation had just finished covering all the obvious background concepts that would be required for the "real" conversation to begin, and I was hoping to actually Make Real Progress on a complex concept.

So, I dunno if this counted as "an interesting conversation" yet, and unfortunately the act of asking the question "hey, do we want to continue diving deep into this, or wrap up and transition into some other convo?" also kinda kills the conversation. Conversations are so god damn fragile.

What I really wished was that everyone already had common knowledge of the meta-concept, wherein:

  • Party conversations are particularly fragile
  • Bringing a newcomer up to speed is usually costly if the conversation is doing anything deep
  • We might or might not want to continue delving into the current convo (but we don't currently have common knowledge of this in either direction)

And if everyone (newcomer included) had those concepts, and knew everyone had those concepts, then I feel like I could have asked more gracefully "hey, I'm kinda interested in continuing to hash out some ideas here. Are people up for taking this high context?", and had people give their honest answer.

comment by Matt Goldenberg (mr-hire) · 2020-05-25T22:11:28.379Z · LW(p) · GW(p)

I hosted an online-party using zoom breakout rooms a few weeks ago and ran into similar problems.

Half-way through the party I noticed people were clustering in suboptimal size conversations and bringing high-context conversations to a stop, so I actually brought everybody back to the lobby then randomly assigned them to groups of 2 or 3 - and when I checked 10 minutes later everyone was in the same two rooms again with groups of 8 - 10 people.

AFAICT this was status/feelings driven - there were a few people at the party who were either existing high-status to the participants, or who were very charismatic, and everyone wanted to be in the same conversation as them.

I think norm-setting around this is very hard, because it's natural to want to be around high-status and charismatic people, and it's also natural to want to participate in a conversation you're listening to.

I'm going to try to add your suggestions to the top of the shared google doc next time I host one of these and see how it goes.

Replies from: Raemon
comment by Raemon · 2020-05-25T23:36:59.222Z · LW(p) · GW(p)

Agreed with the status/feelings cause. And I'm not 100% sure the solution is "prevent people from doing the thing they instinctively want to do" (especially "all the time.")

My current guess is "let people crowd around the charismatic/and/or/interesting people, but treat it more like a panel discussion or fireside chat, like you might have at a conference, where mostly 2-3 people are talking and everyone else is more formally 'audience.'"

But doing that all the time would also be kinda bad in different ways.

In this case... you might actually be able to fix this with technology? Can you literally put room-caps on the rooms, so if someone wants to be the 4th or 6th person in a room they... just... can't?

comment by Raemon · 2019-10-24T02:48:34.983Z · LW(p) · GW(p)

I'm not sure why it took me so long to realize that I should add a "consciously reflect on why I didn't succeed at all my habits yesterday, and make sure I don't fail tomorrow" to my list of daily habits, but geez it seems obvious in retrospect.

Replies from: Raemon, None
comment by Raemon · 2020-01-01T23:44:02.741Z · LW(p) · GW(p)

Following up to say that geez any habit practice that doesn't include this now feels super silly to me.

comment by [deleted] · 2019-10-24T19:34:35.205Z · LW(p) · GW(p)

Just don't get trapped in infinite recursion and end up overloading your habit stack frame!

Replies from: Raemon
comment by Raemon · 2019-10-24T19:48:39.348Z · LW(p) · GW(p)

I mean, the whole thing only triggers once per day, so I can't go farther than a single loop of "why didn't I reflect on my habit-failure yesterday?" :P

(But yeah I think I can handle up-to-one-working-memory-load of habits at a time)

Replies from: None
comment by [deleted] · 2019-10-24T19:56:16.073Z · LW(p) · GW(p)

Uh, what if you forget to do your habit troubleshooting habit and then you have to troubleshoot why you forgot it? And then you forget it twice and you have to troubleshoot why you forgot to troubleshoot forgetting to troubleshoot!

(I'm joking about all this in case it's not obvious.)

comment by Raemon · 2019-08-31T20:12:34.747Z · LW(p) · GW(p)

Strategic use of Group Houses for Community Building

(Notes that might one day become a blogpost. Building off The Relationship Between the Village and the Mission [LW · GW]. Inspired to go ahead and post this now because of John Maxwell's "how to make money reducing loneliness [LW · GW]" post, which explores some related issues through a more capitalist lens)

  • A good village needs fences:
    • A good village requires doing things on purpose. 
    • Doing things on purpose requires that you have people who are coordinated in some way
    • Being coordinated requires you to be able to have a critical mass of people who are actually trying to do effortful things together (such as maintain norms, build a culture, etc)
    • If you don't have a fence that lets some people in and doesn't let in others, and which you can ask people to leave, then your culture will be some random mishmash that you can't control
  • There are a few existing sets of fences. 
    • The strongest fences are group houses, and organizations. Group houses are probably the easiest and most accessible resource for the "village" to turn into a stronger culture and coordination point. 
  • Some things you might coordinate using group houses for:
    • Strengthening friendships
      • Select people who actually have a decent chance of wanting to be good friends
      • Don't stress overmuch about getting the perfect set of people – overly stressing about finding the 'best' people to be friends with is one of the pathologies in the Bay area that make friendship harder. If everyone's doing it, no one has the ability to let a friendship actually grow, which takes time.
      • DO find people you enjoy hanging out with, talking to, and share some interests with
      • It may take multiple years to find a group house where everyone gets along with everyone. I think it makes sense, earlier on, to focus on exploring (i.e. if you've just moved to the Bay, don't worry about getting a group house culture that is a perfect fit), but within 3 years I think it's achievable for most people to have found a group house that is good for friendship.
      • Once you've got a group house that seems like a good longterm home, actually invest in it.
        • Do things with your roommates. 
        • Allocate time, not just for solving logistical problems, but for getting on the same page emotionally
        • "Deep friendships often come from striving and growing together." Look for opportunities for shared activities that are active rather than passive and involve growing skills that you are excited about.
          • But, probably don't try to force this. Sometimes you're at the same stage in a life trajectory as someone else, and you're growing in the same way at the same time. But not always. And later on you may want to keep growing in a direction where someone else feels that they've solved their bottleneck and growing more in that direction isn't that relevant to them anymore. That's okay.
    • Having a nicer place to live
      • I think this is an important level, lower on the Maslow hierarchy than the strong-friendships one. If your house isn't a nice place to live, you'll probably have a harder time forming friendships with people there.
      • "Nice place to live" means different things to different people. Form a group house with people who have similar desires re: cleanliness and approaches to problem solving and aesthetics, etc.
    • Deliberately cultivating your incentives
      • What sort of environment you're in shapes what sort of ways you grow. You might care about this for reasons other than incidentally helping deepen friendships.
      • This depends both on having people who want to cultivate the same sorts of incentives that you do, and on actually coordinating with each other to hold each other to those incentives
      • Be wary of your, and other's, desire to have the self-image as someone who wants to grow in a particular way. I've seen a failure mode where people felt vaguely obligated to pay lip service to certain kinds of growth but it wasn't actually what they wanted
      • Be wary of "generic emphasis on growth". A thing I've seen a few group houses try is something like "self improvement night" where they try to help each other level up, and it often doesn't work because people are just interested in pretty different skillsets.
Replies from: Vaniver
comment by Vaniver · 2019-09-01T19:26:09.879Z · LW(p) · GW(p)
It may take multiple years to find a group house where everyone gets along with everyone. I think it makes sense, earlier on, to focus on exploring (i.e. if you've just moved to the Bay, don't worry about getting a group house culture that is a perfect fit), but within 3 years I think it's achievable for most people to have found a group house that is good for friendship.

A thing that I have seen work well here is small houses nucleating out of large houses. If you're living in a place with >20 people for 6 months, probably you'll make a small group of friends that want similar things, and then you can found a smaller place with less risk. But of course this requires there being big houses that people can move into and out of, and that don't become the lower-common-denominator house that people can't form friendships in because they want to avoid the common spaces.

But of course the larger the house, the harder it is to get off the ground, and a place with deliberately high churn represents even more of a risk.

comment by Raemon · 2019-06-11T20:24:05.511Z · LW(p) · GW(p)

Lately I've been noticing myself getting drawn into more demon-thready discussions on LessWrong. This is in part due to UI choice – demon threads (i.e. usually "arguments framed through 'who is good and bad and what is acceptable in the overton window'") are already selected for above-average engagement. Any "neutral" sorting mechanism for showing recent comments is going to reward demon-threads disproportionately.

An option might be to replace the Recent Discussion section with a version of itself that only shows comments and posts from the Questions page (in particular for questions that were marked as 'frontpage', i.e. questions that are not about politics).

I've had some good experiences with question-answering, where I actually get into a groove where the thing I'm doing is actual object-level intellectual work rather than "having opinions on the internet." I think it might be good for the health of the site for this mode to be more heavily emphasized.

In any case, I'm interested in making a LW Team internal option where the mods can opt into a "replace recent discussion with recent question activity" to experiment with living in a world that contains more nudges towards the object level and seeing how that goes.

My current best guess is that the best option includes giving people more choices about how Recent Discussion works, and then having the default choice for new users be something a little more magical that is filtered to push things more towards the object level.

comment by Raemon · 2022-12-07T23:50:05.631Z · LW(p) · GW(p)

I still want to make a really satisfying "fuck yeah" button on LessWrong comments that feels really good to press when I'm like "yeah, go team!" but doesn't actually mean I want to reward the comment in our longterm truthtracking or norm-tracking algorithms.

I think this would seriously help with weird sociokarma cascades.  

Replies from: derpherpize, Viliam, Dagon
comment by Lao Mein (derpherpize) · 2022-12-08T01:50:18.556Z · LW(p) · GW(p)

You should just message them directly. "Your comment was very based." would feel quite nice in my inbox.

Replies from: Raemon
comment by Raemon · 2022-12-08T02:09:05.592Z · LW(p) · GW(p)

It needs to be less effort than upvoting to accomplish the thing I want.

comment by Viliam · 2022-12-09T11:12:12.819Z · LW(p) · GW(p)

Ah, I imagine a third set of voting buttons, with large colorful buttons "yay, ingroup!!!" and "fuck outgroup!!!", with the following functionality:

  • in your personal settings, you can replace the words "ingroup" and "outgroup" with custom text
  • only the votes that agree with you are displayed; for example if there are 5 "yay" votes and 7 "boo" votes, if you voted "yay", you will only see "5 people voted yay on this comment" (not the total -2)
  • the yay/boo votes have no impact on karma
  • if you make a yay/boo vote, the other two sets of voting buttons are disabled for this comment

What I expect from this solution:

  • to be emotionally deeply satisfying
  • without having any impact on karma (actually it would take mindkilling votes away from the karma buttons)
comment by Dagon · 2022-12-08T18:29:32.691Z · LW(p) · GW(p)

What longterm truthtracking or norm-tracking algorithms are you talking about? Can you give a few examples of sociokarma cascades that you think will be improved by this complexity? Would adding agree/disagree to top-level posts be sufficient (oh, wait, you're talking about comments. How does agree/disagree not solve this?)

More fundamentally, why do you care about karma, aside from a very noisy short-term input into whether a post or comment is worth thinking about?

Now if you say "do away with strong votes, and limit karma-based vote multiples to 2x", I'm fully onboard.

comment by Raemon · 2019-08-05T19:18:41.731Z · LW(p) · GW(p)

Can democracies (or other systems of government) do better by more regularly voting on meta-principles, but having those principles come into effect N years down the line, where N is long enough that the current power structures have less clarity over who would benefit from the change?

Some of the discussion on Power Buys You Distance From the Crime [LW · GW] notes that campaigning to change meta principles can't actually be taken at face value (or at least, people don't take it at face value), because it can be pretty obvious who would benefit from a particular meta principle. (If the king is in power and you suggest democracy, obviously the current power structure will be weakened. If people rely on Gerrymandering to secure votes, changing the rules on Gerrymandering clearly will have an impact on who wins next election)

But what if people voted on changing rules for Gerrymandering, and the rules wouldn't kick in for 20 years. Is that more achievable? Is it better or worse?

The intended benefit is that everyone might roughly agree it's better for the system to be more fair, but not if that fairness will clearly directly cost them. If a rule change occurs far enough in the future, it may be less clear who will benefit from the change.

This is perhaps related to Robin Hanson's Near Mode vs Far mode. I think people are more idealistic in Far Mode... and at least sometimes this just seems good? Lack of clarity over who benefits from a change might improve people's ability to think about longterm benefits and fairness.

Replies from: habryka4
comment by habryka (habryka4) · 2019-08-05T20:05:05.264Z · LW(p) · GW(p)

I have a bunch of thoughts on this. A lot of the good effects of this actually happened in space-law, because nobody really cared about the effects of the laws when they were written.

Another interesting contract that was surprisingly long-lasting is Britain's lease of Hong Kong, which was returned after 99 years.

However, I think there are various problems with doing this a lot. One of them is that when you make a policy decision that's supposed to be useful in 20 years, then you are making a bid on that policy being useful in the environment that will exist in 20 years, over which you have a lot of uncertainty. So by default I expect policy-decisions made for a world 20 years from now to be worse than decisions made for the current world.

The enforcability of contracts over such long time periods is also quite unclear. What prevents the leadership 15 years from now from just calling off the policy implementation? This requires a lot of trust and support for the meta-system, which is hard to sustain over such long periods of time.

In general, I have a perspective that lots of problems could be solved if people could reliably make long-term contracts, but that there are no reliable enforcement mechanisms for long-term contracts at the national-actor level.

Replies from: Dagon
comment by Dagon · 2019-08-05T23:10:30.705Z · LW(p) · GW(p)

I think lack of long-term contract enforcement is one part of it - the US congress routinely passes laws with immediate costs and delayed revenue, and then either continually postpones or changes its mind on the delayed part (while keeping the immediate part). I'd classify it as much as deception as lack of enforcement. It's compounded by the fact that the composition of the government changes a bit every 2 years, but the fundamental problem is that "enforcement" is necessary, because "alignment" doesn't exist.

Trying to go meta and enforce far-mode stated values rather than honoring near-mode actual behaviors is effectively forcing people into doing what they say they want, as opposed to inferring what they actually want. I'm actually sympathetic to that tactic, but I do recognize that it's coercion (enforcement of ill-considered contract) rather than actual agreement (where people do what they want, because that's what they want).


comment by Raemon · 2017-12-31T20:57:33.647Z · LW(p) · GW(p)

Musings on ideal formatting of posts (prompted by argument with Ben Pace)

My thoughts:

1) Working memory is important.

If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.

2) Less Wrong is for thinking

This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them.

3) You can expand working memory with visual reference

Having larger monitors or notebooks to jot down thoughts makes it easier to think.

The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain).

But regardless of font-size:

4) Optimizing a post for re-skimmability makes it easier to refer to.

This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Sunset at Noon for an example)

Ben's Counter:

Ben Pace noticed this while reviewing an upcoming post I was working on, and his feeling was "all this bold is making me skim the post instead of reading it."

To which all I have to say is "hmm. Yeah, that seems likely."

I am currently unsure of the relative tradeoffs.

Replies from: Zvi
comment by Zvi · 2018-01-01T00:59:52.948Z · LW(p) · GW(p)

I pushed Oliver for smaller font size when I first saw the LW 2.0 design (I'd prefer something like the comments font), partly for the words-in-mind reason. I agree that bigger words work against complex and deep thinking, and also think that any time you force someone to scroll, you risk disruption (when you have kids you're trying to deal with, being forced to interact with the screen can be a remarkably large negative).

I avoid bold and use italics instead because of the skimming effect. I feel like other words are made to seem less important when things are bolded. Using it not at all is likely a mistake, but I would use it sparingly, and definitely not use it as much as in the comment above.

I do think that using variable font size for section headings and other similar things is almost purely good, and give full permission for admins to edit such things in if I'm being too lazy to do it myself.

Replies from: habryka4, Raemon
comment by habryka (habryka4) · 2018-01-01T01:53:16.262Z · LW(p) · GW(p)

The current plan is to allow the authors to choose between a smaller sans-serif that is optimized for skimmability, and a larger serif that is optimized for getting users into a flow of reading. Not confident about that yet though. I am hesitant about having too much variance in font-sizes on the page, and so don't really want to give authors the option to choose their own font-size from a variety of options, but having a conceptual distinction between "wiki-posts" that are optimized for skimmability and "essay-posts" that are optimized for reading things in a flow state seems good to me.

Also not sure about the UI for this yet, input is welcome. I want to keep the post-editor UI as simple as possible.

Replies from: Raemon
comment by Raemon · 2019-02-06T00:02:09.621Z · LW(p) · GW(p)

FYI it's been a year and I still think this is pretty important

comment by Raemon · 2018-01-01T01:25:29.871Z · LW(p) · GW(p)

Hmm. Here's the above post with italics instead, for comparison:

...

Musings on ideal formatting of posts (prompted by argument with Ben Pace)

My thoughts:

1) Working memory is important.

If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.

2) Less Wrong is for thinking

This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them.

3) You can expand working memory with visual reference

Having larger monitors or notebooks to jot down thoughts makes it easier to think.

The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain).

But regardless of font-size:

4) Optimizing a post for re-skimmability makes it easier to refer to.

This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Sunset at Noon for an example)

Replies from: Raemon
comment by Raemon · 2018-01-01T01:30:29.465Z · LW(p) · GW(p)

I think it works reasonably for the bulleted-number-titles. I don't personally find it working as well for interior-paragraph things.

Using the bold makes the document function essentially as it's own outline, whereas italics feels insufficient for that - when I'm actually in skimming/hold-in-working-memory mode, I really want something optimized for that.

The solution might just be to provide actual outlines after-the-fact.

Part of what I liked with my use of bold and headers was that it'd be fairly easy to build a tool that auto-constructs an outline.
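
(As a rough sketch of what I mean, assuming posts were available as Markdown; the function name and regex details are purely illustrative:)

```python
import re

def auto_outline(markdown_text: str) -> list[str]:
    """Sketch of a tool that builds an outline from a post's headers and
    fully-bolded lines. A real version would need HTML parsing and more
    edge-case handling."""
    outline = []
    for line in markdown_text.splitlines():
        stripped = line.strip()
        header = re.match(r"^(#{1,6})\s+(.*)", stripped)
        if header:
            depth = len(header.group(1))
            outline.append("  " * (depth - 1) + header.group(2))
        elif (len(stripped) > 4 and stripped.startswith("**")
              and stripped.endswith("**") and "**" not in stripped[2:-2]):
            # a line that is entirely bold, e.g. "**Working memory is important**"
            outline.append("  " + stripped[2:-2])
    return outline
```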

Replies from: gjm
comment by gjm · 2018-01-19T00:09:37.077Z · LW(p) · GW(p)

For what it's worth, my feeling is pretty much the opposite. I'm happy with boldface (and hence feel no need to switch to italics) for structural signposts like headings, but boldface is too prominent, relative to ordinary text, to use for emphasis mid-paragraph unless we actively want readers to read only the boldface text and ignore everything else.

I would probably not feel this way if the boldface text were less outrageously heavy relative to the body text. (At least for me, in the browser I'm using now, on the monitor I'm using now, where the contrast is really extreme.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-01-19T01:39:40.527Z · LW(p) · GW(p)

Some comparisons and analysis:

(1) Using bold for emphasis

[Image: bold-for-emphasis compared at a small font size (left) and a larger font size (right)]

When the font size is small, and the ‘bold’ text has a much heavier weight than the regular text (left-hand version), the eye is drawn to the bold text. This is both because (a) reading the regular text is effortful (due to the small size) and the bold stands out and thus requires greatly reduced effort, and (b) because of the great contrast between the two weights.

But when the font size is larger, and the ‘bold’ text is not so much heavier in weight than the regular text (right-hand version), then the eye does not slide off the regular text, though the emphasized lines retains emphasis. This means that emphasis via bolding does not seriously impact whether a reader will read the full text.

(2) Using italics for emphasis

[Image: italics-for-emphasis comparison (Charter shown in the right-hand part)]

Not much to say here, except that how different the italic variant of a font is from the roman variant is critical to how well italicizing works for the purpose of emphasis. It tends to be the case that sans-serif fonts (such as Freight Sans Pro, the font currently used for comments and UI elements on LW) have less distinctive italic variants than serif fonts (such as Charter, the font used in the right-hand part of the image above)—though there are some sans-serif fonts which are exceptions.

(3) Skimmability

[Image: example post shown with a table of contents]

Appropriate typography is one way to increase a post’s navigability/skimmability. A table of contents (perhaps an auto-generated one—see image) is another. (Note that the example post in this image has its own table of contents at the beginning, provided by Raemon, though few other posts do.)

(4) Bold vs. italic for emphasis

[Image: bold vs. italic for emphasis, in Warnock Pro (left) and Charter (right)]

This is a perfect case study of points (1) and (2) above. Warnock Pro (the font you see in the left-hand part of the image above) has a very distinctive italic variant; it’s hard to miss, and works very well for emphasis. Charter (the font you see in the right-hand part of the image) has a somewhat less distinctive italic variant (though still more distinctive than the italic variants of most sans-serif fonts).

Meanwhile, the weight of Warnock Pro used for ‘bold’ text on the left is fairly heavy compared to the regular text weight. That makes the bolding work very well for emphasis, but can also generate the “people only read the bold text” effect. On the other hand, the bold weight of Charter is distinctive, but not distractingly so.

Finally, as in point (1), the larger the font size, the less distracting bold type is.

Replies from: SaidAchmiz, Raemon
comment by Said Achmiz (SaidAchmiz) · 2018-01-19T06:07:37.996Z · LW(p) · GW(p)

Here, for reference, is a brief list of reasonably readable sans-serif fonts with not-too-heavy boldface and a fairly distinctive italic variant (so as to be suitable for use as a comments text font, in accordance with the desiderata suggested in my previous comment):

(Fonts marked with an asterisk are those I personally am partial to.)

Edit: Added links to screenshots.

comment by Raemon · 2018-01-20T23:50:39.941Z · LW(p) · GW(p)

One thing that's worth noting here is there's an actual difference of preference between me and (apparently a few, perhaps most) others.

When I use bold, I'm specifically optimizing for skimmability because I think it's important to reference a lot of concepts at once, and I'm not that worried about people reading every word. (I take on the responsibility of making sure that the parts that are most important not to miss are bolded, and the non-bold stuff is providing clarity and details for people who want them)

So, for my purposes I actually prefer bold that stands out well enough that my eyes easily can see it at a glance.

comment by Raemon · 2024-05-11T18:34:54.656Z · LW(p) · GW(p)

New concept for my "qualia-first calibration" app idea that I just crystallized. The following are all the same "type":

1. "this feels 10% likely"

2. "this feels 90% likely"

3. "this feels exciting!"

4. "this feels confusing :("

5. "this is coding related"

6. "this is gaming related"

All of them are a thing you can track: "when I observe this, my predictions turn out to come true N% of the time".

Numerical-probabilities are merely a special case (though they still get additional tooling, since it's easier to visualize graphs and calculate Brier scores for them).

And then a major goal of the app is to come up with good UI to help you visualize and compare results for the "non-numeric-qualia".

Depending on circumstances, "this feels confusing" might matter way more to your prior than "this feels 90% likely". (I'm guessing there is some actual conceptual/mathy work that would need doing to build the mature version of this)
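
(A minimal sketch of the data model, just to make this concrete; all the names here are made up, and this is only one way to slice it:)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    tags: list[str]                   # e.g. ["feels 90% likely", "feels confusing :(", "coding related"]
    came_true: Optional[bool] = None  # filled in once the prediction resolves

def hit_rate_by_tag(predictions: list[Prediction]) -> dict[str, float]:
    """For each qualia-tag, what fraction of resolved predictions came true?"""
    outcomes: dict[str, list[bool]] = {}
    for p in predictions:
        if p.came_true is None:
            continue
        for tag in p.tags:
            outcomes.setdefault(tag, []).append(p.came_true)
    return {tag: sum(results) / len(results) for tag, results in outcomes.items()}
```

Numeric tags like "feels 90% likely" would additionally get Brier scores and calibration graphs; for the other tags, the per-tag hit rate is basically the whole output.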

comment by Raemon · 2024-05-09T20:28:02.347Z · LW(p) · GW(p)

"Can we build a better Public Doublecrux?"

Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format.

Public Doublecrux is a more truthseeking oriented version of Public Debate. (The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa")

Reasons to want to do _public_ doublecrux include:

  • it helps showcase subtle mental moves that are hard to write down explicitly (i.e. tacit knowledge transfer)
  • there's still something good and exciting about seeing high profile smart people talk about ideas. Having some variant of that format seems good for LessOnline. And having at least 1-2 "doublecruxes" rather than "debates" or "panels" or "interviews" seems good for culture setting.

Historically I think public doublecruxes have had some problems:

  • two people actually changing *their* minds tend to get into idiosyncratic frames that are hard for observers to understand. You're chasing *your* cruxes, rather than presenting "generally compelling arguments." This tends to get into weeds and go down rabbit holes
  • having the audience there makes it a bit more awkward and performative.

...

...

With that in mind, here are some ideas:

  • Maybe have the double cruxers in a private room, with videocameras. The talk is broadcast live to other conference-goers, but the actual chat is in a nice cozy room.
  • Have _two_ (or three?) dedicated facilitators. One is in the room with the doublecruxers, focused on helping them steer towards useful questions. (this has been tried before and seems to go well if the facilitator prepares). The SECOND (and maybe third) facilitator hangs out with the audience outside, and is focused on tracking "what is the audience confused about?". The audience participates in a live google doc where they're organizing the conversational threads and asking questions.

    (the first facilitator is periodically surreptitiously checking the google doc or chat and sometimes asking the Doublecruxers questions about it)
  • it's possibly worth investing in developing a doublecrux process that's explicitly optimized for public consumption. This might be as simple as having the facilitator periodically ask participants to recap the open threads, what the goal of the current rabbit hole is, etc. But, like, brainstorming and doing "user tests" of it might be worthwhile.

...

Anyway those are some thoughts for now. Curious if anyone's got takes.

Replies from: keltan
comment by keltan · 2024-05-10T07:44:13.929Z · LW(p) · GW(p)

Ramble dot points of thoughts I had around this.

  1. I like this idea

  2. When I listen to very high power or smart people debate, what I’m looking for is to absorb their knowledge.

    1. Tacit and semantic.
  3. Instead, as the debate heats up, I feel myself being drawn into one of the sides.

    1. I spend more time thinking about my bias than the points being made.
    2. I’m not sure what I’m picking up from heated debate is as valuable as it could be.
  4. If the interlocutors are not already close friends, perhaps having them complete a quick bonding exercise to gain trust?

    1. I imagine playing on the same team in a video game or solving a physical problem together.
    2. Really let them settle into a vibe of being friends. Let them understand what it feels like to work with this new person toward a common goal.
comment by Raemon · 2024-03-06T23:53:42.986Z · LW(p) · GW(p)

Two interesting observations from this week, while interviewing people about their metacognitive practices.

  • @Garrett Baker [LW · GW] said that he had practiced memorizing theorems for linear algebra a while back, and he thinks this had the (side?) effect of creating a skill of "memorizing stuff quickly", which then turned into some kind of "working memory management" tool. It sounded something like "He could quickly memorize things and chunk them, and then he could do that on-the-fly while reading math textbooks".
     
  • @RobinGoins [LW · GW] had an experience of not being initially able to hold all their possible plans/goals/other in working memory, but then did a bunch of Gendlin Focusing on them, and then had an easier time holding them all. It sounds like the Gendlin Focusing was playing a similar role to the "fast memorization" thing, of "finding a [nonverbal] focusing handle for a complex thing", where the focusing handle was able to efficiently unpack into the full richness of the thing they were trying to think about.

Both of these are interesting because they hint at a skill of "rapid memorization => improved working memory". 

@gwern [LW · GW] has previously written about Dual N Back not actually working that well at improving IQ. It seems like history is littered with corpses of people trying to improve IQ or g, so I'm not too optimistic here. My current assumption/guess is that the Dual N Back stuff trained a particular skill that turned out not to transfer to other domains. 

But, like, even if "rapidly memorize math proofs" didn't generalize to anything other than memorizing math proofs, it feels plausible to me that this could at least help with situations where that particular skill is useful, and might be worth it even without domain transfer.

And I could imagine that there's something of a skill of "learn to rapidly chunk content in a given domain", which doesn't automatically translate to other domains, but which makes it easier to learn to chunk new types of domains, similar to how learning one language doesn't let you speak all languages but makes it easier to learn new ones.

comment by Raemon · 2022-11-28T00:46:43.167Z · LW(p) · GW(p)

I think a bunch of discussion of acausal trade might be better framed as "simulation trade." It's hard to point to "acausal" trade in the real world because, well, everything is at least kinda iterated and at least kinda causally connected. But, there's plenty of places where the thing you're doing is mainly trading with a simulated partner. And this still shares some important components with literal-galaxy-brains making literal acausal trade.

Replies from: Dagon
comment by Dagon · 2022-11-28T02:14:56.098Z · LW(p) · GW(p)

I’d love to see a worked example. The cases I come up with are all practice for or demonstrations of feasibility for casual normal trade/interactions.

Replies from: Gunnar_Zarncke, Raemon
comment by Gunnar_Zarncke · 2022-11-28T12:25:05.154Z · LW(p) · GW(p)

I think I know at least some of the examples you refer to. I think the causality in these cases is a shared past of the agents making the trade. But I'm not sure that breaks the argument in cases where the agents involved are not aware of that, for example (but not limited to) having forgotten about it or intentionally removed the memory.

Replies from: Dagon
comment by Dagon · 2022-11-28T18:56:50.137Z · LW(p) · GW(p)

There is convoluted-causality in a lot of trust relationships.  "I trust this transaction because most people are honest in this situation", which works BECAUSE most people are, in fact, honest in that situation.  And being honest does (slightly) reinforce that for future transactions, including transactions between strangers which get easier only to the degree they're similar to you.

But, while complex and involving human social norms and "prediction", it's not comparable to Newcomb (one-shot, high-stakes, no side-effects) or acausal trade (zero-shot, no path to specific knowledge of outcome).

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-11-29T11:28:11.342Z · LW(p) · GW(p)

In which way is sharing some common social knowledge relevantly different from sharing the same physical universe?

Replies from: Dagon
comment by Dagon · 2022-11-29T15:00:34.589Z · LW(p) · GW(p)

Common social knowledge has predictive power and causal pathways to update the knowledge (and others' knowledge of the social averages which contain you).  Acausal trade isn't even sharing the same physical universe  - it's pure theory, with no way to adjust over time.

comment by Raemon · 2022-11-28T02:58:02.316Z · LW(p) · GW(p)

"Casual norm trade/interactions" does seem like most of the obvious example-space. The generator for this thought comes from chatting with Andrew Critch. See this post for some reference: http://acritch.com/deserving-trust/ 

Replies from: Dagon
comment by Dagon · 2022-11-28T05:17:38.580Z · LW(p) · GW(p)

Typo: s/casual/causal/ - these seem to be diffuse reputation cases, where one recognizes that signaling is leaky, and it’s more effective to be trustworthy than to only appear trustworthy. Not for subtle Newcombe or acausal reasons, but for highly evolved betrayal detection mechanisms.

comment by Raemon · 2018-02-09T18:55:08.633Z · LW(p) · GW(p)

So, AFAICT, rational!Animorphs is the closest thing CFAR has to publicly available documentation. (The characters do a lot of focusing, hypothesis generation-and-pruning. Also, I just got to the Circling Chapter)

I don't think I'd have noticed most of it if I wasn't already familiar with the CFAR material though, so not sure how helpful it is. If someone has an annotated "this chapter includes decent examples of Technique/Skill X, and examples of characters notably failing at Failure Mode Y", that might be handy.

comment by Raemon · 2019-12-07T01:23:29.489Z · LW(p) · GW(p)

In response to lifelonglearner's comment I did some experimenting with making the page a bit bolder. Curious what people think of this screenshot where "unread" posts are bold, and "read" posts are "regular" (as opposed to the current world, where "unread" posts "regular", and read posts are light-gray).

Replies from: RobbBB, Raemon, Raemon
comment by Rob Bensinger (RobbBB) · 2019-12-07T01:30:07.655Z · LW(p) · GW(p)

I'd be interested in trying it out. At a glance, it feels too much to me like it's trying to get me to read Everything, when I can tell from the titles and snippets that some posts aren't for me. If anything the posts I've already read are often ones I want emphasized more? (Because I'm curious to see if there are new comments on things I've already read, or I may otherwise want to revisit the post to link others to it, or finish reading it, etc.)

The bold font does look aesthetically fine and breaks things up in an interesting way, so I like the idea of maybe using it for more stuff?

comment by Raemon · 2019-12-07T01:50:31.645Z · LW(p) · GW(p)

Alternate version where only the title and karma are bolded:

Replies from: Evan Rysdam, adam_scholl, RobbBB
comment by Sunny from QAD (Evan Rysdam) · 2019-12-07T03:54:58.971Z · LW(p) · GW(p)

I think I prefer the status quo design, but not very strongly. Between the two designs pictured here, I at first preferred the one where the authors weren't bolded, but now I think I prefer the one where the whole line is bolded, since "[insert author whose posts I enjoy] has posted something" is as newsworthy as "there's a post called [title I find enticing]".

Something I've noticed about myself is that I tend to underestimate how much I can get used to things, so I might end up just as happy with whichever design is chosen.

comment by Adam Scholl (adam_scholl) · 2019-12-07T06:12:13.527Z · LW(p) · GW(p)

Fwiw, for reasons I can't explain I vastly prefer just the title bolded to the entire line bolded, and significantly prefer the status quo to title bolded.

comment by Rob Bensinger (RobbBB) · 2019-12-07T02:25:16.405Z · LW(p) · GW(p)

I think I prefer bolding full lines b/c it makes it easier to see who authored what?

Replies from: Raemon
comment by Raemon · 2019-12-07T02:35:23.101Z · LW(p) · GW(p)

I initially wanted "bold everywhere" because it helped my brain reliably parse things as "this is a bold line" instead of "this is a line with some bold parts but you have to hunt for them". But, after experimenting a bit I started to feeling having bold elements semi-randomly distributed across the lines made it a lot busier.

comment by Raemon · 2019-12-10T22:54:49.573Z · LW(p) · GW(p)

The LW team has been trying out the "bolded unread posts" for a few days as an admin-only setting. I think pretty much no one is liking it.

But I personally am liking the fact that most posts aren't grey, and I'm finding myself wondering whether it's even that important to highlight unread posts. Obviously there's some value to it, but:

a) a post being read isn't actually that much evidence about whether I want to read it again – I find myself clicking on old posts about as often as new posts. (This might be something you could concretely look into with analytics)

b) if I don't want to read a post, marking it as read is sort of annoying

c) I still really dislike having most of my posts be grey

d) it's really hard to make an "unread" variant that doesn't scream out for disproportionate attention.

(I suppose there's also an option for this to be a user-configurable setting, since most users don't read so many posts that they all show up grey, and the few who do could maybe just manually turn it off)

comment by Raemon · 2018-04-18T20:47:20.253Z · LW(p) · GW(p)

Issues with Upvoting/Downvoting

We've talked in the past about making it so that if you have Karma Power 6, you can choose whether to give someone anywhere from 1-6 karma.

Upvoting

I think this is an okay solution, but I also think all meaningful upvotes basically cluster into two choices:

A. "I think this person just did a good thing I want to positively reinforce"

B. "I think this person did a thing important enough that everyone should pay attention to it."

For A, I don't think it obviously matters that you award more than 1 karma, and definitely never more than 3 karma. The karma should be mostly symbolic. For B, I'd almost always want to award them maximum karma. The choice of "well, do they really deserve 1, 2 or 3 karma for their pat-on-the-head?" doesn't seem like a choice we should be forcing people to make.

The value in giving 1, 2 or 3 karma for a "small social reinforcement" is mostly about communicating "Social rewards from longtime trusted community members should feel better to get than social rewards from random newbies." I'm not sure how strong a signal this is.

For "Pay Attention To This" upvotes, similarly, if you have 6 karma power, I don't think it's that interesting a choice to assign 4, 5 or 6.

And, you know, Choices Are Bad [LW · GW].

So, I currently support a paradigm where you just have Big Upvote and Small Upvote. I'm neutral between "small upvote is always 1" and "small upvote grows from 1 to 3 as you gain karma".

This feels elegant. The problem is downvoting.

Downvoting

When downvoting, there's a few different things I might be wanting to do (note: I don't endorse all of these, this is just what my S1 is wanting to do).

A. This person made a small mistake, and should be mildly socially punished

B. This person was deeply wrong and should be heavily punished

C. This post is getting too much attention relative to how good it is. It's at 25 karma. I want to try to bring to around 15 or something.

D. This content should not be on the site (for any one of a number of reasons), should not show up on the frontpage (meaning the karma should be at most zero) or the comment should be autocollapsed (karma should be -5)

When a newcomer shows up and does something I don't like, my natural instinct is to try to keep their comment at 0 (which feels like the right level of "your thing was bad", but in a way that feels more like an awkward silence than a slap in the face). I definitely need to be able to downvote by less than 6. The problem is that as a user gains karma power, the amount I need to downvote just scales linearly.

This is all incompatible with the simple "Big Vote, Small Vote" paradigm. Which feels sad from an elegance/symmetry perspective.

So, that's a thing I'm thinking about.
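
(For concreteness, here's a rough sketch of the vote-weight logic being kicked around above; the specific numbers are just the options I mentioned, not anything implemented:)

```python
def vote_weight(karma_power: int, kind: str) -> int:
    """Toy version of the two-tier voting scheme discussed above."""
    if kind == "small_upvote":
        # "small upvote grows from 1 to 3 as you gain karma"
        return min(3, max(1, karma_power // 2))
    if kind == "big_upvote":
        return karma_power
    if kind == "small_downvote":
        # the awkward case: I usually want roughly -1 regardless of karma power
        return -1
    if kind == "big_downvote":
        return -karma_power
    raise ValueError(f"unknown vote kind: {kind}")
```

The downvote branches are where the elegance breaks down, as described above: the amount I want to downvote by doesn't scale with karma power the way upvotes do.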

Replies from: Wei_Dai, gwillen, Elo
comment by Wei Dai (Wei_Dai) · 2018-04-19T03:24:25.925Z · LW(p) · GW(p)

There's another issue with voting, which is that I sometimes find a comment or post on the LW1 part of the site that I want to vote up or down, but I can't because my 5 points of karma power would totally mess up the score of that comment/post in relation to its neighbors. I haven't mentioned this before because I thought you might already have a plan to address that problem, or at worst I can wait until the variable upvote/downvote feature comes in. But if you didn't have a specific plan for that and adopted "small upvote grows from 1 to 3 as you gain karma" then the problem wouldn't get solved.

Also, is there an issue tracker for LW2? I wanted to check it to see if there's an existing plan to address the above problem, but couldn't find it through Google, from the About page, or by typing in "issue tracker" in the top right search box. There's the old issue tracker at https://github.com/tricycle/lesswrong/issues but it doesn't look like that's being used anymore?

ETA: I found the issue tracker at https://github.com/Discordius/Lesswrong2/issues by randomly coming across a comment that linked to it. I'm still not sure how someone is supposed to find it.

comment by gwillen · 2018-04-19T04:28:42.821Z · LW(p) · GW(p)

I liked the idea I think you mentioned in an earlier thread about this, where each click increases vote weight by one. It's conceptually very simple, which I think is a good property for a UI. It does involve more clicks to apply more voting power, but that doesn't seem bad to me. How often does one need to give something the maximum amount of votes, such that extra clicks are a problem? It seems to me this would tend to default to giving everyone the same voting power, but allow users with more karma to summon more voting power with very slightly more effort if they think it's warranted. That feels right to me.

Replies from: TheWakalix, gwillen
comment by TheWakalix · 2018-05-01T15:37:41.865Z · LW(p) · GW(p)

If this is implemented, I think there should be a dot between the two vote buttons to reset the vote to 0.

comment by gwillen · 2018-04-19T04:30:47.537Z · LW(p) · GW(p)

(A possible downside I see is that it might somehow do the opposite -- that voting will feel like something that is reinforced in a conditioning sense, so that users with more voting power will get more reinforcers since they do click->reward more times, and that this will actually give them a habit of wanting to apply the maximum vote more than they otherwise would because it feels satisfying to vote repeatedly. This isn't clearly a lot worse than the situation we have now, where you always vote maximum with no option.)

comment by Elo · 2018-04-18T21:19:19.510Z · LW(p) · GW(p)

How do I "small up vote" for "keep thinking about it".

Replies from: Raemon
comment by Raemon · 2018-04-18T21:56:10.362Z · LW(p) · GW(p)

For now, I guess just do the thing you just did? :)

Replies from: Raemon
comment by Raemon · 2018-04-18T21:57:27.397Z · LW(p) · GW(p)

(that said I'd be interested in an unpacked version of your comment, sounded like the subtext was something like "this line of thinking is pointing somewhere useful but it doesn't seem like you're done thinking about it". If that's not the case, curious what you meant. If it is the case, curious about more detailed concerns about what would make for good or bad implementations of this)

Replies from: Elo
comment by Elo · 2018-04-19T06:25:18.450Z · LW(p) · GW(p)

It is clear that more thought is needed for a satisfactory answer here and I would encourage you to keep seeking a satisfactory solution.

comment by Raemon · 2018-01-20T23:49:22.151Z · LW(p) · GW(p)

I think learning-to-get-help is an important, often underdeveloped skill. You have to figure out what *can* be delegated. In many cases you may need to refactor your project such that it's in-principle possible to have people help you.

Some people I know have tried consciously developing it by taking turns being a helper/manager. i.e. spend a full day trying to get as much use out of another person as you can. (i.e. on Saturday, one person is the helper. The manager does the best they can to ask the helper for help... in ways that will actually help. On Sunday, they reverse)

The goal is not just to get stuff done for a weekend, but to learn how ask for help, to help, to be helped.

(Some people I know did this for a full day, others did it for an hour. The people who did it for an hour said it didn't quite feel that useful. A person who did it for a full day said that an hour was nowhere near enough time to make it through the initial learning curve of "I don't even know what sort of things are useful to ask for help with.")

So, this is a thing I'm interested in trying.

I think it requires some existing trust and being able to work side-by-side, so I'm mostly extending a request/offer to do this for a weekend with people who already know me and live near me, but am curious if other people try it and get benefit out of it.

comment by Raemon · 2020-07-30T18:03:40.280Z · LW(p) · GW(p)

With some frequency, LW gets a new user writing a post that's sort of... in the middle of having their mind blown by the prospect of quantum immortality and MWI. I'd like to have a single post to link them to that makes a fairly succinct case for "it adds up to normality", and I don't have a clear sense of what to do other than link to the entire Quantum Physics sequence.

Any suggestions? Or, anyone feel like writing said post if it doesn't exist yet?

comment by Raemon · 2019-08-15T02:24:07.940Z · LW(p) · GW(p)

Draft/WIP: The Working Memory Hypothesis re: Intellectual Progress

Strong claim, medium felt

So I'm working with the hypothesis that working memory (or something related) is a major bottleneck on progress within a given field. This has implications on what sort of things fields need.

Basic idea is that you generally need to create new concepts out of existing sub-concepts. You can only create a concept if you can hold the requisite sub-concepts in your head at once. The default working memory limit is 4-7 chunks. You can expand that somewhat by writing things down on whiteboards. I'm not sure exactly what the practical limits are.

But there is some upper limit on how complex a concept people can work with, even the smartest people in the field. So there are some ideas you just can't invent, if they require 30 moving parts to consider at once. If an idea has 30 moving parts, it's necessary to find some way to conceptualize it as having fewer parts.

Fortunately, this is possible. When I first learn a new concept, it's unfamiliar, and I have to track all of its individual pieces separately – if it has 5 sub-concepts it takes 5 chunks of working memory.

But eventually I learn to think of it as a single concept, and then it only takes 1.

So, an important aspect of a field is distillation – improving explanations to help people think of clusters-of-ideas as a single concept they can reason about as if it took up only one chunk.
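
(A toy model of this, with made-up numbers, just to show the shape of the claim:)

```python
WORKING_MEMORY_LIMIT = 7  # rough upper end of the 4-7 chunk range

def chunk_cost(idea: str, parts: dict[str, list[str]], familiar: set[str]) -> int:
    """Chunks needed to hold `idea` in mind: familiar ideas (and primitives) cost 1,
    unfamiliar composites cost the sum of their sub-concepts. Toy model only."""
    if idea in familiar or idea not in parts:
        return 1
    return sum(chunk_cost(sub, parts, familiar) for sub in parts[idea])

def thinkable(idea: str, parts: dict[str, list[str]], familiar: set[str]) -> bool:
    return chunk_cost(idea, parts, familiar) <= WORKING_MEMORY_LIMIT

# A 30-part idea is out of reach until its sub-clusters have been distilled into chunks:
parts = {
    "big_idea": ["cluster_a", "cluster_b"],
    "cluster_a": [f"a{i}" for i in range(15)],
    "cluster_b": [f"b{i}" for i in range(15)],
}
print(thinkable("big_idea", parts, familiar=set()))                       # False: 30 chunks
print(thinkable("big_idea", parts, familiar={"cluster_a", "cluster_b"}))  # True: 2 chunks
```

Distillation, in this picture, is whatever moves "cluster_a" and "cluster_b" into the familiar set.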

Distillation is a sort of tedious process though. So part of why Bell Labs is a great idea is that you can have all these specialists of different fields working in the same building, and whenever one of them wants to learn something they can just walk down the hall and ask the guy who invented it "hey, how does this work", and they get an explanation. And if they don't understand it fully, they can immediately, frictionlessly, ask followup questions.

This doesn't just help them understand new ideas – it reduces those ideas in conceptual complexity until they only take up a single chunk, enabling them to be combined with other ideas.

This has implications for the AI Alignment field – investing in distillation and co-location are both potentially quite valuable. (But, correspondingly – watch out for things that improve the distillation of ideas that feed into AI capabilities)

Replies from: pktechgirl, Viliam
comment by Elizabeth (pktechgirl) · 2019-08-15T19:14:56.769Z · LW(p) · GW(p)

This seems highly related to Chris Olah's Research Debt.

Replies from: habryka4
comment by habryka (habryka4) · 2019-08-15T21:00:34.971Z · LW(p) · GW(p)

(That was indeed the piece that crystallized this intuition for me, and I think Ray got this broader concept from me)

Replies from: Raemon
comment by Raemon · 2019-08-15T21:27:59.241Z · LW(p) · GW(p)

Yuppers. Yeah, the idea I'm trying to get at here could be conceptualized as "take the underlying generator that outputs Research Debt, and then lean hard into using it as an explanatory theory, and see what other hypotheses turn up when you take that seriously."

(I'd already read Research Debt too at the time Oli first explained this concept to me. I think Oli's additional contribution was thinking in terms of chunks being a limiting factor; he didn't specify working memory precisely as the constraint. I later thought about the intersection with working-memory-in-particular after writing You Have About Five Words [LW · GW], and later about some implications in this comment here [LW(p) · GW(p)])

Oli had left the number of chunks available deliberately vague, and I'm now concretely predicting that people can only build theories/systems that don't require them to hold more than 4-10* chunks at once.

*where "10" is an ass-pulled number for "how much your working memory can really be improved via writing things done."

[I don't know if Oli thinks working-memory-in-particular makes sense to think of as the bottleneck]

comment by Viliam · 2019-08-15T21:37:47.544Z · LW(p) · GW(p)

After learning a new concept, it is important to "play with it" for a while. The new concept is initially not associated with anything, so you probably will not see what it is good for.

For example, if someone tells you "a prime number is an integer greater than one that can only be divided by itself and by one", that is easy to understand (even easier if they also give you a few examples of primes and non-primes), but it is not obvious why this concept is important and how it could be used.

But when the person also tells you "the number of primes is infinite... each integer can be uniquely factored into primes... some numbers are obviously not primes, but we don't know a simple method to find out whether a large number is a prime... in arithmetic modulo n you can define addition, subtraction, and multiplication for any n, but you can unambiguously define division only when n is prime..." and perhaps introduces a concept of "relative primes" and the Chinese remainder theorem... then you may start getting ideas of how it could be useful, such as "so, if we take two primes so big that we can barely verify their primeness, and multiply them, it will be almost impossible to factor the result, but it would be trivial to verify when the original two numbers are provided -- I wonder whether we could use this as a form of signature."
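A minimal sketch of the asymmetry driving that last idea: multiplying two primes and verifying a given factorization are cheap, while recovering the factors by brute force is slow, and the gap explodes as the primes grow. The helper functions and the specific primes here are illustrative choices, not anything from the comment above:

```python
# Minimal illustrative sketch: two small primes chosen for this example.
# Real cryptographic primes are hundreds of digits long, which makes the gap decisive.

def is_prime(n: int) -> bool:
    """Trial division; fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_factor(n: int) -> int:
    """Brute-force search for the smallest prime factor of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

p, q = 10007, 10009                 # checking each for primality is quick
assert is_prime(p) and is_prime(q)

n = p * q                           # multiplying them is instant: 100160063
assert n % p == 0 and n // p == q   # verifying a *given* factorization is one division

# Recovering p and q from n alone already takes ~10,000 trial divisions here,
# and the cost grows rapidly with larger primes, while verification stays cheap.
print(smallest_factor(n))           # -> 10007, found only after the slow search
```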

comment by Raemon · 2019-07-27T20:48:53.247Z · LW(p) · GW(p)

How (or under what circumstances), can people talk openly about their respective development stages?

A lot of mr-hire's recent posts [LW(p) · GW(p)] (and my own observations and goals) have updated me on the value of having an explicit model of development stages. Kegan levels are one such frame. I have a somewhat separate frame of "which people I consider 'grown up'" (i.e. what sort of things they take responsibility for and how much that matters)

Previously, my take had been "hmm, it seems like people totally do go through development stages, which do typically come in particular order and later development stages are better than early ones. But all the conversations I've seen where anyone brought up development stages seem like terrible conversations."

Basically, the social move of "you seem like you're at a lower development level than me" is often legitimately read as a status attack, and worse, a plausibly-deniable status attack*.

But, also, it's sometimes an important part of the conversation.

(*In particular, when people are within one level of each other, and it's ambiguous. When people are two levels apart from each other, it's usually more obvious to both of them and to third parties which is more cognitively sophisticated)

My recent updates, clarified by mr-hire, were that if you're building an organization (and, perhaps, a community), you need some way to actually account for different people being at different development stages. This requires actually having a model of what's going on. But even the construction of the model can be legitimately read as a plausibly deniable status attack – as the model gets fleshed out it's going to become increasingly clear who it puts into positions of power.

For discussions between individuals about who is "more cognitively sophisticated", my current best guess is that you can actually have this conversation reasonably easily in private (where by "reasonably easily", I mean it maybe takes several hours of building trust and laying groundwork, but there's nothing mysterious about it)

For discussions about how to build a developmental ontology in a mixed environment with no hierarchy and where people start off with very different frames on how to think, I feel pretty confused and/or worried.

Replies from: Linda Linsefors, Dagon
comment by Linda Linsefors · 2019-07-28T07:30:35.780Z · LW(p) · GW(p)
For discussions between individuals about who is "more cognitively sophisticated", my current best guess is that you can actually have this conversation reasonably easily in private (where by "reasonably easily", I mean it maybe takes several hours of building trust and laying groundwork, but there's nothing mysterious about it)

I can confirm this (anecdotally).

comment by Dagon · 2019-08-05T16:10:40.549Z · LW(p) · GW(p)
How (or under what circumstances), can people talk openly about their respective development stages?

Talking about one's own is easy. Talking about someone else's is, as you note, fraught. I'd like to focus on the "how can such conversations be effective" and "what do we want from such conversations" part of the issue.

I think a lot of harm is done by framing it as a linear set of stages rather than as a mesh of abstractions, and by not recognizing that object-level results are ALWAYS relevant and that the stages are mostly ways to take more factors into account in the models and beliefs that lead to results.

When it's a stage-based system, it implies such an overt status signal that it's hard to actually discuss anything else. People of higher levels can't learn anything from those lower, and lower levels just have to accept whatever the higher-level says. This is not useful for anything.

Basically, the social move of "you seem like you're at a lower development level than me" is often legitimately read as a status attack, and worse, a plausibly-deniable status attack*.

Go further. Phrased this way, it _IS_ a status attack. There's no possible useful further discussion. This is not plausibly-deniable, it's just plain asserting "I'm thinking deeper, so I'm right".

If you phrase it not about the participants, but about the discussion, "consider this higher-level abstraction - does it not seem relevant to the point at hand?", then you've got a hook to talk about it. You don't need to bring up cognitive stages or categorize the participants, you only need to make clear what levels THIS discussion is about.

There _MAY_ be a place for talking directly about what levels someone can operate at, for elitists discussing or reinforcing a membership filter. "Don't hire a CEO who can't handle level-5 thinking" is good advice. And in such cases, it's STILL entangled with status games, as the strong implication is that if you're not on that level, you're not part of the group.

Replies from: Raemon
comment by Raemon · 2019-08-05T19:28:19.425Z · LW(p) · GW(p)
Go further. Phrased this way, it _IS_ a status attack.

To be clear, I don't ever think anyone should phrase it that way (and I think usually people don't). But it's still just not hard to interpret through that lens even if you're moderately careful in phrasing.

I think a lot of harm is done by framing it as a linear set of stages, rather than a mesh of abstractions, and recognizing that object-level results are ALWAYS relevant, and the stages are mostly ways to take more factors into account for the models and beliefs that lead to results.

Yeah, I basically agree with this.

My guess is to frame things in terms of skills to learn or particular attributes to acquire.

Replies from: Dagon
comment by Dagon · 2019-08-05T20:49:14.770Z · LW(p) · GW(p)
My guess is to frame things in terms of skills to learn or particular attributes to acquire.

IMO, even this is too status-ey and centered on attributes of the person rather than crux-ey and centered on the discussion you want to have.

Frame things in terms of models of thinking and level of abstraction/generalization to apply here and now. There may be skills to learn (or even attributes that can't be acquired, making the conversation at that level impossible) in order to get there, but start with what you want to understand/communicate, not with an assumption of capability (or lack thereof).

Doing this is also a reminder that sometimes washing the dishes is just the fastest way to empty the sink - generalizing to some idealized division of labor and social reward scheme doesn't have to happen every time. It often works better to generalize when there's not an object-level decision to be made (but beware failing to tie it back to reality at all, or you'll ignore important details).

comment by Raemon · 2019-07-21T19:38:10.246Z · LW(p) · GW(p)

I am very confused about how to think (and feel!) about willpower, and about feelings of safety.

My impression from overviews of the literature is something like "The depletion model of willpower is real if you believe it's real. But also it's at least somewhat real even if you don't?"

Like, doing cognitive work costs resources. That seems like it should just be true. But your stance towards your cognitive work affects what sort of work you are doing.

Similarly, I have a sense that physiological responses to potentially threatening situations are real. People who feel defensive have a harder time thinking in truthseeking mode rather than "keep myself safe" mode. But, it also seems plausibly-true that if you naively reinforce feelings of defensiveness they get stronger. i.e. if you make saying "I'm feeling defensive" a get out of jail free card, people will use it, intentionally or no. (There's a weird video about parents tricking babies into thinking they've been hit on the head when they haven't, and the babies start crying as if they're hurt. I have no idea if this is just a cute selection effect, but it sort of illustrates the point.)

There's a practical question re: "what sort of norms or advice do you want to encourage about how people deal with defensiveness, and with willpower?". But then there's also just a "what the hell epistemic state do you actually want to have?" as well as "how do you communicate about this?"

If you're a manager at a company, maybe you want to have a model of willpower depletion so you can make good tradeoffs, but you don't want your employees to dwell upon it much? But this obviously collapses as soon as there's multiple management layers. And in any case it's not a sustainable equilibrium. (You can't stop people from learning about willpower. Although you can make a choice not to emphasize it)

Meanwhile, if I'm just trying to manage myself as a person with goals, what should my epistemic state on willpower be?

Replies from: pktechgirl, jason-gross, Benquo, jason-gross
comment by Elizabeth (pktechgirl) · 2019-07-21T21:27:19.421Z · LW(p) · GW(p)
People who feel defensive have a harder time thinking in truthseeking mode rather than "keep myself safe" mode. But, it also seems plausibly-true that if you naively reinforce feelings of defensiveness they get stronger. i.e. if you make saying "I'm feeling defensive" a get out of jail free card, people will use it, intentionally or no.

As someone who's been a large proponent of the "consider feelings of safety" POV, I want to loudly acknowledge that this is a thing, and it is damaging to all parties.

I don't have a good solution to this. One possibility is insisting on things that facilitate safety even if everyone is saying they're fine.

comment by Jason Gross (jason-gross) · 2019-07-22T05:11:45.254Z · LW(p) · GW(p)

People who feel defensive have a harder time thinking in truthseeking mode rather than "keep myself safe" mode. But, it also seems plausibly-true that if you naively reinforce feelings of defensiveness they get stronger. i.e. if you make saying "I'm feeling defensive" a get out of jail free card, people will use it, intentionally or no

Emotions are information. When I feel defensive, I'm defending something. The proper question, then, is "what is it that I'm defending?" Perhaps it's my sense of self-worth, or my right to exist as a person, or my status, or my self-image as a good person. The follow-up is then "is there a way to protect that and still seek the thing we're after?" "I'm feeling defensive" isn't a "'get out of jail free' card", it's an invitation to go meta before continuing on the object level. (And if people use "I'm feeling defensive" to accomplish this, that seems basically fine? "Thank you for naming your defensiveness, I'm not interested in looking at it right now and want to continue on the object level if you're willing to or else end the conversation for now" is also a perfectly valid response to defensiveness, in my world.)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-22T05:58:50.049Z · LW(p) · GW(p)

This seems exactly right to me. The main thing that annoys me is people using their feelings of defensiveness "as an argument" that I'm doing something wrong by saying the things that seem true/relevant, or that the things I'm saying are not important to engage with, instead of taking responsibility for their defensiveness. If someone can say "I feel defensive" and then do introspection on why, such that that reason can be discussed, that's very helpful. "I feel defensive and have to exit the conversation in order to reflect on this" is likely also helpful, if the reflection actually happens, especially if the conversation can continue some time after that (if it's sufficiently important). (See also feeling rational [LW · GW]; feelings are something like "true/false" based on whether the world-conditions that would make the emotion representative pertain or not.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-07-22T08:39:14.385Z · LW(p) · GW(p)

(See also feeling rational; feelings are something like “true/false” based on whether the world-conditions that would make the emotion representative pertain or not.)

But people's feelings are generally not under conscious control and (based on personal experience) some people are a lot more sensitive/emotional than others. If I want to talk with someone who might have important information or insights to offer, or just for general cooperation, and they're on the more sensitive side of the spectrum, it sure seems like I should take that into consideration and word my comments more carefully than I otherwise would, rather than tell them that their feelings are "false" or irrational (which would most likely just make them stop wanting to talk to me).

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-22T08:58:09.618Z · LW(p) · GW(p)

This seems right, and I don't think this contradicts what I said. It can simultaneously be the case that their feelings are false (in the sense that they aren't representative of the actual situation) and that telling them that their feelings are false is going to make the situation worse.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-07-22T09:06:09.935Z · LW(p) · GW(p)

But what is your general plan for dealing with (i.e., attracting and keeping) forum/community members who are on the more sensitive/emotional side of the spectrum? For example, suppose I see someone talking with a more sensitive person in an oblivious way which I think will drive the second person away from the forum/community, it seems like under your proposed norms [LW · GW] I wouldn't be allowed to point that out and ask the first person to word their comments more carefully. Is that right?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-22T16:45:24.481Z · LW(p) · GW(p)
  1. Intense truth seeking spaces aren't for everyone. Growing the forum is not a strict positive. An Archipelago-type model may be useful, but I'm not confident whether it's worth it.

  2. There are techniques (e.g. focusing, meditation) for helping people process their emotions, which can be taught.

  3. Some politeness norms are acceptable (e.g. most insults that are about people's essential characteristics are not allowed), as long as these norms are compatible with a sufficiently high level of truthseeking to reach the truth on difficult questions including ones about adversarial dynamics.

  4. Giving advice to people is fine if it doesn't derail the discussion and it's optional to them whether they follow it (e.g. in an offline discussion after the original one). "Whether it's a good idea to say X" isn't a banned topic, the concern is that it gets brought up in a conversation where X is relevant (as if it's an argument against X) in a way that derails the discussion.

Replies from: Raemon, Wei_Dai
comment by Raemon · 2019-07-22T19:27:05.931Z · LW(p) · GW(p)

One thing I don't think I've emphasized as much because I was mostly arguing against the Rock rather than the Hard Place (which are both real) is that I definitely think LessWrong should expect people to gain skills related to owning their feelings, and bringing them into alignment with reality, or things kinda in that space.

I think it mostly makes sense to develop tools that allow us to move that meta conversation into separate threads, so that the object level discussion can continue unimpeded. (We currently don't have the tools to do this seamlessly, effortlessly, and with good UI. So we do it sometimes for things like this comment thread [LW · GW] but it doesn't yet have first class support)

Partly because it doesn't yet have first class support, my preferred approach is to move such conversations private [LW · GW] (while emphasizing the need to have them in a way where each party commits to posting something publicly after the fact as a summary).

My current impression is that there was an additional level of confusion/frustration between me and Benquo when I did this for my extended critiques of the tone of Drowning Children are Rare, because my approach read (to Benquo) more as using backchannels to collude (or possibly to threaten with my moderator status in a less accountable way?), rather than as an attempt to have a more sane conversation in a place where we didn't need to worry about how the meta conversation would affect the object level conversation.

comment by Wei Dai (Wei_Dai) · 2019-07-23T00:31:28.839Z · LW(p) · GW(p)

Giving advice to people is fine if it doesn’t derail the discussion and it’s optional to them whether they follow it (e.g. in an offline discussion after the original one). “Whether it’s a good idea to say X” isn’t a banned topic, the concern is that it gets brought up in a conversation where X is relevant (as if it’s an argument against X) in a way that derails the discussion.

Why shouldn't the "derailing" problem be solved some other way, aside from having a norm against bringing up "whether it's a good idea to say X" during a conversation where X is relevant (which seems to have clear costs, such as it sometimes being too late to bring that up afterwards because the damage is already done)? For example you could talk about "whether it's a good idea to say X" until that matter is settled, and then return to the original topic. Or have some boilerplate ready to the effect of "Given what I know, including the arguments you've brought up so far, the importance of truth-seeking on the topic for which X is relevant, and the risk of derailing that object-level conversation and not being able to return to it, I prefer to continue to say X and not discussing further at this time whether it's a good idea to do so." and use that when it seems appropriate to do so?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-23T07:03:24.336Z · LW(p) · GW(p)

For example you could talk about “whether it’s a good idea to say X” until that matter is settled, and then return to the original topic.

This is what is critiqued in the dialogue. It makes silencing way too easy. I want to make silencing hard.

The core point is that appeals to consequences aren't arguments, they're topic changes. It's fine to change topic if everyone consents. (So, bringing up "I think saying X is bad, we can talk about that or could continue this conversation" is acceptable)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-07-23T09:51:27.688Z · LW(p) · GW(p)

(So, bringing up “I think saying X is bad, we can talk about that or could continue this conversation” is acceptable)

My proposed alternative (which I may not have been clear enough about) is that someone could also bring up "I think saying X is bad, and here are my reasons for thinking that" and then you could either decide they're right, or switch to debating whether saying X is bad, or keep talking about the original topic (using some sort of boilerplate if you wish to explain why). Is this also acceptable to you and if not why?

(Assuming the answer is no) is it because you think onlookers will be irrationally convinced by bad arguments against saying X even if you answer them with a boilerplate, so you'd feel compelled to answer them in detail? If so, why not solve that problem by educating forum members (ahead of time) about possible biases they may have that could cause them to be irrationally convinced by such arguments, instead of having a norm against unilaterally bringing up reasons for not saying X?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-23T16:25:11.011Z · LW(p) · GW(p)

You're not interpreting me correctly if you think I'm saying bringing up possible consequences is banned. My claim is more about what the rules of the game should be such that degenerate strategies don't win. If, in a chess game, removing arbitrary pieces of your opponent is allowed (by the rules of the game), then the degenerate strategy "remove the opponent's king" wins. That doesn't mean that removing your opponent's king (e.g. to demonstrate a possibility or as a joke) is always wrong. But it's understood not to be a legal move. Similarly, allowing appeals to consequences to be accepted as arguments lets the degenerate strategy "control the conversation by insinuating that the other person is doing something morally wrong" win. Which doesn't mean you can't bring up consequences, it's just "not a valid move" in the original conversation. (This could be implemented in different ways; standard boilerplate is one way, but it's likely enough if nearly everyone understands why this is an invalid move)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-07-23T16:57:18.016Z · LW(p) · GW(p)

You’re not interpreting me correctly if you think I’m saying bringing up possible consequences is banned.

The language you used was "outlawing appeals to consequences", and a standard definition of "outlaw" is "to place under a ban or restriction", so consider changing your language to avoid this likely misinterpretation?

This could be implemented different ways; standard boilerplate is one way, but it’s likely enough if nearly everyone understands why this is an invalid move

What other ways do you have in mind? Among the ways you find acceptable, what is your preferred implementation? (It seems like if you had mentioned these in your post, that would also have made it much less likely for people to misinterpret "outlawing appeals to consequences" as "bringing up possible consequences is banned".)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-24T04:18:21.906Z · LW(p) · GW(p)

It's still outlawing in the sense of outlawing certain chess moves, and in the sense of law thinking [LW · GW].

Here's one case:

A: X.

B: That's a relevant point, but I think saying X is bad for Y reason, and would like to talk about that.

A: No, let's continue the other conversation / Ok, I don't think saying X is bad for Z reason / Let's first figure out why X is true before discussing whether saying X is bad

Here's another:

A: X.

B: That's bad to say, for Y reason.

A: That's an appeal to consequences. It's a topic change.

B: Okay, I retract that / Ok, I am not arguing against X but would like to change the topic to whether saying X is bad

There aren't fully formal rules for this (this website isn't formal debate). The point is the structural issue of what kind of "move in the game" it is to say that saying X is bad.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-07-24T09:30:53.715Z · LW(p) · GW(p)

It’s still outlawing in the sense of outlawing certain chess moves, and in the sense of law thinking.

Where in the post did you explain or give contextual clues for someone to infer that you meant "outlaw" in this sense? You used "outlaw" three times in that post, and it seems like every usage is consistent with the "outlaw = ban" interpretation. Don't you think that absent some kind of explanation or clue, "outlaw = ban" is a relatively natural interpretation compared to the more esoteric "in the sense of outlawing certain chess moves, and in the sense of law thinking"?

Aside from that, I'm afraid maybe I haven't bought into some of the background philosophical assumptions you're using, and "what kind of move in the game it is to say that X is bad" does not seem highly relevant/salient to me. I (re)read the "law thinking" post you linked but it doesn't seem to help much to bridge the inferential gap.

The way I'm thinking about it is that if someone says "saying X is bad for reasons Y", then I (as either the person saying X or as an onlooker) should try to figure out whether Y changes my estimate of whether cost-benefit favors continuing to say X, and the VOI of debating that, and proceed accordingly. (Probably not by doing an explicit calculation but rather just checking what my intuition says after considering Y.)

Why does it matter "what kind of move in the game" it is? (Obviously "it's bad to say X" isn't a logical argument against X being true. So what? If people are making the error of thinking that it is a logical argument against X being true, that seems really easy to fix. Yes it's an attempt to change the topic, but again so what? It seems that I should still try to figure out whether/how Y changes my cost-benefit estimates.)

comment by Benquo · 2019-07-21T19:43:42.494Z · LW(p) · GW(p)

I think Critch is basically correct here; it makes more sense to model distractions or stress due to internal conflict as accumulating in some contexts, rather than willpower as a single quantity being depleted.

comment by Jason Gross (jason-gross) · 2019-07-22T05:17:09.838Z · LW(p) · GW(p)

I dunno how to think about small instances of willpower depletion, but burnout is a very real thing in my experience and shows up prior to any sort of conceptualizing of it. (And pushing through it works, but then results in more extreme burnout after.)

Oh, wait, willpower depletion is a real thing in my experience: if I am sleep deprived, I have to hit the "get out of bed" button in my head harder/more times before I actually get out of bed. This is separate from feeling sleepy (it is true even when I have trouble falling back asleep). It might be mediated by distraction, but that seems like quibbling over words.

I think in general I tend to take outside view on willpower. I notice how I tend to accomplish things, and then try to adjust incentive gradients so that I naturally do more of the things I want. As was said in some CFAR unit, IIRC, if my process involves routinely using willpower to accomplish a particular thing, I've already lost.

comment by Raemon · 2019-07-01T23:34:14.280Z · LW(p) · GW(p)

I'm currently pretty torn between:

  • "Try to actually resolve the longstanding major disagreements about what sort of culture is good for LessWrong"
  • "Attempt to build real archipelago features [LW · GW] that let people self segregate into whatever discussions they want."
  • "Attempt to mostly bypass that discussion by just focusing on the Open Questions feature-set, with an emphasis on object-level questions."

The disagreements about "combat vs collaboration" and other related frames do seem to have real, important things to resolve. I think a lot of the debate can be broken into empirical questions that are (in theory) actually possible to resolve. But... unless people are actually in agreement about a meta-frame that would actually resolve it, mostly it seems like a massive, net negative time sink.

Archipelago hasn't worked, but, well, "Real Archipelago hasn't even been attempted". But I'm not sure it actually helps. There are a few key unresolved questions, like 'what are the default norms for users who haven't set moderation guidelines', which more or less necessitate the first option. There's also the issue wherein at least some ongoing debates have people who prefer different norms.

The latter can maybe be addressed by setting a stronger meta-norm of "if you think the discussion on Post X is important but has counterproductive norms, you can create your own post about it", possibly encouraging people more to create short posts that just say "this is my discussion for topic X, with norm Y". Something about that still feels unsatisfying.

Meanwhile, Open Questions that focus on object level problems mostly don't seem to generate demon threads [LW · GW]. They typically meet my own preferences for collaborativeness (since there's a relatively clear criterion for comments of 'is this helping to answer the question that the author asked?'), while usually avoiding most of the issues raised by (my understanding of) people who are annoyed by pressures toward collaborative-ness.

(i.e. my experience is that the open question framework creates an environment that is better suited towards blunt disagreement, at least about factual things, conditional on the questions being object level. And while there's still sometimes disagreement over what the best frame for answering a question is, that feels like a much simpler thing to patch)

Replies from: SaidAchmiz, Ruby, John_Maxwell_IV
comment by Said Achmiz (SaidAchmiz) · 2019-07-02T01:10:00.390Z · LW(p) · GW(p)

Archipelago hasn’t worked, but, well “Real Archipelago hasn’t even been attempted”. But I’m not sure it actually helps.

Er… has any ‘Archipelago’ been tried? When you say “Archipelago hasn’t worked”, you’re talking about… what?

Anyhow, as far as your three options go… some pros & cons:

Try to actually resolve the longstanding major disagreements …

Pro: If you succeed, then we march forward into the future in productive harmony! And you (probably) save yourself (and everyone else) a ton of heartache, going forward.

Con: If you fail, then you’ve wasted a ton of effort and accomplished at most nothing, and possibly even made everyone angrier at each other, etc.

Attempt to build real archipelago features …

Pro: Pretty hard to imagine a scenario where you totally waste your time, if you do this (unless you’re, like, such a bad programmer/designer/whatever that you try to build some features but you just fail somehow). In the worst case, you have new features that are useful for something or someone, even if they don’t solve the problem(s) they were meant to solve. And in the best case, you solve all the problems!

Con: Actually maybe the worst case is instead much worse: the new features have an effect but it’s in the opposite direction from what you intended, or there are some horrible consequences you didn’t foresee, etc.

Attempt to mostly bypass that discussion by just focusing on the Open Questions feature-set …

Pro: Similar to above, but best case is not as great (though still good) and worst case is almost certainly not nearly as bad—a lower-variance approach, but still it seems like at worst you’ve got some new features that are useful.

Con: Probably doesn’t do much to solve any of the serious problems. If, once you’ve done this, all the same problems remain, and meanwhile the community has been hemorrhaging participants… haven’t you wasted time that might’ve been better spent solving the aforesaid serious problems?

Replies from: Raemon, Raemon
comment by Raemon · 2019-07-18T18:01:44.737Z · LW(p) · GW(p)

Something I haven't actually been clear on re: your opinions:

If LW ended up leaning hard into Archipelago, and if we did something like "posts can be set to either 'debate' mode or 'collaborative' mode, or have epistemic statuses indicating things like 'this post is early stage brainstorming' vs 'this post is ready to be seriously critiqued'",

Does that actually sound good to you?

My model of you was worried that that sort of thing could well result in horrible consequences (via giving bad ideas the ability to gain traction).

(I suppose you might believe that, but still think it's superior to the status quo of 'sorta kinda that but much more confusingly')

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-18T19:45:09.201Z · LW(p) · GW(p)

Having good and correct norms on Less Wrong > having some sort of Archipelago, and thereby having good and correct norms on some parts of Less Wrong > having bad and wrong norms everywhere on Less Wrong

We did discuss this a while ago, actually, though I’m afraid I haven’t the time right now to look for the comment thread in question. Simply: if you can set posts to “collaborative mode”, and there’s nothing wrong with that (norm-wise), well, everyone sets their posts to “collaborative mode” all the time (because defending their ideas is hard and annoying), the end. (Unless you also have strong norms along the lines of “using or even mentioning ideas which have thus far been discussed only in ‘collaborative mode’ posts, in other discussions, as if they have been properly defended and are anything but baseless speculation, is a faux pas; conversely, calling out such usage is right and proper and praiseworthy and deserving of upvotes”. But such a norm, which would be very useful and beneficial, nonetheless seems to me to be unlikely to end up as part of the Archipelago you envision. Or am I mistaken, do you think?)

Replies from: Raemon, pktechgirl
comment by Raemon · 2019-07-18T19:59:15.453Z · LW(p) · GW(p)

Nod. I do think the failure mode you're pointing at is an important thing for the system to address.

comment by Elizabeth (pktechgirl) · 2019-07-18T22:09:33.569Z · LW(p) · GW(p)
Having good and correct norms on Less Wrong

This seems to assume there is one correct set of norms for all conversations. That would be really surprising to me. Do you think there's one set that is Always Correct, or that the switching costs outweigh the gains from tailored norms?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-18T22:42:34.453Z · LW(p) · GW(p)

All conversations? Certainly not. All conversations on Less Wrong? To a first approximation[1], yes.


  1. How much work we take this qualifier to be doing is, of course, a likely point of disagreement, but if you see it as doing most of the work in my comment, then assume that you’ve misunderstood me. ↩︎

Replies from: Raemon
comment by Raemon · 2019-07-18T23:25:47.171Z · LW(p) · GW(p)

I think a core disagreement here has less to do with collaborative vs debate. Ideas can, and should, be subjected to extreme criticism within a collaborative frame.

My disagreement with your claim is more about how intellectual progress works. I strongly believe you need several stages, with distinct norms. [Note: I'm not sure these stages listed are exactly right, but think they point roughly in the right direction]

1. Early brainstorming, shower thoughts, and play.

2. Refining brainstormed ideas into something coherent enough to be evaluated

3. Evaluating, and iterating on, those ideas. [It's around this stage that I think comments like the ones I archetypically associate with you become useful]

4. If an idea seems promising enough to rigorously check (i.e. something like 'do real science, spending thousands or millions of dollars to run experiments'), figure out how to do that. Which is complicated enough that it's its own step, separate from....

5. Do real science (note: this section is a bit different for things like math and philosophy)

6. If the experiments disconfirm the idea (or, if an earlier stage truncated the idea before you got to the "real science" part), make sure to say "oops", and make it common knowledge that the idea is wrong.

I think the first two stages are extremely important (and bad things happen when you punish doing it publicly). The last stage is also extremely important. Right now, even at its most rigorous, the pipeline of ideas at LessWrong seems to stop around the 3rd stage.

I don't expect you to agree with all of that right now, but I am curious: how much would your concerns be addressed if we had clearer/better systems for the final step?

Replies from: Pattern
comment by Pattern · 2019-07-23T03:05:31.083Z · LW(p) · GW(p)

4 and 5 seem hard. Consider the "Archipelago" idea. Also, this model assumes the idea is easily disproved/proved, and isn't worth iterating on further.

(Rough) Contrasting model:

1) I want to make a [lightbulb] (before lightbulbs have been invented).

2) Come up with a design.

3) Test the design.

4) If it fails, go back to step 2, and start over, or refine the design, and go to step 3.

Repeat 100 times, or until you succeed.

5) If it works, come up with a snazzy name, and start a business.

comment by Raemon · 2019-07-02T04:59:52.131Z · LW(p) · GW(p)
Er… has any ‘Archipelago’ been tried? When you say “Archipelago hasn’t worked”, you’re talking about… what?

We *did* spend several months working on the Ban user and users-setting-moderation-norms features, and write up a lengthy post discussing how we hoped they would be used, and a couple people very briefly tried using them. So... "any" Archipelago has been tried.

But certainly it was not tried in a way where the features were clear enough that I'd have expected people to have "really" tried it.

The rest of the pros-and-cons seem relevant, although I'm currently actually more optimistic about Open Questions than Archipelago (partly for unrelated reasons that have to do with why I think Open Questions was high value in the first place.)

Replies from: John_Maxwell_IV, SaidAchmiz
comment by John_Maxwell (John_Maxwell_IV) · 2019-07-04T22:56:54.310Z · LW(p) · GW(p)

I wonder if Archipelago is one of those features that is best tested in the context of a larger userbase [LW · GW]. Right now there is barely one "island" worth of users on LW. Maybe users just aren't numerous enough for people to expect bad experiences in the comments of their posts which would cause them to use advanced moderation features. It's not necessarily a bad thing that you guys have built advanced moderation features before they were actually needed. But I suspect the current userbase is not big enough to stress test them.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-07-04T23:07:50.418Z · LW(p) · GW(p)

We've seen 42 posts in the last 7 days, and on average the community makes ~500 comments per week. Just want to clarify the current size of the LW userbase.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-07-05T00:03:01.794Z · LW(p) · GW(p)

Thanks for the data! Any thoughts on this Wei Dai comment [LW(p) · GW(p)]?

Replies from: Benito
comment by Ben Pace (Benito) · 2019-07-05T00:09:24.349Z · LW(p) · GW(p)

Actually yes. For reasons of time, I won't write stuff now, but look out for a post in Meta probably Monday/Tuesday, with some thoughts on moving in that direction (and agreeing more with your take here [LW · GW] than I did at the time).

I only mention the data because I substantially under-predicted it before Ruby told me what the true numbers were.

Edit: Sorry! Turns out that I won't be writing this post.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-07-17T14:13:03.641Z · LW(p) · GW(p)

Edit: Sorry! Turns out that I won't be writing this post.

What happened?

Replies from: Raemon
comment by Raemon · 2019-07-27T18:34:08.793Z · LW(p) · GW(p)

The team decided to hold off on publishing some thoughts for a while, sorry about that.

comment by Said Achmiz (SaidAchmiz) · 2019-07-02T06:38:34.472Z · LW(p) · GW(p)

Hmm, indeed. I suppose that does qualify as a form of Archipelago, if looked at in the right way. Those features, and that perspective, didn’t occur to me when I wrote the grandparent, but yes, fair point.

I think we agree w.r.t. “tried, sort of, but not ‘really’”.

Replies from: Raemon
comment by Raemon · 2019-07-02T17:17:08.592Z · LW(p) · GW(p)

To be clear, though – all the features that are necessary for you to set your own preferred norms on your own posts already exist. You can start writing posts and hosting discussions set in whatever frame you want.

The actions available are:

– set your default moderation guidelines in your user profile

– set post-specific moderation guidelines in a given post

– if a user has commented in a way that violates your guidelines, and doesn't stop after you remind them of them, you can click on a comment's menu item to delete said comment or ban said user.

So if you do prefer a given style of discourse, you can set that for your own posts, and if you wanted to discuss someone else's post in a different style of discourse than they prefer, I think it'd be good to create your own thread for doing so.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-07-03T07:18:06.411Z · LW(p) · GW(p)

Note: These features do not seem to exist on GW. (Not that I miss them since I don't feel a need to use them myself.)

Questions: Is anyone using these features at all? Oh I see you said earlier "a couple people very briefly tried using them". Do you know why they stopped? Do you think you overestimated how many people would use it, in a way that could have been corrected (for example by surveying potential users or paying more attention to skeptical voices)? (To be fair, upon reviewing the comments on your Archipelago posts, there weren't that many skeptical voices, although I did upvote this one [LW(p) · GW(p)].) Given that you spent several months on Archipelago, it seems useful to do a quick postmortem on lessons learned?

Replies from: Raemon, Raemon, Raemon
comment by Raemon · 2019-07-03T07:33:47.223Z · LW(p) · GW(p)

Each of the features has been used a bit, even recently. (I think there are 3-7 people who've set some kind of intentional moderation style and/or guideline, and at least one person who's banned a user from their posts recently).

I think the moderation guidelines help to set expectations and the small bit of counterfactual threat of banning helps lend them a bit of force.

The features were also a pre-requisite for Eliezer posting and/or allowing admins to do crossposts on his behalf (I doubt we would have prioritized them as hard without that, although I'd been developing the archipelago-concept-as-applied-to-lesswrong before then)

So I don't consider the features a failure, so much as "they didn't have this outsized, qualitatively different benefit" that I was hoping for.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-03T11:30:13.073Z · LW(p) · GW(p)

The features were also a pre-requisite for Eliezer posting and/or allowing admins to do crossposts on his behalf (I doubt we would have prioritized them as hard without that, although I’d been developing the archipelago-concept-as-applied-to-lesswrong before then)

Yet Eliezer still isn’t participating on Less Wrong… is there some reason for that? Were the implemented features insufficient? Is there still something left to do?

Replies from: Raemon, Richard_Kennaway
comment by Raemon · 2019-07-03T18:16:52.406Z · LW(p) · GW(p)

The moderation tools were a prerequisite even for the degree of Eliezer participation you currently see (where periodically Robby crossposts things on his behalf), which I still consider quite worth it.

As Richard notes, Eliezer isn't really participating in online discussion these days and that looks unlikely to change.

comment by Richard_Kennaway · 2019-07-03T13:30:18.873Z · LW(p) · GW(p)

Does Eliezer post anywhere public these days? His postings to Facebook are infrequent, and I don't know of him posting anywhere else.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-03T15:27:27.969Z · LW(p) · GW(p)

That makes it even worse, if true! If he doesn’t post anywhere, then he wasn’t ever going to post here, so what in the world was the point of all these changes and features and all that stuff that was allegedly “so that Eliezer would post here”?!

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-07-04T22:23:44.028Z · LW(p) · GW(p)

He seems to post on Twitter pretty frequently...

¯\_(ツ)_/¯

comment by Raemon · 2019-07-03T07:47:16.309Z · LW(p) · GW(p)

Re: GW – obviously the GW team has limited time, but there shouldn't be anything stopping them from implementing these features. And in the meantime, if you hop over to lesswrong.com to use a feature (such as deleting a comment or banning a user) it should have the desired effect over on greaterwrong.

I do expect, as the LW team tries more and more experimental things that are designed to radically change the shape of the site, that the GW experience will start to feel a bit confusing, depending on how much time the GW team has to implement things.

[note to GW team: I know at least part of the problem is that the LW team hasn't been that proactive about communicating our plans. My current impression is that you're sufficiently bottlenecked on dev-time that doing so wouldn't really help, but if you thought otherwise I could maybe arrange for that]

One recent example is Related Questions, which I expect to be a major component of how the questions feature (and the site overall) ends up working. The greaterwrong version of this question [LW · GW] doesn't show its parent question, either at the top of the page or in a list further down, which changes the context of the question quite a bit. (See the lesswrong version.)

(Related questions overall are still in a "soft beta" where we're still tweaking them a bunch and aren't confident that they're usable enough to really advertise, but I expect that to change within a couple weeks)

Replies from: SaidAchmiz, clone of saturn
comment by Said Achmiz (SaidAchmiz) · 2019-07-03T11:28:58.131Z · LW(p) · GW(p)

It is true that we’re bottlenecked on developer time, yes. We wouldn’t say no to more communication of the LW team’s plans, of course, but that is indeed not a major problem at this time, as far as I can tell.

One thing that would be quite useful would be a maintained centralized list of LW features (preferably in order of when they were added, and with links to documentation… a Blizzard-style list of “patch notes”, in other words, aggregated into a change history, and kept somewhere central and easy to find).

If, perhaps, this were a post that were to be updated as new features rolled in, we could use it as a way to track GW vs. LW feature parity (via comments and updating of the post itself), and as a publicly visible roadmap for same.

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-03T19:47:01.322Z · LW(p) · GW(p)

I think the recently published FAQ [LW · GW] has almost all of our features, though not in an easily skimmable or accessible format. But definitely better than what we had before it.

Agree having a proper list would be good.

comment by clone of saturn · 2019-07-08T06:56:30.226Z · LW(p) · GW(p)

Knowing your plans could definitely make a difference--I do want to prioritize fixing any problems that make GW confusing to use, as well as adding features that someone has directly asked for. As such, I just implemented the related questions feature.

Replies from: Raemon
comment by Raemon · 2019-07-18T17:56:43.735Z · LW(p) · GW(p)

Thanks! (missed this the first time around)

I think another major issue is going to be custom commenting-guidelines, which GreaterWrong doesn't have AFAICT.

Right now, custom commenting guidelines aren't actually all that clear on LW, and I don't think people rely on them much. But we've been talking about making guidelines and moderation-policies appear next to commenting boxes as soon as you start typing, or otherwise making it more visually distinct what the norms of a given discussion section is.

If we ended up leaning harder into the archipelago model, this would become particularly important.

comment by Raemon · 2019-07-03T07:25:45.994Z · LW(p) · GW(p)

Yup. This post [LW · GW] is essentially the result of that post-mortem.

comment by Ruby · 2019-07-02T03:28:31.613Z · LW(p) · GW(p)

Quick comment to say that I think there are some separate disagreements that I don't want to get collapsed together. I think there's 1) "politeness/there are constraints on how you speak" vs "no or minimal constraints on how you speak", and 2) Combat vs Nurture [LW · GW] / Adversarial vs Collaborative. I think the two are correlated but importantly distinct dimensions. I really don't want Combat culture, as I introduced the term, to get rounded off to "no or minimal constraints on how you can speak".

Replies from: Raemon
comment by Raemon · 2019-07-02T04:53:01.169Z · LW(p) · GW(p)

Yeah, to be clear I think there's like 6 major disagreements (not all between the same people), and it's not that easy to summarize them.

comment by John_Maxwell (John_Maxwell_IV) · 2019-07-04T23:13:59.461Z · LW(p) · GW(p)

But... unless people are actually in agreement about a meta-frame that would actually resolve it, mostly it seems like a massive, net negative time sink.

Why does it need to be a time sink for you? You could pair off people who disagree with one another and say: "If you two are able to think up an experiment such that you both agree that experiment would allow us to discover who is right about the kind of culture that's good for LessWrong, we will consider performing that experiment." You could even make them settle on a procedure for judging the results of the experiment. Or threaten to ignore their views entirely if they can't come to any kind of agreement.

Archipelago hasn't worked, but, well "Real Archipelago hasn't even been attempted". But I'm not sure it actually helps. There's a few key unresolved questions like 'what are the default norms for users that haven't set moderation guidelines' which more or less necessitate solving the first option. There's also the issue wherein at least some ongoing debates have people who prefer different norms.

I think you're overthinking this. Why not randomize the default norms for each new user and observe which norms users tend to converge on over time?

The latter can maybe be addressed by setting a stronger meta-norm of "if you think the discussion on Post X is important but has counterproductive norms, you can create you own post about it", possibly encouraging people more to create short posts that just say "this is my discussion for topic X, with norm Y". Something about that still feels unsatisfying.

Yes, the solution you describe is unsatisfying, but I wonder if the empirical data you gather from it will get you to a perfect solution more effectively than armchair philosophizing.

Replies from: Raemon
comment by Raemon · 2019-07-04T23:28:52.145Z · LW(p) · GW(p)

I mean, among other things, *I'm* one of the people who's disagreeing with someone(s), and a major issue is disagreement or confusion about what are even the right frames to be evaluating things through.

Why not randomize the default norms for each new user and observe which norms users tend to converge on over time?

I don't currently expect that to really do anything. Most of the users doing any kind of deliberate norm setting are longtime users who are mostly bringing their own expectations of what they thought the norms already were, rather than reading the text we wrote in the moderation guidelines.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-07-05T00:05:41.833Z · LW(p) · GW(p)

Hm. More ideas which probably won't help:

  • Find a person or people you both respect with relevant expertise. Do a formal debate where you both present your case. Choose a timed debate format so things can't take forever. At the end, agree to abide by the judgement of the debate audience (majority vote if necessary).

  • Figure out whose vision for LessWrong is least like Facebook and implement that vision. The person whose vision is more similar to Facebook can just stay on Facebook.

comment by Raemon · 2018-07-16T03:44:13.009Z · LW(p) · GW(p)

I notice that I'm increasingly confused that Against Malaria Foundation isn't just completely funded.

It made sense a few years ago. By now, things like the Gates Foundation seem like they should be aware of it, and it seems like it should do well on their metrics.

It makes (reasonable-ish) sense for Good Ventures not to fully fund it themselves. It makes sense for EA folk to either not have enough money to fully fund it, or to end up valuing things more complicated than AMF. But it seems like there should be enough rich people and governments for whom "end malaria" is a priority that the $100 million or so it would take should just be covered by now.

What's up with that?

Replies from: VipulNaik, Vaniver, Raemon
comment by VipulNaik · 2018-07-16T05:11:18.628Z · LW(p) · GW(p)

My understanding is that Against Malaria Foundation is a relatively small player in the space of ending malaria, and it's not clear the funders who wish to make a significant dent in malaria would choose to donate to AMF.

One of the reasons GiveWell chose AMF is that there's a clear marginal value of small donation amounts in AMF's operational model -- with a few extra million dollars they can finance bednet distribution in another region. It's not necessarily that AMF itself is the most effective charity to donate to to end malaria -- it's just the one with the best proven cost-effectiveness for donors at the scale of a few million dollars. But it isn't necessarily the best opportunity for somebody with much larger amounts of money who wants to end malaria.

For comparison:

The main difference I can make out between the EA/GiveWell-sphere and the general global health community is that malaria interventions (specifically ITNs) get much more importance in the EA/GiveWell-sphere, whereas in the general global health spending space, AIDS gets more importance. I've written about this before: http://effective-altruism.com/ea/1f9/the_aidsmalaria_puzzle_bleg/

Replies from: VipulNaik, Raemon
comment by VipulNaik · 2018-07-29T20:13:53.440Z · LW(p) · GW(p)

There is some related stuff by Carl Shulman here: https://www.greaterwrong.com/posts/QSHwKqyY4GAXKi9tX/a-personal-history-of-involvement-with-effective-altruism#comment-h9YpvcjaLxpr4hd22 that largely agrees with what I said.

comment by Raemon · 2018-07-16T05:59:18.718Z · LW(p) · GW(p)

If the Gates Foundation is actually funding-constrained, I guess that explains most of my confusion, although it still seems a bit weird not to "top it off" since it seems within spitting distance.

comment by Vaniver · 2018-07-16T17:36:36.734Z · LW(p) · GW(p)

Check out Gates's April 2018 speech on the subject. Main takeaway: bednets started becoming less effective in 2016, and they're looking at different solutions, including gene drives to wipe out mosquitoes, which is a solution unlikely to require as much maintenance as bed nets.

comment by Raemon · 2018-07-16T03:46:44.465Z · LW(p) · GW(p)

Like, I'm actually quite worried that we haven't hit the point where EA folk are weirdly bottlenecked on not having an obviously defensible charity to donate to as a gateway drug.

comment by Raemon · 2018-05-12T03:01:38.215Z · LW(p) · GW(p)

[cn: spiders I guess?]

I just built some widgets for the admins on LW, so that posts by newbies and reported comments automatically show up in a sidebar where moderators have to pay attention to them, approving or deleting them or sometimes taking more complicated actions.

And... woahman, it's like shining a flashlight into a cave that you knew was going to be kinda gross, but you weren't really prepared for a million spiders to suddenly be illuminated. The underbelly of LW, posts and comments you don't even see anymore because we installed karma filters on the frontpage.

There's a webcomic called Goblins, where one goblin decides to become a paladin, and gains the ability to Detect Evil. Suddenly he's confronted with all the evil lurking about: in the shadows of people's hearts, in literal shadows, and sometimes in broad daylight. He describes this to a fellow goblin, who's like "Holy hell, how can you live like that!? Why would you choose to _force_ yourself to see the evil around you?"

And the paladin nods gravely and says "so that you don't have to."

Replies from: Elo
comment by Elo · 2018-05-12T04:06:13.324Z · LW(p) · GW(p)

You realise that I read every comment in the RSS feed, right?

comment by Raemon · 2024-11-30T00:32:37.861Z · LW(p) · GW(p)

For this year's LessWrong Review, we're building UI to make it much easier to import linkposts from other blogs and backdate them so they're eligible for the Review, since a lot of important rationalsphere or AI Safety content lives in other places.

It's actually pretty easy to automatically import all the text from a url in most cases (we're looking into auto-importing PDFs of papers, which I suspect is doable but haven't checked), and in many cases I think this would basically be preferred, but it's also kinda exploitable in ways I'm not sure I'd endorse. (i.e. some authors are probably happy to have people crosspost stuff while nominating it for Best of LessWrong, while other authors might feel violated)

Three options are:

  • only auto-import the first few paragraphs, ending with a load more
  • have an LLM extract some important highlights; see the sketch just below this list. (I'm ignoring "have an LLM summarize it" because they suck at that, but I think they're decent at identifying key paragraphs)
  • start off by auto-importing the whole post, and then wait until anybody complains.
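
For concreteness, here's a rough sketch of what the "LLM extracts key paragraphs" option might look like under the hood. This is purely illustrative, not the actual implementation; the model name and prompt are placeholders, and it assumes the OpenAI Python SDK with an API key in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_highlights(post_text: str, n_paragraphs: int = 3) -> str:
    """Ask the model to quote key paragraphs verbatim rather than summarize,
    since summaries tend to lose the author's voice."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You select key excerpts from essays. Quote paragraphs "
                        "verbatim; never paraphrase or summarize."},
            {"role": "user",
             "content": f"Quote the {n_paragraphs} most important paragraphs, "
                        f"verbatim, from this post:\n\n{post_text}"},
        ],
    )
    return response.choices[0].message.content
```

The imported post would then show those excerpts plus a link to the original, and the nominator (or an admin) could edit or replace them.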

I'd probably be limiting this to users who are otherwise eligible to nominate (i.e. their account is at least two years old, and maybe they have like 100 karma), so randos can't go crazy with it. Admins will be seeing all posts imported this way so we can be sanity checking things.

Curious what people think.

Replies from: Screwtape
comment by Screwtape · 2024-11-30T00:44:30.135Z · LW(p) · GW(p)

Tentative support for only auto-importing the first few paragraphs, if not that then start by auto-importing the whole post and waiting until anybody complains. My guess (~65%?) is that somebody will. Against having an LLM extract some important highlights- if doing highlights is the way to go I think whoever nominated the piece for the review can find the highlights?

I'd love it if I could use LessWrong as a central place to read rationalsphere content, and since more and more rationalist sphere writers are writing elsewhere this seems like it's worth trying.

Replies from: Raemon, ete
comment by Raemon · 2024-11-30T18:34:37.173Z · LW(p) · GW(p)

Against having an LLM extract some important highlights- if doing highlights is the way to go I think whoever nominated the piece for the review can find the highlights?

This changes it from a 10-second operation to a several-minute operation, which makes it prohibitively expensive to do for a lot of posts.

Curious to hear more about what feels off about LLM extraction. I do think this is something they're actually pretty good at (and you can always edit it afterwards).

Replies from: Screwtape
comment by Screwtape · 2024-11-30T22:01:04.388Z · LW(p) · GW(p)

I imagine two people are talking and one says "oh, I think you should read this essay, here's the link!" and the second asks "oh, what's it about? Any good quotes?"

If the first doesn't have an answer to that, then it feels like a weird recommendation? I guess that's the second stage, where people review them.

Replies from: Raemon
comment by Raemon · 2024-11-30T22:02:20.468Z · LW(p) · GW(p)

Yeah. It needs a review to pass to the third stage so this should have come up by then. The first stage is "are there a number of people who are like 'oh yeah that post, that was important' and upvote it?"

comment by plex (ete) · 2024-11-30T01:25:03.960Z · LW(p) · GW(p)

I lean towards an opt-out system for whole post imports? I'd expect the vast majority of relevant authors to be happy with it, and it would offer less inconvenience to readers. Letting an author easily register as "no whole text imports please" seems worthwhile, and maybe if people aren't happy with that switching to opt-in?

Replies from: Raemon
comment by Raemon · 2024-11-30T18:33:13.410Z · LW(p) · GW(p)

Well, a lot of the things imported may be from people who don't think of themselves as centrally LW members, or who wouldn't notice.

(medium-difficulty case: Robin Hanson. Harder-difficulty-case: some academic who wrote something relevant to x-risk but isn't actually very involved in our ecosystem)

Replies from: ete
comment by plex (ete) · 2024-11-30T18:38:42.025Z · LW(p) · GW(p)

Cool, in that case probably opt-in to full-post makes more sense, maybe with the ability to switch modes for all posts by an author if they give permission?

Replies from: Raemon
comment by Raemon · 2024-11-30T18:41:32.075Z · LW(p) · GW(p)

I think basically nobody is going to really opt-in-or-out, so I think the question is "what actually is a reasonable default?"

Replies from: ete
comment by plex (ete) · 2024-11-30T19:59:39.281Z · LW(p) · GW(p)

If it's easy for submitters to check a box which says "I asked them and they said full post imports are fine", maybe?

No strong takes on default, just obvious considerations you'll have thought of.

Replies from: Raemon
comment by Raemon · 2024-11-30T20:28:59.670Z · LW(p) · GW(p)

Mmm, I kinda like that.

comment by Raemon · 2024-08-16T23:31:35.042Z · LW(p) · GW(p)

Have you used the LessWrong Concepts page, or generally used our tagging/wiki features? I'm curious to hear about your experience.

I'm particularly interested in people who read content from them, rather than people who contribute content to them. How do you use them? Do you wish you could get value from them better?

Replies from: Viliam, mateusz-baginski, zac-hatfield-dodds, papetoast
comment by Viliam · 2024-08-18T12:31:47.808Z · LW(p) · GW(p)

When I try to reference a concept, I often find it better to link the tag page than the original article from the Sequences, because the article in the Sequences often assumes that you have recently read the previous article, or sometimes only 1/2 or 1/3 of the article is about the idea and the rest is about something else.

In some sense, this is a difference between writing a tutorial and writing a reference book. The Sequences are a tutorial; they are supposed to be read in order. The tag pages are the reference book; they can be read individually, they are continuously updated, and they still contain the links to the most important articles, so it's okay to link them even if you think the articles are more valuable.

comment by Mateusz Bagiński (mateusz-baginski) · 2024-08-17T07:42:48.993Z · LW(p) · GW(p)

Sometimes I look up a tag/concept to ensure that I'm not spouting nonsense about it.

But most often I use them to find the posts related to a topic I'm interested in.

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-08-17T00:27:26.180Z · LW(p) · GW(p)

tags: used them semi-regularly to find related posts when I want to refer to previous discussions of a topic. They work well for that, and I've occasionally added tags when the post I was looking for wasn't tagged yet.

Replies from: Raemon
comment by Raemon · 2024-08-17T00:29:19.999Z · LW(p) · GW(p)

Neat (that's indeed, like, their intended use case). Do you feel like you personally end up learning stuff from seeing that previous discussion, or is it more like "hey guys, here's some previous discussion, if you want some context?"

Replies from: zac-hatfield-dodds
comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-08-17T06:17:32.534Z · LW(p) · GW(p)

Hmm, usually when I go looking it's because I remember reading a particular post, but there's always some chance of getting tab-sniped into reading just a few more pages...

comment by papetoast · 2024-08-18T08:15:37.432Z · LW(p) · GW(p)

How do you use them?

I use it when I am interested in learning about a specific topic. I rarely use the Concepts page, because it contains too many tags, and sometimes I don't even know what tag I am looking for. Instead, I usually already have one or two articles that I have previously read which feel similar to the topic I am thinking about. I would then search for those posts, look at the tags, and click on the one that is relevant. On the tag page, I start by reading the wiki, but often feel disappointed by the half-done/incomplete state of the wiki. Then I filter by high karma and read the articles from top to bottom, skipping ones that feel irrelevant or uninteresting based on the title.

Do you wish you could get value from them better?

I wish the default "most relevant" ordering were not based on the raw score, but rather on a normalized relevance score or something more complicated, because right now it means nothing other than "this post is popular so a lot of people voted on the tags". This default is really bad; every new user has to independently realize that they should change the sorting. LW also does not remember the sorting, so I have to change it manually every time, which is irritating but not a big deal.
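
(One way to read "normalized relevance", purely as a sketch of the idea and not how LW actually computes anything: divide the tag's relevance votes by some measure of how much overall attention the post gets, so merely-popular posts don't dominate every tag page.)

```python
def tag_relevance(tag_votes: float, post_total_votes: float) -> float:
    """Sketch: how strongly voters endorsed this tag, relative to how many
    people voted on the post at all (names and formula are illustrative)."""
    return tag_votes / max(post_total_votes, 1.0)

# A hugely popular post with a weak tag endorsement...
print(tag_relevance(tag_votes=5, post_total_votes=200))   # 0.025
# ...ranks below a niche post whose voters strongly tied it to the tag.
print(tag_relevance(tag_votes=4, post_total_votes=10))    # 0.4
```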

Replies from: Raemon
comment by Raemon · 2024-08-18T16:58:26.640Z · LW(p) · GW(p)

I rarely use the Concepts page, because it contains too many tags, and sometimes I don't even know what tag I am looking for.
 

Do you feel like you have a missing use case that the concepts page should be helpful with?

Replies from: papetoast
comment by papetoast · 2024-08-19T07:22:00.031Z · LW(p) · GW(p)

To answer your question directly - not really.

I think index pages are just meant to be used by only a small minority of people in any community. In my mind, the LW concepts page is like the wiki topic groups (not sure what they're called).

The similarities are:

  1. It is fun to go through the concepts page and find tags I haven't learned about; this is good for exploration, but it's a rare use case (for me)
  2. Because it is an index, it is useful when you have a concept in your mind but couldn't remember the name

But the concepts page has a worse UX than a wiki, since you have to explicitly search for it rather than it popping up on the relevant tag pages, and also the tags show up in a cluster.

comment by Raemon · 2023-08-21T16:57:55.418Z · LW(p) · GW(p)

One concrete skill I gained from my 2 weeks of Thinking Physics problems [LW · GW] was:

  1. Notice something feels intractably hard
  2. Ask "okay, why is this intractably hard?". There might be multiple reasons.
    1. Do those reasons seem intractably hard to fix? If so, recurse and ask "why" again.
    2. Does one of them not seem intractably hard? Then make a plan for fixing it, and if that plan seems cost-effective, do the plan.
  3. Your intractably hard problem is now solved!

This doesn't seem very novel ("break a problem down into simpler problems" is a pretty canonical tool). But I felt like I got a more visceral understanding of the skill, and how to notice its relevance.

comment by Raemon · 2021-04-03T19:52:46.864Z · LW(p) · GW(p)

Theory that Jimrandomh was talking about the other day, which I'm curious about:

Before social media, if you were a nerd on the internet, the way to get interaction and status was via message boards / forums. You'd post a thing, and get responses from other people who were filtered for being somewhat smart and confident enough to respond with a text comment.

Nowadays, most people post things on social media and then get much more quickly rewarded via reacts, based on a) a process that is more emotional than routed-through-verbal-centers, and b) rewards coming from a wider swath of the population. Which means, in practice, you're getting your incentive gradient from less thoughtful people, both due to the medium and due to regression to the mean.

This feeds a bit into my model of "Do we want reacts on LessWrong [LW · GW]?", and when/why reacts might be bad for society.

I'd previously talked about how it would be neat if LW reacts specifically gave people affordance to think subtler epistemically-useful thoughts. 

This new model adds a thing like "Maybe we actually just want reacts to be available to people with 1000+ karma or so. So, they increase the signal ratio from people who have demonstrated at least some reasonable threshold of thoughtfulness." (This has the obvious downside of increasing groupthink, which I do take seriously, but there's an unfortunate tradeoff between "increasing groupthink" and "getting your signal from random society which is pretty bad", and I'd currently lean towards the former if I had to pick one. I do eventually want to get a filtering system that selects more directly on "thoughtfulness", more reliably than the karma system does)

Replies from: Viliam, Dagon
comment by Viliam · 2021-04-03T23:37:13.588Z · LW(p) · GW(p)

There is a trade-off: would you prefer higher-quality feedback with a greater chance of no feedback at all, or a greater probability of feedback which will most likely be lower-quality?

Maybe this is a problem with social media: sometimes we get a lot of feedback, and sometimes we get high-quality feedback, and it kinda makes us expect that it should be possible to get lots of high-quality feedback constantly. But that is not possible, so people are dissatisfied.

comment by Dagon · 2021-04-03T23:25:41.148Z · LW(p) · GW(p)

I don't participate in a very wide swath of social media, so this may vary beyond FB and the like.  But from what I can tell, reacts do exactly the opposite of what you say - they're pure mood affiliation, with far less incentive or opportunity for subtlety or epistemically-useful feedback than comments have.

The LW reacts you've discussed in the past (not like/laugh/cry/etc, but updated/good-data/clear-modeling or whatnot) probably DO give some opportunity, but can never be as subtle or clear as a comment.  I wonder if something like Slack's custom-reacts (any user can upload an icon and label it for use as a react) would be a good way to get both precision and ease.  Or perhaps just a flag for "meta-comment", which lets people write arbitrary text that's a comment on the impact or style or whatnot, leaving non-flagged comments as object-level comments about the topic of the post or parent.

Replies from: Raemon
comment by Raemon · 2021-04-04T00:01:56.350Z · LW(p) · GW(p)

This isn't intended at all to replace comments. The idea here is giving people an affordance to make lower-effort 'pseudo comments' that are somewhere in between an upvote / downvote and a comment, so that people who find it too effortful to write a comment can express some feedback.

Hypothesis is that this gets you more total feedback.

Replies from: Dagon
comment by Dagon · 2021-04-04T01:22:42.661Z · LW(p) · GW(p)

I was mostly reacting to "I'd previously talked about how it would be neat if LW reacts specifically gave people affordance to think subtler epistemically-useful thoughts. ", and failed my own first rule of evaluation: "compared to what?".

As something with more variations than karma/votes, and less distracting/lower hurdle than comments, I can see reacts as filling a niche.  I'd kind of lean toward more like tagging and less like 5-10 variations on a vote.  

comment by Raemon · 2021-03-31T03:16:57.912Z · LW(p) · GW(p)

The latest magic set has… possibly the subtlest, weirdest take on the Magic color wheel so far. The 5 factions are each a different college within a magical university, each an enemy-color-pair.

The most obvious reference here is Harry Potter. And in Harry Potter, the houses map (relatively) neatly to various magic colors, or color pairs.

Slytherin is basically canonical MTG Black. Gryffindor is basically Red. Ravenclaw is basically Blue. Hufflepuff is sort of Green/White. There are differences between Hogwarts houses and Magic colors, but they are aspiring to very similar archetypes. And that's what I was initially expecting out of "Wizards of the Coast makes a magic set inspired by Magical School YA Fiction."

But, each of the factions in Strixhaven is *quite weird*, at least by MTG standards. At first I was very confused. Now that I've had more time to think about it I am pretty impressed.

Each faction is basically a department focused on particular clusters of classes and interests. They are each defined by *a major philosophical argument that divides the field*, where people argue what the point of the field is and what paradigm it should be operating under.

One of the more straightforward ones is, say, the college of the arts, which is Blue/Red. Everyone involved agrees you're supposed to make good, skillful art. But, they argue over whether the point of art is to make you *think*, and philosophically engage with things, or to make you *feel*, and convey raw emotion.

A weirder one is the humanities department – history, psychology, anthropology. The humanities in this school are White/Red, and their defining debate is "is the core human condition (er, 'humanoid' condition) primarily about how people relate to the systems and rules they created, or about the close relationships and bonds that individuals create themselves?"

The college of mathematics is Blue/Green, with a shared love of fractals, the laws governing nature, etc... and a debate over whether mathematics is a "natural, platonic thing" that sentients merely discovered, or a tool they created.

And then...

...there's the communication department. Which is White/Black. Whose central debate is about whether the point of communication is to serve the public good and benefit society, or to manipulate the social fabric to benefit yourself.

I mentioned this last bit to Jim Babcock, who said "WHAT!? *Neither* of those is the point of communication!"

And I said "Yeah it sure would be better if the communications department was Blue, wouldn't it? Good thing this is just a fantasy world created for fun and not at all reflective of the real world."

Replies from: Measure
comment by Measure · 2021-03-31T03:26:54.471Z · LW(p) · GW(p)

What about Black/Green?

Replies from: Raemon
comment by Raemon · 2021-03-31T03:28:19.691Z · LW(p) · GW(p)

They’re the biology department, who disagree about whether the primary force underlying ecosystems is life/death/growth/decay.

comment by Raemon · 2020-07-18T02:27:07.133Z · LW(p) · GW(p)

After starting up PredictionBook, I've noticed I'm underconfident at 60% (I get 81% of my 60% predictions right) and underconfident at 70% (only get 44% right).

This is neat... but I'm not quite sure what I'm actually supposed to do. When I'm forming a prediction, often the exact number feels kinda arbitrary. I'm worried that if I try to take into account my under/overconfidence, I'll end up sort of gaming the system rather than learning anything. (i.e. look for excuses to shove my confidence into a bucket that is currently over/underconfident, rather than actually learning "when I feel X subjectively, that corresponds to X actual confidence."

Curious if folk have suggestions.

Replies from: Zvi, habryka4
comment by Zvi · 2020-07-18T11:12:08.526Z · LW(p) · GW(p)

Sounds like mostly low sample size?

Replies from: Raemon
comment by Raemon · 2020-07-18T12:49:20.817Z · LW(p) · GW(p)

Both of them have 15 predictions at this point. Could still be low sample size but seemed enough to be able to start adjusting.

(and, even if it turns out I am actually better calibrated than this and it goes away at larger samples, I'm still interested in the general answer to the question)
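
(A quick way to check whether ~15 predictions per bucket is enough to distinguish miscalibration from noise is to compute binomial tail probabilities under the assumption that I'm perfectly calibrated. A minimal sketch, with counts approximated from the percentages above, and treating the 70% bucket as the overconfident direction per habryka's note below:)

```python
from math import comb

def binom_tail(n: int, k: int, p: float, direction: str) -> float:
    """P(X >= k) ('ge') or P(X <= k) ('le') for X ~ Binomial(n, p)."""
    ks = range(k, n + 1) if direction == "ge" else range(0, k + 1)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in ks)

# 60% bucket: roughly 12 of 15 right (80%). If truly calibrated at 60%:
print(binom_tail(15, 12, 0.60, "ge"))   # ~0.09 -- could easily be noise
# 70% bucket: roughly 7 of 15 right (47%). If truly calibrated at 70%:
print(binom_tail(15, 7, 0.70, "le"))    # ~0.05 -- borderline evidence
```

So at this sample size the deviations are suggestive but not conclusive, which is consistent with the "low sample size" guess.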

comment by habryka (habryka4) · 2020-07-18T02:40:25.047Z · LW(p) · GW(p)

and underconfident at 70% (only get 44% right).

Presumably you mean "overconfident"?

Also, you dropped a parenthesis somewhere.

comment by Raemon · 2020-01-29T03:21:27.532Z · LW(p) · GW(p)

Someone recently mentioned that strong-upvotes have a particular effect in demon-thread-y comment sections, where if you see a Bad Comment, and that comment has 10 karma, you might think "aaah! the LessWrong consensus is that a Bad Comment is in fact Good! And this must be defended against."

When, in fact, 10 karma might be, like, one person strong-upvoting a thing.

This was a noteworthy point. I think strong upvotes usually "roughly do their job" in most cases, but once things turn "contested" they quickly turn into applause/boo lights in a political struggle. And it might be worth looking into ways to specifically curtail their influence in that case somehow.

Replies from: Wei_Dai, gworley, Wei_Dai, FactorialCode
comment by Wei Dai (Wei_Dai) · 2020-01-29T06:58:05.546Z · LW(p) · GW(p)

If I had a vote, I'd vote for getting rid of strong votes altogether. Here's another downside from my perspective: I actually don't like getting strong upvotes on my comments, because if that person didn't do a strong upvote, in most cases others would eventually (weakly) upvote that comment to around the same total (because people don't bother to upvote if they think the comment's karma is already what it deserves), and (at least for me) it feels more rewarding and more informative to know that several people upvoted a comment than to know that one person strongly upvoted a comment.

Also strong upvotes always make me think "who did that?", which is pointless because it's too hard to guess based on the available information but I can't help myself. (Votes that are 3 points also make me think this.) (I've complained about this before, but from the voter perspective as opposed to the commenter perspective.) I think I'd be happier if everyone just had either 1 or 2 point votes.

Replies from: Zack_M_Davis, Dagon, Wei_Dai
comment by Zack_M_Davis · 2020-01-30T06:10:55.640Z · LW(p) · GW(p)

Votes that are 3 points also make me think this.

The 3-point votes are an enormous entropy leak: only 13 users have a 3-point weak upvote (only 8-ish of which I'd call currently "active"), and probably comparatively few 3-point votes are strong-upvotes from users with 100–249 karma. (In contrast, about 400 accounts have 2-point weak upvotes, which I think of as "basically everyone.")
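
(As a back-of-the-envelope version of the "entropy leak" point, using the user counts above and ignoring the complication of strong-upvotes from 100–249 karma users:)

```python
import math

n_three_point_voters = 13    # users whose weak upvote is worth 3 points
n_two_point_voters = 400     # "basically everyone" else

# Seeing a 3-point vote narrows "who cast this?" from ~400 candidates to ~13,
# leaking roughly log2(400/13) bits about the voter's identity.
bits_leaked = math.log2(n_two_point_voters / n_three_point_voters)
print(f"{bits_leaked:.1f} bits")  # ~4.9
```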

Replies from: Wei_Dai, ChristianKl
comment by Wei Dai (Wei_Dai) · 2020-02-01T11:02:50.672Z · LW(p) · GW(p)

Gah, this makes me even more reluctant to vote. I didn't realize there were so few active 3-point members. (Didn't know about Issa Rice's karma list.) Seriously, there have already been multiple instances since you wrote this where I thought about voting and then stopped myself.

I'm not sure why the LW team hasn't made a change about this, but if they really want to keep the 3-point votes, maybe drop the threshold a bit so that there are at least several tens of users with 3-point votes?

Replies from: Zack_M_Davis, Raemon
comment by Zack_M_Davis · 2020-04-17T05:19:23.670Z · LW(p) · GW(p)

Looks like the weak 3-votes are gone now!

Replies from: habryka4, Raemon
comment by habryka (habryka4) · 2020-04-17T05:46:40.622Z · LW(p) · GW(p)

Yep, it didn't seem worth the cost of the chilling effects that were discussed in this thread. 

Replies from: lsusr
comment by lsusr · 2021-05-19T05:00:13.008Z · LW(p) · GW(p)

Yeah. Even if Wei_Dai is the only one chilled, that's still a huge fraction of the 3-point members.

comment by Raemon · 2020-04-17T08:15:36.429Z · LW(p) · GW(p)

I think we probably should have announced this with more fanfare but a series of distracting things happened and we forgot. Alas!

comment by Raemon · 2020-02-01T13:13:51.216Z · LW(p) · GW(p)

Yeah this discussion had me update that we should probably just drop 3-point smallvotes. (dropping the threshold would solve this problem, but not the problem I personally experience most, which is 'a lot of comments feel worth upvoting a tiny bit, but 3-karma feels excessive').

Yesterday the team discussed some weirder ideas, such as:

  • Just don't display karma for comments. Instead, just use it to silently sort things in the background. This might also make people more willing to downvote (since people often find it unpleasantly mean to downvote things below 0). It might also curtail some of the "voting as yay/boo". This is what hackernews currently does AFAIK. We might also copy hackernews's thing of "downvoted things start to fade away based on how downvoted they are."
    • On the flipside, sometimes it's actually good to see when things are highly upvoted (such as an important criticism or question)
  • Alternately: maybe karma doesn't get displayed until it has at least 3 votes (possibly in addition to the OP's auto-upvote?). This might help obfuscate who's been doing which upvoting. (I personally find it most noticeable when the karma score and voter-count is low)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-02-02T08:00:10.190Z · LW(p) · GW(p)

I prefer to see the karma, because "sometimes it’s actually good to see when things are highly upvoted (such as an important criticism or question)".

While we're on the topic of voting, when I look at my old LW1 comments I occasionally see 10-20 people vote up one of my comments. Now my comments often get voted up to 10-20 karma by 1-4 people (besides my own default upvote), but almost never receive more than 10 votes. This makes me worried that I'm reaching a lot fewer people with my content compared to those days. Is this true, or do people just vote less frequently now?

Replies from: Raemon
comment by Raemon · 2020-02-02T08:21:14.343Z · LW(p) · GW(p)

It is (alas) definitely the case that there are fewer site participants now than in Ye Old Golden Days, although the metrics have been trending upwards for the past year(ish). (sometime we'll do an updated analytics post to give a clearer picture of that)

Replies from: habryka4
comment by habryka (habryka4) · 2020-02-02T19:11:15.052Z · LW(p) · GW(p)

I do also think that in addition to that, people also just vote less. If I remember correctly, the number of people voting in a given week is about 60% of what it was at the peak, but the total number of votes per week is closer to 35%, or something like that. There are also a bunch fewer comments, so you likely get some quadratic effects that at least partially explain this.

comment by ChristianKl · 2020-01-30T08:40:45.938Z · LW(p) · GW(p)

Aren't there also people for whom 3 points is a strong upvote, whom you can't distinguish from those for whom 3 points is a weak upvote?

Replies from: Raemon
comment by Raemon · 2020-01-30T21:34:58.663Z · LW(p) · GW(p)

True, but I think you can usually tell what sort of things might have gotten strong-upvoted.

comment by Dagon · 2020-01-30T21:40:23.272Z · LW(p) · GW(p)

I'd get rid of strong upvotes as well, or perhaps make voting nonlinear, such that a weak/strong vote changes in value based on how many voters expressed an opinion (as it kind of does over time - strong votes only matter a small bit when there are 20+ votes cast, but if they're one of the first or only few to vote, they're HUGE). Or perhaps only display the ordinal value of posts and comments (relative to others shown on the page), with the actual vote values hidden in the same way we do number of voters.
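
(One concrete reading of the nonlinear-voting idea, with made-up constants and purely as an illustration: scale a vote's karma impact down as more votes accumulate, so early votes set the tone and late pile-ons matter less.)

```python
def vote_weight(base_weight: float, prior_vote_count: int,
                damping: float = 0.15) -> float:
    """Scale a vote's karma impact down as more votes accumulate on the item."""
    return base_weight / (1.0 + damping * prior_vote_count)

# A strong upvote worth 10 when cast first, vs. cast after 20 other votes:
print(vote_weight(10, 0))    # 10.0
print(vote_weight(10, 20))   # 2.5
```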

The vast majority of my comments get 5 or fewer voters. This is data in itself, of course, but it means that I react similarly to Wei when I see an outsized change.

comment by Wei Dai (Wei_Dai) · 2020-01-29T22:07:57.976Z · LW(p) · GW(p)

Someone strong-voted down my comment, from 11 to 7. (Normally I wouldn't mention this, but it seems relevant here. :)

Replies from: Raemon
comment by Raemon · 2020-01-30T00:37:14.449Z · LW(p) · GW(p)

In this case this was actually me removing a weak upvote, presumably at the same time someone else cast a regular weak downvote? (I had originally upvoted as a general reward for providing information about what users might care about, then realized I kinda didn't want to make it look like the object-level idea had tons of support. Which is relevant. In any case apologies for confusion. :p)

comment by Gordon Seidoh Worley (gworley) · 2020-01-29T19:36:53.562Z · LW(p) · GW(p)

A mechanism I really like is making certain kinds of votes scarce. I've appreciated it when it was a function on other sites I've used, as I think it improved things.

For example, Stack Overflow lets you spend karma in various ways. Two that come to mind:

  • downvotes cost karma (a downvote causing -5 karma costs the downvoter 2 karma)
  • you can pay karma to get attention (you can effectively super strong upvote your own posts, but you pay karma to do it)

Ways this or something similar might work on LW:

  • you get a budget of strong votes (say 1 per day) that you can save and spend how you like but you can't strong upvote everything
  • you get a budget of downvotes
  • strong votes cost karma
  • downvotes cost karma

I like this because it at least puts a break on excess use of votes in fights and otherwise makes these signals more valuable when they are used because they are not free like they are now.
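
(A toy sketch of the first option above, a daily budget of strong votes; the numbers and structure are placeholders rather than a proposal for actual LW mechanics.)

```python
from datetime import date

class StrongVoteBudget:
    """Accrue one strong vote per day; spending one requires a positive balance."""

    def __init__(self, per_day: int = 1):
        self.per_day = per_day
        self.balance = 0
        self.last_accrual = date.today()

    def _accrue(self) -> None:
        today = date.today()
        self.balance += (today - self.last_accrual).days * self.per_day
        self.last_accrual = today

    def try_strong_vote(self) -> bool:
        self._accrue()
        if self.balance >= 1:
            self.balance -= 1
            return True
        return False  # out of budget: fall back to a weak vote
```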

Replies from: Raemon
comment by Raemon · 2020-01-30T00:42:29.404Z · LW(p) · GW(p)

The idea I am currently most interested in is "You can add short anonymous 'reasons' to your upvote or downvote, and such reasons are required for strong upvotes." 

(I'm not actually sure what this would do to the overall system, but I think it'd give us a better window into what voting patterns are common before making more explicitly functional changes to the system, and meanwhile probably subtly discourage strong upvotes and downvotes by adding a bit of cognitive labor to them)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2020-01-30T01:08:39.859Z · LW(p) · GW(p)

Yeah, I think anything that adds a meaningful speedbump to any voting operation other than weak upvote is likely a step in the right direction of reshaping incentives.

comment by Wei Dai (Wei_Dai) · 2020-01-29T07:01:50.018Z · LW(p) · GW(p)

When, in fact, 10 karma might be, like, one person strong-upvoting a thing.

Oh, this is why I added a feature to my userscript [LW(p) · GW(p)] to always display the number of votes on a comment/post (without having to hover over the karma).

comment by FactorialCode · 2020-01-29T06:31:19.183Z · LW(p) · GW(p)

Has there been any discussion about showing the up/down vote counts? I know reddit used to do it a long time ago. I don't know why they stopped though.

comment by Raemon · 2019-12-05T03:05:47.142Z · LW(p) · GW(p)

After this weeks's stereotypically sad experience with the DMV....

(spent 3 hours waiting in lines, filling out forms, finding out I didn't bring the right documentation, going to get the right documentation, taking a test, finding out somewhere earlier in the process a computer glitched and I needed to go back and start over, waiting more, finally getting to the end only to learn I was also missing another piece of identification which rendered the whole process moot)

...and having just looked over a lot of 2018 posts [? · GW] investigating coordination failure... 

I find myself wondering if it's achievable to solve one particular way in which bureaucracy is terrible: the part where each node/person in the system only knows a small number of things, so you have to spend a lot of time rehashing things, and meanwhile can't figure out if your goal is actually achievable.

(While attempting to solve this problem, it's important to remember that at least some of the inconvenience of bureaucracy may be an active ingredient rather than inefficiency. But at least in this case it didn't seem so: drivers licenses aren't a conserved resource that the DMV wants to avoid handing out. If I had learned early on that I couldn't get my license last Monday it would have not only saved me time, but saved DMV employee hassle)

I think most of the time there's just no incentive to really fix this sort of thing (while you might have saved DMV employee hassle, you probably wouldn't save them time, since they still just work the same 8-hour shift regardless. And if you're the manager of a DMV you probably don't care too much about your employees having slightly nicer days.)

But, I dunno man, really!? Does it seem like at least Hot New Startups could be sold on software that, I dunno, tracks all the requirements of a bureaucratic process and tries to compile "will this work?" at start time?
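
(To make the "compile at start time" idea concrete, here's a toy sketch where a process's requirements are declared as data and checked up front, before anyone stands in a line. The requirement names are made up for illustration.)

```python
REQUIREMENTS = {
    "transfer_out_of_state_license": {
        "proof_of_identity",
        "proof_of_ca_residency",
        "out_of_state_license_or_driving_record",
        "passed_written_test",
    },
}

def missing_requirements(process: str, provided: set[str]) -> set[str]:
    """Return whatever is still missing; an empty set means 'this will work'."""
    return REQUIREMENTS[process] - provided

print(missing_requirements(
    "transfer_out_of_state_license",
    {"proof_of_identity", "proof_of_ca_residency"},
))  # tells you up front what to bring, before the three-hour line
```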

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-12-05T03:44:25.039Z · LW(p) · GW(p)

I can’t easily find it right now, but there was a comment thread a while back on Slate Star Codex where we concluded that, actually, the problem isn’t with DMVs.

The problem is with DMVs in California.

Any attempt to analyze the problem and/or solve it, must take into account this peculiarity!

EDIT: Found it. The situation's a bit more nuanced than my one-sentence summary above, but nonetheless it's clear that "DMVs are just terrible" does not generalize. Some are (seemingly more often in California); many are not.

Replies from: Raemon
comment by Raemon · 2019-12-05T05:43:46.211Z · LW(p) · GW(p)

I recall them being terrible in NY, although it's been awhile. 

I was also in a uniquely horrible situation because I moved from NY, lost my drivers license, couldn't easily get a new one from NY (cuz I don't live there anymore), and couldn't easily get one from CA because I couldn't prove I had one to transfer. (The result is that I think I need to take the driving test again, but it'll get scheduled out another couple months from now, or something.)

Which, I dunno I'd be surprised if any bureaucracy handled that particularly well, honestly. 

Replies from: adam_scholl, Pattern
comment by Adam Scholl (adam_scholl) · 2019-12-07T10:39:57.076Z · LW(p) · GW(p)

Fwiw, my experiences with DMVs in DC, Maryland, Virginia, New York, and Minnesota have all been about as terrible as my experiences in California.

comment by Pattern · 2019-12-05T22:55:05.580Z · LW(p) · GW(p)

Unless there was a bureaucracy that used witnesses.

comment by Raemon · 2019-10-20T20:22:11.874Z · LW(p) · GW(p)

I don't know of a principled way to resolve roomate-things like "what is the correct degree of cleanliness", and this feels sad.

You can't say "the correct amount is 'this much'", because, well, there isn't actually an objectively correct degree of cleanliness.

If you say 'eh, there are no universal truths, just preferences, and negotiation', you incentivize people to see a lot of interactions as transactional and adversarial that don't actually need to be. It also seems to involve exaggerating and/or downplaying one's own preferences.

The default outcome is something like "the person who is least comfortable with mess ends up doing most of the cleaning". If cleanliness were just an arbitrary preference this might actually be fine, especially if they really do dramatically care more about it. But usually it's more like "everyone cares at least a bit about being clean; one person just happens to care, say, 15% more and be quicker to act." So everyone else gets the benefits without paying the cost.

Replies from: mr-hire, Dagon, None, crabman
comment by Matt Goldenberg (mr-hire) · 2019-10-21T19:19:20.736Z · LW(p) · GW(p)
It also seems to involve exaggerating and/or downplaying one's own preferences.

There's a large portion of auction theory/mechanism design specifically aimed at avoiding this problem. The "you cut the cake, I choose the pieces" rule is a simple example. I've tried to implement some of these types of solutions in previous group houses and organizations; there's often a large initial hurdle to overcome, and some attempts just outright failed.

However, enough has succeeded that I think it's worth trying to more explicitly work game theoretically optimal decision procedures into communities and organizations, and worth familiarizing yourself with the existing tools out there for this sort of thing.
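
(For a concrete example of the kind of mechanism this points at: a sealed-bid "reverse Vickrey" chore auction. Each roommate states the minimum payment they'd accept to do the chore; the lowest bidder does it and is paid the second-lowest bid, funded by the others. Under that rule, honestly reporting your true price is a dominant strategy, which is exactly the "no exaggerating or downplaying" property. A minimal sketch with made-up numbers:)

```python
def chore_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Reverse second-price auction: the lowest bidder does the chore
    and is paid the second-lowest bid by the other roommates."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]  # second-lowest bid
    return winner, payment

# e.g. cleaning the bathroom this week (numbers are illustrative):
winner, payment = chore_auction({"alice": 8.0, "bob": 15.0, "carol": 12.0})
print(winner, payment)  # alice does it and receives 12.0, split among the others
```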

Replies from: Raemon
comment by Raemon · 2019-10-21T20:06:37.782Z · LW(p) · GW(p)

I'm interested in hearing more details about that.

comment by Dagon · 2019-10-21T17:19:42.422Z · LW(p) · GW(p)

There's no avoiding negotiation - the actual truth is that it's about preferences (both in what states are preferable and in how much effort to put into it). There is no objective authority you can appeal to. Get over that.

It may help, for longer-term relationships, to negotiate utility functions and happiness of each other, rather than (or as a precursor to) negotiating tasks and chore rotations.

comment by [deleted] · 2019-10-20T20:25:38.205Z · LW(p) · GW(p)

In my experience, trade can work well here. That is, you care more about cleanliness than your roommate, but they either care abstractly about your happiness or care about some concrete other thing you care about less, e.g. temperature of the apartment. So, you can propose a trade where they agree to be cleaner than they would be otherwise in exchange for you either being happier or doing something else that they care about.

Semi-serious connection to AI: It's kind of like merging your utility functions but it's only temporary.

Replies from: Raemon
comment by Raemon · 2019-10-20T20:35:03.818Z · LW(p) · GW(p)

The trade is sort of the default outcome among people who are, like, reasonably competent adults. But:

a) it still encourages (at least subtle) exaggeration or downplaying of your preferences (to get a better trade)

b) often, fastidiousness is correlated along many axes, so it's more like "the roommate with stronger preferences isn't getting any of their preferences met", and "the roommate who doesn't care much doesn't have much they really want other than to not get yelled at." (temperature preference might be one of a few things I expect to be uncorrelated with most other roommate disagreements)

comment by philip_b (crabman) · 2019-10-21T11:51:12.249Z · LW(p) · GW(p)

Talk to your roommates and make an agreement that each of you, in round-robin order, orders an apartment cleaning service every X weeks. This will alleviate part of the problem.

Replies from: Raemon
comment by Raemon · 2019-10-21T12:51:17.482Z · LW(p) · GW(p)

I don't currently have a problem with roommates (we solved it last time with some ad-hoc negotiation); I'm just more generally annoyed that there's not a good principled approach here that I can pitch as "fair".

(We do have apartment cleaners who come biweekly, whose cost is split evenly, but that also just doesn't address all the various small ways mess can add up on the timescale of hours or days. In the original motivating case it was about hairs getting in the sink drain, which I preferred to solve once a year with a bottle of Draino, and others preferred to solve much more frequently with smaller dollops of Draino. i.e. I consider it fine if a sink drains slightly slowly; others found it gross)

((Also, there's a much more general version of this which is what I was more interested in, which isn't just about the case of roommates in particular - it includes small ad-hoc situations such as some friends going camping and having different preferences about how much to clean up))

comment by Raemon · 2019-08-17T17:29:14.122Z · LW(p) · GW(p)

I think there's a preformal / formal / post-formal thing going on with Double Crux.

My impression is the CFAR folk who created the doublecrux framework see it less as a formal process you should stick to, and more as a general set of guiding principles. The formal process is mostly there to keep you oriented in the right direction.

But I see people (sometimes me) trying to use it as a rough set of guiding principles, and then easily slipping back into all the usual failure modes of not understanding each other, or not really taking seriously the possibility that they might be the one who's wrong.

Right now in some contexts I've come across as a bit anal about sticking to "Formal Doublecrux rules". Model share. Check for cruxes. Recurse until you find common cruxes. Look for experiments you can actually run to gain new evidence. Aim to converge on truth.

And it does clearly seem that these steps aren't always the best approach for a given conversation. But I often perceive what feel like basic errors, which would have been caught if you were following the formal rules.

So I'm currently, like, on a crusade to make sure the people around me that I end up in the most disagreements with are able to nail the Formal Doublecrux Framework, and once we're all roughly on that page I'll trust us to do a post-formal version of it where we trust each other to get the basics right, so we can relax about the rules.

Replies from: Ruby
comment by Ruby · 2019-08-17T20:18:30.114Z · LW(p) · GW(p)

I believe I'm one of the people who commented on your strong focus on using the Double Crux framework recently, but on reflection I think I can clarify my thoughts. I think generally there's a lot to be said for sticking to the framework as explicitly formulated until you learn how to do the thing reliably, and there's a big failure mode of thinking you can skip to the post-formal stage. I think you're right to push on this.

The complication is that I think the Double-Crux framework is still nascent (at least in common knowledge; I believe Eli has advanced models and instincts, but those are hard to communicate and absorb), which means I see us being in a phase of "figuring out how to do Double-Crux right" where the details of the framework are fuzzy and you might be missing pieces, parts of the algorithm, etc.

The danger is then that if you're too rigid in sticking to your current conception of what the formal framework of Double-Crux is, you might lack the flexibility to see where your theory is failing in practice and where you need to update what you think Double-Crux even should be.

I perceive something of a shift (could be wrong here) where, after some conversations, you started paying more attention to the necessity of model-sharing as a component of Double-Crux, maybe as a preliminary stage to finding cruxes, and this wasn't emphasized before. That's the kind of flexibility I think is needed to realize when the current formalization is insufficient and deviation from it is warranted as part of the experimentation/discovery/development/learning/testing/etc.

comment by Raemon · 2019-07-20T20:32:21.592Z · LW(p) · GW(p)

Counterfactual revolutions are basically good, revolutions are basically bad

(The political sort of revolution, not the scientific sort)

Replies from: Dagon
comment by Dagon · 2019-07-29T17:20:32.561Z · LW(p) · GW(p)

Are you intentionally using "counterfactual" here to distinguish from hypothetical? I'd say there are very few things for which hypothetical X isn't far better than actual X. Fundamentally, details matter far more than we think, most of the failure is in the details, and we routinely ignore details in far-mode thinking about what could be.

Code you haven't written yet is efficient, understandable, and bug-free. Systems of governance are free of corruption and petty dominance games. Your next team will have perfect management that understands the cost of impossible deadlines. Ok, even I can't believe the last one. But the others are pretty common false beliefs.

Replies from: Raemon
comment by Raemon · 2019-07-29T19:17:53.283Z · LW(p) · GW(p)

A more fleshed out version of my comment is:

It is very important that the threat of political revolutions exist – the fact that if the people get angry, they *will* overthrow rulers is the thing that keeps rulers in check. (This is relevant for countries as well as web forums and EA organizations)

But, actual revolutions are generally quite bad – they are very costly, and my impression is that a lot of the time A) they don't actually successfully build something better than the thing they destroyed, and B) the prospect of constant revolution makes it harder to build anything lasting.

So, it's important for the threat of revolution to be real (to the point where if things get real bad you actually revolt even though it's probably locally negative to do so). But, still, it's better for all parties to fix things such that the threat doesn't need to get carried out.

(I don't have that solid a grasp on the difference between hypothetical vs counterfactual. The important point here is that IF the political situation doesn't improve, THEN there will be a revolution)

Replies from: Dagon
comment by Dagon · 2019-07-30T14:56:28.868Z · LW(p) · GW(p)

Ah, I fully agree with this observation. I wonder how related it is to other cases where the actual underlying reality is less important than the perception of the possible. Stock markets may be another illustration of the concept - a given share in a company is, in the end, a claim on future cash flows until termination of the enterprise. But there's such distance and uncertainty in that, that many stocks trade more on short-term perceptions than on long-term values, and many participants forget what the underlying security actually means.

(counterfactual means things that are known not to happen, hypothetical is for things that could turn out to happen. What would you have done if X (when ~X actually occurred) is counterfactual.
What would you do if X (where X may or may not happen) is hypothetical. I asked because using "counterfactual" is somewhat specific and I wasn't sure if you were using it in a technical meaning. Hypothetical (or "possible") is the more common word colloquially. "possible revolutions are good, actual revolutions are bad" would have been less distracting on this front. Ok, sorry for long diversion from what could have been a thumbs-up react.)

comment by Raemon · 2019-07-15T23:19:09.895Z · LW(p) · GW(p)

Possible UI:

What if the RecentDiscussion section specifically focused on comments from old posts, rather than posts which currently appear in Latest Posts? This might be useful because you can already see updates to current discussions (since comments turn green when unread, and/or comment counts go up), but can't easily see older comments.

(You could also have multiple settings that handled this differently, but I think this might be a good default setting to ensure comments on old posts get a bit more visibility)

comment by Raemon · 2019-07-14T18:40:38.034Z · LW(p) · GW(p)

Weird thoughts on 'shortform'

1) I think most of the value of shortform is "getting started writing things that turn out to just be regular posts, in an environment that feels less effortful."

2) relatedly, "shortform" isn't quite the right phrase, since a lot of things end up being longer. "Casual" or "Off-the-cuff" might be better?

comment by Raemon · 2018-04-04T22:08:22.765Z · LW(p) · GW(p)

Failure Modes of Archipelago

(epistemic status: off the cuff, maybe rewriting this as a post later. Haven't discussed this with other site admins)

In writing Towards Public Archipelago, I was hoping to solve a couple problems:

  • I want authors to be able to have the sort of conversational space that they actually want, to incentivize them to participate more
  • I want LW's culture to generally encourage people to grow. This means setting standards that are higher than what-people-do-by-default. But, people will disagree about what standards are actually good. So, having an overarching system whereby people can try out and opt-into higher-level-standards that they hold each other to seems better than fighting over what the overall standards of the site should be.

But, I've noticed an obvious failure mode. For Public Archipelago to work as described, you need someone who is:

  • willing to enforce rules
  • writes regularly, in a way that lends itself towards being a locus of conversation.

(In non-online spaces, you have a different issue, where you need someone who runs some kind of physical in-person space, who is willing to enforce norms, and who is also capable of attracting people to their space.)

I have a particular set of norms I'd like to encourage, but most of the posts I write that would warrant enforcing norms are about meta-stuff-re-Less-Wrong. And in those posts, I'm speaking as site admin, which I think makes it important for me to instead be enforcing a somewhat different set of norms with a higher emphasis on fairness.

(i.e. if site admins start deleting your comments on a post about what sort of norms a site should have, that can easily lead to some real bad chilling effects. I think this can work if you're very specific about what sort of conversation you want to have, and make your reasons clear, but there's a high risk of it spilling into other kinds of damaged trust that you didn't intend)

My vague impression is that most of the people who write posts that would benefit from some kind of norm-enforcing are somewhat averse to having to be a norm-enforcer.

Some people are willing to do both, but they are rare.

So the naive implementation of Public Archipelago doesn't work that well.

Problematic Solution #1: Subreddits

Several people suggested subforums as an alternative to author-centric Islands.

First, I think LW is still too small for this to make sense – I've seen premature subreddits kill a forum, because they divide everyone's attention and make it harder to find the interesting conversation.

Second, I don't think this accomplishes the same thing. Subforums are generally about topics, and the idea I'm focusing on here is norms. In an AI or Math subforum, are you allowed to ask newbie questions, or is the focus on advanced discussion? Are you allowed to criticize people harshly? Are you expected to put in a bunch of work to answer a question yourself before you ask it?

These are questions that don't go away just because you formed a subforum. Reasonable people will disagree on them. You might have five people who all want to talk about math, none of whom agree on all three of those questions. Someone has to decide what to enforce.

I'm very worried that if we try to solve this problem with subreddits, people will run into unintentional naming collisions where someone sets up a space with a generic name like "Math", but with one implicit set of answers to norm-questions, and then someone else wants to talk about math with a different set of answers, and they get into a frustrating fight over which forum should have the simplest name (or force all subforums to have oddly specific names, which still might not address all the nuances someone meant to convey)

For this reason, I think managing norms by author(s), or by individual-post makes more sense.

Problematic Solution #2: Cooperation with Admins

If a high-karma user sets their moderation-policy, they have an option to enable "I'm happy for admins to help enforce my policy." This allows people to have norms but outsource the enforcing of them.

We haven't officially tried to do this yet, but in the past month I've thought about how I'd respond in some situations (both on LW and elsewhere) where a user clearly wanted a particular policy to be respected, but where I disagreed with that policy, and/or thought the user's policy wasn't consistent enough for me to enforce it. At the very least, I wouldn't feel good about it.

I could resolve this with a simple "the author is always right" meta-policy, where even if an author seems (to me) to be wanting unfair or inconsistent things, I decide that giving authors control over their space is more important than being fair. This does seem reasonable-ish to me, at least in principle. I think it's good, in broader society, to have police who enforce laws even when they disagree with them. I think it's good, say, to have a federal government or UN or UniGov that enforces the right of individual islands to enforce their laws, and maybe this includes helping them do so.

But I think, at the very least, this requires a conversation with the author in question. I can't enforce a policy I don't understand, and I think policies that may seem simple-to-the-author will turn out to have lots of edge-cases.

The issue is that having that conversation is a fairly non-trivial-inconvenience, which I think will prevent most instances of admin-assisted-author-norms from coming to fruition.

Variant Solution #2B: Cooperation with delegated lieutenants

Instead of relying on admins to support your policy with a vaguely-associated halo of "official site power structure", people could delegate moderation to specific people they trust to understand their policy (either on a per-post or author-wide system).

This involves a chain-of-trust. (The site admins have to make an initial decision about who gains the power to moderate their posts, and if this also includes delegating moderation rights the admins also need to trust the person to choose good people to enforce a policy). But I think that's probably fine?

Variant Solution #2C: Shared / Open Source Norms

Part of the problem with enforcing norms is that you need to first think a bunch about what norms are even good for and which ones you want. This is a hugely non-trivial inconvenience.

A thing that could help this a bunch is to have people who think a lot about norms posting more about their thought process, and which norms they'd like to see enforced and why. People who are then interested in having norms enforced on their post, and maybe even willing to enforce those norms themselves, could have a starting point to describe which ones they care about.

Replies from: clone of saturn, Wei_Dai, Benito
comment by clone of saturn · 2018-10-31T04:58:14.820Z · LW(p) · GW(p)

Idea: moderation by tags. People (meaning users themselves, or mods) could tag comments with things like #newbie-question, #harsh-criticism, #joke, etc., then readers could filter out what they don't want to see.

comment by Wei Dai (Wei_Dai) · 2018-04-18T06:39:10.740Z · LW(p) · GW(p)

Is it just me, or are people not commenting nearly as much on LW2 as they used to on LW1? I think one of the goals of LW2 is to encourage experimentation with different norms, but these experiments impose a cost on commenters (who have to learn the new norms both declaratively and procedurally) without giving a clear immediate benefit, which might reduce the net incentive to comment even further. So it seems like before these experiments can start, we need to figure out why people aren't commenting much, and do something about that.

Replies from: Raemon, ESRogs, Wei_Dai
comment by Raemon · 2018-04-18T17:23:40.119Z · LW(p) · GW(p)
I think one of the goals of LW2 is to encourage experimentation with different norms, but these experiments impose a cost on commenters (who have to learn the new norms both declaratively and procedurally) without giving a clear immediate benefit, which might reduce the net incentive to comment even further.

That is a good point, to at least keep in mind. I hadn't explicitly been weighing that cost. I do think I mostly endorse having more barriers to commenting (and fewer comments), but I may not be weighing things right.

Off the cuff thoughts:

Fractal Dunbar

Part of the reason I comment less now (or at least feel like I do? maybe should check the data) than I did 5 months ago is that the site is now large enough that it's not a practical goal to read everything and participate in every conversation without a) spending a lot of time, b) feeling lost/drowned out in the noise.

(In particular, I don't participate in SSC comments, despite it having way more people, due to the "drowned out in the noise" thing).

So, one of the intended goals underlying the "multiple norms" thingy is to have a sort of fractal structure, where sections of the site tend to cap out around Dunbar-number of people that can actually know each other and expect each other to stick to high-quality-discussion norms.

Already discouraging comments that don't fit

I know at least some people are not participating in LW because they don't like the comment culture (for various reasons outlined in the Public Archipelago post). So the cost of "the norms are causing some people to bounce off" is already being paid, and the question is whether the cost is higher or lower under the overlapping-norm-islands paradigm.

Replies from: Qiaochu_Yuan, Elo
comment by Qiaochu_Yuan · 2018-04-18T18:27:57.838Z · LW(p) · GW(p)

I mostly stopped commenting and I think it's because 1) the AI safety discussion got higher cost to follow (more discussion happening faster with a lot of context) and 2) the non-AI safety discussion seems to have mostly gotten worse. There seem to be more newer commenters writing things that aren't very good (some of whom are secretly Eugine or something?) and people seem to be arguing a lot instead of collaboratively trying to figure out what's true.

comment by Elo · 2018-04-18T21:03:08.178Z · LW(p) · GW(p)

If the site is too big it could be divided into sections. That would effectively make it smaller.

I believe the content so far is a bit different. Worth being curious about what changed.

Yes, we have fewer comments per day on LW2.

comment by ESRogs · 2018-04-18T06:58:15.952Z · LW(p) · GW(p)
we need to figure out why people aren't commenting much

My hypothesis would be that a) the ratio of posts/day to visitors/day is higher on LW2 than it was on LW1, and so b) the comments are just spread more thin.

Would be curious whether the site stats bear that out.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-04-18T07:09:49.212Z · LW(p) · GW(p)

See the graphs I posted on this month’s open thread for some relevant data.

Replies from: Raemon
comment by Raemon · 2018-04-18T17:29:56.580Z · LW(p) · GW(p)

To save everyone else some time, here's the relevant graph, basically showing that the number of comments has remained fairly constant for the past 4 months at least (while a different graph showed traffic as rising, suggesting ESRogs's hypothesis is true)

[Graph: comments over time]

Replies from: ESRogs
comment by ESRogs · 2018-04-18T19:16:58.262Z · LW(p) · GW(p)

This is great. Would love to see graphs going back further too, since Wei was asking about LW2 vs LW1, not just since earlier in the LW2 beta.

comment by Wei Dai (Wei_Dai) · 2018-05-17T00:46:56.043Z · LW(p) · GW(p)

Is it just me, or are people not commenting nearly as much on LW2 as they used to on LW1?

One hypothesis I thought of recently for this is that there are now more local rationalist communities where people can meet their social needs, which reduces their motivations for joining online discussions.

comment by Ben Pace (Benito) · 2018-04-04T23:53:48.992Z · LW(p) · GW(p)

Variant Solution #2D: Norm Groups (intersection of solutions 1 and 2B): There are groups of authors and lieutenants who enforce a single set of norms, you can join them, and they'll help enforce the norms on your posts too.

You can join the sunshine regiment, the strict-truth-team, the sufi-buddhist team, and you can start your own team, or you can just do what the current site does where you run your own norms on your post and there's no team.

This is like subreddits except more implicit - there's no page for 'all the posts under these norms', it's just a property of posts.

comment by Raemon · 2024-07-29T23:51:03.180Z · LW(p) · GW(p)

Is there a good LLM tool that just wraps GPT or Claude with a speech-to-text input and text-to-speech output? I'd like to experiment with having an always-on thinking assistant that I talk out loud to.

Replies from: kave, ted-sanders
comment by kave · 2024-07-30T00:15:30.184Z · LW(p) · GW(p)

ChatGPT does this, though seemingly not on the web interface (vs the phone app).

Replies from: Raemon
comment by Raemon · 2024-07-30T16:03:53.438Z · LW(p) · GW(p)

Wowzers how did I not know about this / why is it not on desktop?

comment by Ted Sanders (ted-sanders) · 2024-07-30T00:26:43.202Z · LW(p) · GW(p)

ChatGPT voice (transcribed, not native) is available on iOS and Android, and I think desktop as well.

comment by Raemon · 2024-05-08T23:30:57.968Z · LW(p) · GW(p)

I've recently updated on how useful it'd be to have small icons representing users. Previously some people were like "it'll help me scan the comment section for people!" and I was like "...yeah that seems true, but I'm scared of this site feeling like facebook, or worse, LinkedIn."

I'm not sure whether that was the right tradeoff, but I was recently sold after realizing how space-efficient it is for showing lots of commenters. Like, in Slack or Facebook, you'll see things like:

[image: rows of small user icons next to each thread]

This'd be really helpful, esp. in the Quick Takes and Popular Comments sections, where you can see which people you know/like have commented on a thing.

Replies from: Benito, lahwran, Raemon
comment by Ben Pace (Benito) · 2024-05-09T00:37:13.203Z · LW(p) · GW(p)

I am fairly strongly against having faces, which I think boot up a lot of social instincts that I disprefer on LessWrong. LessWrong is a space where what matters is which argument is true, not who you like / have relationships with. I think some other sort of unique icon could be good.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-05-09T01:48:36.174Z · LW(p) · GW(p)

Aren't text names basically similar in practice? At least for me, I find they trigger basically the same thing because I do actually associate names with people.

Maybe this wouldn't be true if I didn't know people very well (but in that case, icons also wouldn't matter).

(I overall dislike icons, but I don't have a principled reason for this.)

Replies from: Benito
comment by Ben Pace (Benito) · 2024-05-09T14:33:45.400Z · LW(p) · GW(p)

I miswrote a bit when I said "relationships". Yes, names and faces both trigger social recognition, but I meant to make the point that they operate in significantly different ways in the brain, and facial recognition is tuned to processing a lot of emotional and social cues that we aren't tuned to from text. I have tons of social associations with people's physical forms that are beyond simply their character.

(A language model helped me write this comment.)

comment by the gears to ascension (lahwran) · 2024-05-09T05:38:26.308Z · LW(p) · GW(p)

a ui on your user page where you get to pick a four letter shortening of your name and a color. the shortening is displayed as 

t g
t a

in a tiny color-of-your-choice box. when picking your name, each time you pick a hue and saturation in the color picker (use a standard one, don't build a color picker), it does a query (debounced - I hope you have a standard way to debounce in react elements) for other people on the site who have that initialism, and shows you their colors in a list, along with an indicator min(color_distance(you.color, them.color) for them in other_users).

the color distance indicator could be something like the one from here, which would need transliterating into javascript:
 

This formula has results that are very close to L*u*v* (with the modified lightness curve) and, more importantly, it is a more stable algorithm: it does not have a range of colours where it suddenly gives far from optimal results. The weights of the formula could be optimized further, but again, the selection of the closest colour is subjective. My goal was to find a reasonable compromise.

#include <math.h>   /* needed for sqrt() */

typedef struct {
   unsigned char r, g, b;
} RGB;

/* Low-cost colour-distance approximation from the quoted article:
   the red and blue weights vary with the mean red level. */
double ColourDistance(RGB e1, RGB e2)
{
  long rmean = ( (long)e1.r + (long)e2.r ) / 2;
  long r = (long)e1.r - (long)e2.r;
  long g = (long)e1.g - (long)e2.g;
  long b = (long)e1.b - (long)e2.b;
  return sqrt((((512+rmean)*r*r)>>8) + 4*g*g + (((767-rmean)*b*b)>>8));
}
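
A rough TypeScript transliteration of the above (an untested sketch; the integer >>8 shifts become division by 256, and the type/function names are mine):

type RGB = { r: number; g: number; b: number };

// Same low-cost colour-distance approximation as the C version above;
// channel values are assumed to be integers in 0..255.
function colourDistance(e1: RGB, e2: RGB): number {
  const rmean = (e1.r + e2.r) / 2;
  const r = e1.r - e2.r;
  const g = e1.g - e2.g;
  const b = e1.b - e2.b;
  return Math.sqrt(((512 + rmean) * r * r) / 256 + 4 * g * g + ((767 - rmean) * b * b) / 256);
}

// e.g. the "closest existing colour" indicator would then be something like:
// Math.min(...otherUsers.map(u => colourDistance(you.color, u.color)))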
comment by Raemon · 2024-05-09T17:07:50.872Z · LW(p) · GW(p)

Are the disagree reacts disagreeing with ‘small icons are good for this reason (enough to override other concerns)’, or with ‘I didn’t update previously’?

comment by Raemon · 2022-03-01T22:14:36.516Z · LW(p) · GW(p)

I... had a surprisingly good time reading Coinbase's Terms of Service update email?

We’ve recently updated our User Agreement. To continue using our services and take advantage of our upcoming feature launches, you’ll need to sign in to Coinbase and accept our latest terms.

You can read the entire agreement here. At a glance, here’s what this update means for you:

  • Easier to Understand: We’ve reorganized and modified our user agreement to make it more understandable and in line with our culture of clear communications.

  • Clarity on Dispute Resolution: We’ve revised our arbitration agreement to streamline the process for resolving problems you may experience.

  • Transparent Product Terms: We’ve consolidated the terms relating to many of our new products in a single appendix to make them easier to find.

  • Restrictions on Use of Services: We’ve created a standalone policy to make it easy to understand how you may use our services.

  • Response to Regulatory and Legal Changes: We’ve updated our tax language and some of our definitions to reflect the fact that crypto and crypto law are always on the move!

Please read the updated User Agreement and check out our help center article if you have any questions. 

What I like about this was that normally Terms of Service are just incredibly opaque to me, and when someone says they updated them I just shrug helplessly.

And here... well... to be clear I totally haven't checked if this is actually a good distillation of their changes, or how adversarial their terms of service are. But when I imagine a good, benevolent company trying to have a reasonable terms-of-service, and communicating clearly about it, it seems like actually a hard problem. (I'm currently looking at this through a lens that has general alignment parallels – communicating truthfully in an ontology your user can understand is a difficult problem)

And this email seemed like it was trying (to at least pretend to) do a good job with that. It's still not sufficient. (This email would be much better if, instead of saying "We changed something about X", it said more explicitly "we changed X to Y, here's a short summary of Y".) But it was an interesting signpost along the way to clear communication in a world of complicated interlocking systems.

Replies from: jimrandomh, matto
comment by jimrandomh · 2022-03-03T19:53:21.650Z · LW(p) · GW(p)

I think the reason you had a good time with this is because you don't actually care what your agreement with Coinbase is, because you don't have large amounts of money deposited with them. For people who do have large amounts of money at stake (myself not among them), this summary doesn't really tell you anything, and you probably need to put the old and new ToS side by side and read the whole thing line by line.

Replies from: Raemon
comment by Raemon · 2022-03-04T02:45:20.148Z · LW(p) · GW(p)

Yeah, sounds right.

It still gets me thinking about what the idealized version of this actually is. 

I guess game/software patch notes are the thing that seems closest-in-concept space that's actually useful. It'd be interesting to see a TOS that had github/googledoc-changelog capability. (It occurs to me LW could maybe have a TOS that lived in a post which would have that automatically)

comment by matto · 2022-03-02T01:23:47.228Z · LW(p) · GW(p)

One of their developers reached out to me recently to talk about working for them. I got strong good vibes about the quality of their engineering culture. For example, they are 100% remote and seem to be doing it well enough that employees are happy. They also organize a week of all-company PTO every quarter, which also speaks to the stability of their systems.

I associate good engineering culture with good writing, and this email is pretty good as far as terms and conditions go.

comment by Raemon · 2019-12-31T23:21:45.418Z · LW(p) · GW(p)

This is a response to Zack Davis in the comments on his recent post [LW(p) · GW(p)]. It was getting increasingly meta, and I wasn't very confident in my own take, so I'm replying over on my shortform.

OP is trying to convey a philosophical idea (which could be wrong, and whose wrongness would reflect poorly on me [LW · GW], although I think not very poorly, quantitatively speaking [LW · GW]) about "true maps as a Schelling point." (You can see a prelude to this in the last paragraph of a comment of mine from two months ago [LW(p) · GW(p)].)

I would have thought you'd prefer that I avoid trying to apply the philosophy idea to a detailed object-level special case (specifically, that of this website) in the comment section of a Frontpaged post (as opposed to a lower-visibility meta post or private conversation)?? (Maybe this is another illustration of Wei's point that [LW · GW] our traditional norms just end up encouraging hidden agendas.)

I'm legitimately unsure what the correct norm here is at this point. (I was recently also writing a post that made a more general point, but all my examples were from working on LessWrong, and I'm not sure about the best approach because, years from now, I do still want to be able to link to the post regardless of whether the object-examples are still salient)

One thing that I think clearly fits the currently implemented norms (albeit is higher effort), is to write two posts, one focusing on the crisp abstraction and one on the object-level politics. I think doing both in close proximity is both more honest and more helpful (since no one has to second guess if there's a hidden agenda, but you can also be putting forth what is hopefully a good abstraction that will apply in future situations)

I think, when followed, the above norm produces better writing, in part because forcing yourself to look for 1-2 examples other than the current object-level situation forces you to check if it's a real pattern. (Although another problem is that it may be that "all the examples" are from some variety of local politics, if those are the places one actually has clear enough knowledge.)

The main counterpoint is that this all makes writing a lot harder, and I'm not confident it's worth that extra effort. (And I think there are then downstream effects on what gets written and where people bother writing things up that are potentially bad)

Within the current normset (which I think is important to stick to for now, so that law can be predictable), another option is to go ahead and write the more opinionated post and leave it on Personal Blog (which does tend to end up getting seen by the people who actually have context on the local political situation)

Replies from: Dagon
comment by Dagon · 2020-01-01T18:29:58.255Z · LW(p) · GW(p)

Making this explicit would allow the important discussion of how widely applicable this model is. Things that are primarily about an extremely weird subgroup are interesting, but some participants tend to claim a more fundamental truth to their models than is really supported.

Replies from: Raemon
comment by Raemon · 2020-01-01T18:38:25.796Z · LW(p) · GW(p)

‘Make this explicit’ is a suggestion to writers, or to the LW mod team?

Replies from: Dagon
comment by Dagon · 2020-01-01T19:03:04.095Z · LW(p) · GW(p)

I think mostly to the writers. There's a bit too much editorial control being used if the site enforces some tag like "bay-area rationalist culture related". The hidden agenda norm (where authors seem to try to generalize without reference to the reasons they believe the model is useful) is something I'd like to see changed, but I think it needs to come from the authors and readers, not from the mods or site owners.

comment by Raemon · 2019-11-14T04:52:07.001Z · LW(p) · GW(p)

The 2018 Long Review (Notes and Current Plans)

I've spent much of the past couple years pushing features that help with the early stages of the intellectual-pipeline – things like shortform [? · GW], and giving authors moderation tools [LW · GW] that let them have the sort of conversation they want (which often is higher-context, and assuming a particular paradigm that the author is operating in)

Early stage ideas benefit from a brainstorming, playful, low-filter environment. I think an appropriate metaphor for those parts of LessWrong are "a couple people in a research department chatting about their ideas."

But longterm incentives and filters matter a lot as well. I've focused on the early stages because that's where the bottleneck seemed to be, but LessWrong is now at a place where I think we should start prioritizing the later stages of the pipeline – something more analogous to publishing papers, and eventually distilling them into textbooks. 

So, here's the current draft of a plan that I've been discussing with other LW Team members:

— The Long Review Format —

Many LessWrong posts are more conceptual than empirical, and it's hard to tell immediately how useful they are. I think they benefit a lot from hindsight. So, once each year, we could reflect as a group about the best posts of the previous year*, and which of them seem to have withstood the test of time as something useful, true, and (possibly) something that should enter the LessWrong longterm canon that people are expected to be familiar with.

Here's my current best guess for the format:

[note: I currently expect the entire process to be fully public, because it's not really possible for it to be completely private, and "half public" seems like the worst situation to me]

  • (1 week) Nomination
    • Users with 1000+ karma can nominate posts from 2018-or-earlier, describing how they found the post useful over the longterm.
  • (4 weeks) Review Phase
    • Authors of nominated posts can opt out of the rest of the review process if they want.
    • Posts with 3* nominations are announced as contenders. For a month, people are encouraged to look at them thoughtfully, writing comments (or posts) that discuss:
      • How has this post been useful?
      • How does it connect to the broader intellectual landscape?
      • Is this post epistemically sound?
      • How could it be improved?
      • What further work would you like to see people do with the content of this post?
    • Authors are encouraged to engage with critique. Ideally, updating the post in response to feedback, and/or discussing what sort of further work they'd be interested in seeing from others.
  • (1 Week) Voting
    • Users with 1000+ karma rank each post on...
      • 1-10 scale for "how important is the content"
      • 1-10 scale for "how epistemically virtuous is this post"
      • Yes/No/Veto on "should this post be added to LessWrong canon?"
    • (In the 1-10 scale, 6+ means "I'd be happy to see this included in the 'best of 2018' roundup", and 10 means "this is the best I can imagine")
    • "Yes, add this to canon" means that it hits some minimum threshold of epistemic virtue, as well as "this is something I think all LW readers should be at least passingly familiar with, or if they're not, the burden is on them to read up on it if it comes up in conversation."
  • Rewards
    • The votes will all be publicly available. A few different aggregate statistics will be available, including the raw average, and probably some attempt at a "karma-weighted average" (a sketch of one possible weighting is below).
    • The LW moderation team will put together a physical book, and online sequence, of the best posts, as well as the most valuable reviews of each post.
    • The LW team awards up to* $1000 in prizes to the best reviewers, and $3000 in prizes to the top post authors.
      • * this depends on whether we get reviews that seem to genuinely improve the epistemic landscape. Prizes for reviewers will be mostly moderator discretion (plus some inputs like "how much karma and engagement the review got")
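
(To make the "karma-weighted average" idea concrete, here's an illustrative sketch; the log weighting is purely an example, nothing has been decided:)

// Illustrative only: one possible karma-weighted average of post votes.
// The log2 weighting is an assumption for the example, not a decision.
function karmaWeightedAverage(votes: { score: number; voterKarma: number }[]): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const v of votes) {
    const weight = Math.log2(1 + Math.max(0, v.voterKarma));
    weightedSum += weight * v.score;
    totalWeight += weight;
  }
  return totalWeight === 0 ? 0 : weightedSum / totalWeight;
}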

And importantly:

Next Year

Even if we stuck to the above plan, I'd see it as more of an experiment than the definitive, longterm review mechanism. I expect we'd iterate a lot the following year.

But one thing I'm particularly interested in is how this builds over the longterm: next year (November 2020), while people would mostly be nominating posts from 2019, there should also be a process for submitting posts for "re-review": if there's been something like a replication crisis, or if a research direction that seemed promising now seems less so, that's something we can revisit.

Replies from: Raemon
comment by Raemon · 2019-11-14T05:12:29.018Z · LW(p) · GW(p)

Some major uncertainties

1. How much work will the community be motivated to do here? 

The best version of this involves quite a bit of effort from top authors and commenters, who are often busy. I think it gracefully scales down if no one has time for anything other than quick nominations or voting. 

...

2. What actually are good standards for LessWrong?

A lot of topics LessWrong focuses on are sort of pre-paradigmatic. Many posts suggest empirical experiments you might run (and I'm hoping for reviews that explore that question), but in many cases it's unclear what those experiments would even be, let alone the expense of running them.

Many posts are about how to carve up reality, and how to think. How do you judge how well you carve up reality or think? Well, ideally by seeing whether thinking that way turns out to be useful over the longterm. But, that's a very messy, confounded process that's hard to get good data on.

I think this will become more clear over longer timescales. One thing I hope to come out of this project is a bunch of people putting serious thought into the question, and hopefully getting a bit more consensus on it than we currently have.

I'm kind of interested in an outcome here where there's a bar you 

...

3. How to actually decide what goes in the book

I have a lot of uncertainty about how many nominations, reviews and votes we'd get.

I also have a lot of uncertainty about how much disagreement there'll be about which posts.

So, I'm pretty hesitant about committing in advance to a particular method of aggregation, or how many vetoes are necessary to prevent a post from making it into the book. I'd currently lean towards "the whole thing just involves a lot of moderation discretion, but the information is all public, and if there's a disconnect between 'the people's choice awards' and the 'moderators' choice awards', we can have a conversation about that."

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-11-15T18:43:33.588Z · LW(p) · GW(p)

I feel a lot of unease about the sort of binary "Is this good enough to be included in canon" measure.

I have an intuition that making a binary cut off point tied to prestige leads to one of two equilibria:

1. You choose a very objective metric (P<.05) and then you end up with Goodharting.

2. You choose a much more subjective process, and this leads either to the measure being more about prestige than actual goodness, making the process highly political (as much about who is and isn't being honored as about the actual thing it's trying to measure: Oscars, Nobel Prizes), or to a gradual lowering of standards as edge cases keep lowering the bar imperceptibly over time (grade inflation, 5-star rating systems).

Furthermore, I think a binary system is quite antithetical to how intellectual progress and innovation actually happen, which are much more about a gradual lowering of uncertainty and raising of usefulness, than a binary realization after a year that this thing is useful.

Replies from: Raemon
comment by Raemon · 2019-11-17T00:54:25.620Z · LW(p) · GW(p)

Fair concerns. A few more thoughts:

First, small/simple update: I think the actual period of time for "canonization" to be on the table should be more like 5 years. 

My intent was for canonization to be pretty rare, and in fact it's mostly there to sort of set a new, higher standard that everyone can aspire to, which most LW posts don't currently meet. (You could make this part of a different process than a yearly review, but I think it's fairly costly to get everyone's attention at once for a project like this, and it makes more sense to have each yearly review include both "what were the best things from the previous year" as well as even longer-term considerations)

Why have Canonization?

Furthermore, I think a binary system is quite antithetical to how intellectual progress and innovation actually happen, which are much more about a gradual lowering of uncertainty and raising of usefulness, than a binary realization after a year that this thing is useful.

I do think this is how a lot of progress works. But it's important that sooner or later, you have to update the textbooks that you generally expect students to read. 

I think the standards for the core LW Library probably aren't quite at the level of standards for textbooks (among other things, because most posts currently aren't written with exercises in mind, and otherwise not quite optimized as a comprehensive pedagogical experience)

Journal before Canon?

Originally, I included the possibility of "canonization" in this year's review round because, longterm, I'd expect it to make most sense for the review to include both, and because of the aforementioned "I wanted part of the point here to highlight a standard that we mostly haven't reached yet."

But two things occur to me as I write this out:

1. This particular year, most of the value is in experimentation. This whole process will be pretty new, and I'm not sure it'll work that well. That makes it perhaps not a good time to try out including the potential for "updating the textbooks" to be part of it.

2. It might be good to require two years for a post to have a shot at getting added to the top shelf in the LW Library, and for posts to first need to have previously been included

2. You choose a much more subjective process, and this leads to either the measure being more about prestige than actual goodness, making the process highly political, as much about who and who isn't being honored as about the actual thing its' trying to measure(Oscars, Nobel Prizes), or to gradual lowering of standards as edge cases keep lowering the bar imperceptibly over time (Grade inflation, 5 star rating systems).

I agree that these are both problems, and quite hard. My current sense is that it's still on net better to have a system like this than not. But I'll try to spend some time thinking about this more concretely.

comment by Raemon · 2019-09-14T01:10:54.369Z · LW(p) · GW(p)

I know I'll go to programmer hell for asking this... but... does anyone have a link to a github repo that tried really hard to use jQuery to build their entire website, investing effort into doing some sort of weird 'jQuery based components' thing for maintainable, scalable development?

People tell me this can't be done without turning into terrifying spaghetti code but I dunno I feel sort of like the guy in this xkcd and I just want to know for sure.

Replies from: jimrandomh, None
comment by jimrandomh · 2019-09-14T05:26:16.569Z · LW(p) · GW(p)

Note that this would be a very non-idiomatic way to use jQuery. More typical architectures don't do client-side templating; they do server-side rendering and client-side incremental mutation.
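
For example (a sketch with made-up endpoint and element ids, not from any real codebase), the more idiomatic jQuery pattern looks like:

import $ from "jquery"; // assumes the jquery (and @types/jquery) packages

// Server-side rendering + client-side incremental mutation: the server
// returns an already-rendered HTML fragment, and the client just splices
// it into the page, rather than templating JSON on the client.
function loadMoreComments(postId: number): void {
  $.get(`/posts/${postId}/comments?offset=20`, (html: string) => {
    $("#comment-list").append(html); // markup was rendered server-side
  });
}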

comment by [deleted] · 2019-09-14T06:54:34.277Z · LW(p) · GW(p)

There's jquery UI which maybe counts?

Replies from: Raemon
comment by Raemon · 2019-09-14T09:14:43.813Z · LW(p) · GW(p)

AFAICT jQuery UI is something like a component library, which is (possibly) a piece of what you might build this out of, but not the thing itself (which is to say, a well-functioning, maintainable, complete website).

Although I don't think it's really designed to do the sort of thing I'm talking about here.

comment by Raemon · 2019-08-12T03:11:36.019Z · LW(p) · GW(p)

I've lately been talking a lot [LW · GW] about doublecrux [LW · GW]. It seemed good to note some updates I'd also made over the past few months about debate.

For the past few years I've been sort of annoyed at debate because it seems like it doesn't lead people to change their opinions – instead, the entire debate framework seems more likely to prompt people to try to win, meanwhile treating arguments as soldiers and digging in their heels. I felt some frustration at the Hanson/Yudkowsky Foom Debate because huge amounts of digital ink were spilled, and neither party changed their mind much.

The counterpoint that's been pointed out to me lately is:

While debate may have that effect, it also produces lots of positive externalities. The process of Hanson and Yudkowsky spelling out their intuitions and arguments and preferred debate frameworks led to a lot of interesting facts and frameworks to chew on.

Replies from: Wei_Dai, bendini
comment by Wei Dai (Wei_Dai) · 2019-08-12T07:42:51.029Z · LW(p) · GW(p)

While debate may have that effect, it also produces lots of positive externalities. The process of Hanson and Yudkowsky spelling out their intuitions and arguments and preferred debate frameworks led to a lot of interesting facts and frameworks to chew on.

This became especially salient to me after reading AI Safety via Debate [LW · GW] (which I highly recommend, BTW). However it seems clear that fully adversarial debates do not work as well for humans as the authors hope it will work for AIs, and we really need further research to figure out what the optimal debate/discussion formats are under what circumstances.

Replies from: Raemon
comment by Raemon · 2019-08-12T20:30:58.755Z · LW(p) · GW(p)

I had read AI Safety via Debate but it felt like the version of it that connected to my OP here was... a few years down the line. I'm not sure which bits feel most salient here to you.

(It seems like in the future, when we've progressed beyond 'is it a dog or a cat', that AI debate could produce lots of considerations about a topic that I hadn't yet thought about, but this wasn't obvious to me from the original blogpost)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-08-12T22:23:20.801Z · LW(p) · GW(p)

I guess it was mostly just the basic idea that the point of a debate isn't necessarily for the debaters to reach agreement or to change each other's mind, but to produce unbiased information for a third party. (Which may be obvious to some but kind of got pushed out of my mind by the "trying to reach agreement" framing, until I read the Debate paper.) These quotes from the paper seem especially relevant:

Our hypothesis is that optimal play in this game produces honest, aligned information far beyond the capabilities of the human judge.

Despite the differences, we believe existing adversarial debates between humans are a useful analogy. Legal arguments in particular include domain experts explaining details of arguments to human judges or juries with no domain knowledge. A better understanding of when legal arguments succeed or fail to reach truth would inform the design of debates in an ML setting.

comment by Bendini (bendini) · 2019-08-15T14:20:42.476Z · LW(p) · GW(p)

The fact that such debates can go on for 500 pages without significant updates from either side points towards a failure to 1) systematically determine which arguments are strong and which ones are distractions 2) restrict the scope of the debate so opponents have to engage directly rather than shift to more comfortable ground.

There are also many simpler topics that could have meaningful progress made on them with current debating technology, but they just don't happen because most people have an aversion to debating.

comment by Raemon · 2019-01-20T22:09:41.975Z · LW(p) · GW(p)

My review of the CFAR venue:

There is a song that the LessWrong team listened to awhile back, and then formed strong opinions about what was probably happening during the song, if the song had been featured in a movie.

(If you'd like to form your own unspoiled interpretation of the song, you may want to do that now)

...

So, it seemed to us that the song felt like... you (either a single person or small group of people) had been working on an intellectual project.

And people were willing to give the project the benefit of the doubt, a bit, but then you fucked it up in some way, and now nobody believed in you and you were questioning all of your underlying models and maybe also your sanity and worth as a human. (vaguely A Beautiful Mind like)

And then you retreat to your house where you're pretty alone, and it's raining outside. And the house is the sort of house a moderately wealthy but sometimes-alone intellectual might live in, with a fair bit of space inside, white walls, whiteboards, small intellectual toys scattered about. A nice carpet.

And you're pacing around the house re-evaluating everything, and it's raining and the rain dapples on the windows and light scatters in on your old whiteboard diagrams that no longer seem to quite make sense.

And then you notice a small mental click of "maybe, if I applied this idea in this slightly different way, that might be promising". And you clear off a big chunk of whiteboard and start to work again, and then over a several day montage you start to figure out a new version of your idea that somehow works better this time and you get into a flow state and then you're just in this big beautiful empty house in the rain, rebuilding your idea, and this time it's pretty good and maybe will be the key to everything.

So anyway the LW team listened to this song a year+ ago and we now periodically listen to it and refer to it as "Building LessWrong in the Rain."

And, last week we had the LW Team Retreat, which was located at the new(ish) CFAR venue, and... a) it was raining all week, b) we basically all agreed that the interior of the CFAR venue looked almost exactly like how we had all imagined it. (Except, I at least had been imagining it a bit more like a Frank Lloyd Wright house, so that from the outside it looked more rectangular instead of a more traditional house/big-cottage or whatever)

...

The house interior is quite well designed. Every room had a purpose, and I'd be mulling about a given room thinking "gee, I sure wish I had X", and then I'd rotate 30º and then X would be, like, within arm's reach.

Most rooms had some manner of delightful thing, whether that be cute magnet puzzles or a weird glowing flower that looked like if I touched it it'd disappear and then I'd start glowing and either be able to jump higher or spit fireballs (I did not touch it)

Small complaints include:

a) the vacuum was quite big and heavy, which resulted in me switching to using a broom when I was cleaning up,

b) the refrigerator was like 500x more dangerous than any other fridge I ever encountered. Normally the amount of blood a refrigerator draws when I touch it gently is zero. The bottom of this fridge cut me 3 times, twice on my toes, once on my thumb while I was trying to clean it.

c) the first aid kit was in a black toolbox with the red "+" facing away from the visible area which made it a bit more counterintuitive to discover than most of the other things in the house.

comment by Raemon · 2018-05-04T00:20:16.584Z · LW(p) · GW(p)

Jargon Quest:

There's a kind of extensive double crux that I want a name for. It was inspired by Sarah's Naming the Nameless [LW · GW] post, where she mentions Double Cruxxing on aesthetics. You might call it "aesthetic double crux" but I think that might lead to miscommunication.

The idea is to resolve deep disagreements that underlie your entire framing (of the sort Duncan touches on in this post on Punch Buggy. That post is also a reasonable stab at an essay-form version of the thing I'm talking about).

There are a few things that are relevant here, not quite the same thing but clustered together:

  • what counts as evidence?
  • what counts as good?
  • what counts as beautiful?

Each of them suggest a different name (epistemic double crux, values double crux, aesthetic double crux). Maybe a good common name is "Deep Double Crux" or "Framing Double Crux"

The main point is that when you hunker down for a deep double crux, you're expecting to spend a long while, and to try to tease some real subtle shit.

I liked the phrase Aesthetic Double Crux, suggested in the Naming the Nameless post, since it pointed at entire ways of thinking that had multiple facets, but seemed to orient most around what felt elegant and right. But the people who followed up on that focused most on the art interpretation, so it seemed ripe for misinterpretation.

(In the course of writing this I think I basically decided I liked Deep Double Crux best, but decided to leave the post up as a demonstration of thought process.)

Replies from: Hazard
comment by Hazard · 2018-07-06T02:18:16.247Z · LW(p) · GW(p)
The main point is that when you hunker down for a deep double crux, you're expecting to spend a long while, and to try to tease some real subtle shit.

Yes! I feel like a lot of the time, the expectation of putting in such sustained will and attention is not there. Not to say that you should always be ready to hunker down at the drop of a hat. It seems like the default norm is closer to, "Giving up if it gets too hard."

comment by Raemon · 2018-04-26T06:15:58.515Z · LW(p) · GW(p)

We've been getting increasing amounts of spam, and occasionally dealing with Eugins. We have tools to delete them fairly easily, but sometimes they show up in large quantities and it's a bit annoying.

One possible solution is for everyone's first comment to need to be approved. A first stab at the implementation for this would be:

1) you post your comment as normal

2) it comes with a short tag saying "Thanks for joining less wrong! Since we get a fair bit of spam, first comments need to be approved by a moderator, which normally takes [N hours, whatever N turns out to be]. Sorry about that, we'll be with you soon!"

3) Comments awaiting approval show up on moderators' screens at the top of the page or something, with a one-click approval, so that it's very unlikely to be missed. I think this could get the wait time down pretty low even with a smallish number of moderators.

The main downside here is that people's first commenting experience wouldn't be as good. My intent with step #2 was to smooth it over as much as possible. (i.e. if it just said "comment awaiting approval", I think it'd be much worse).

I'm curious a) how bad people think this experience would be, and b) any other issues that seem relevant?

Replies from: Elo, clone of saturn, Elo
comment by Elo · 2018-04-28T22:54:30.707Z · LW(p) · GW(p)

If a comment is within a user's first 10 comments and includes a link, hold it for moderation.

Also make a safe list and anyone on the safe list is fine to post.

Replies from: Raemon
comment by Raemon · 2018-04-29T04:21:35.094Z · LW(p) · GW(p)

Hmm. Doing it only for links would def solve for spammers, which I think hits roughly 60% of the problem and is pretty good. Doesn't solve for Eugins. Not sure how to weigh that.

(Still interested in a literal answer to my question "how bad is it to have your first post need to be approved?" which I don't have much of an intuition for)

Replies from: Elo
comment by Elo · 2018-04-29T05:04:20.640Z · LW(p) · GW(p)

The other option is to hold comments from new accounts (or accounts with few posts) that contain certain keywords for moderation.

I.e. "plumber", a phone number etc.

I think if you specify "you have fewer than 10 comments and you posted a link", to let people know why their comment is being held for "a day" or so, it's not a big deal.

If it was not explained then it would be more frustrating.

If you capture all comments while an account is suspected spam, that would be okay.

comment by clone of saturn · 2018-04-26T07:55:02.622Z · LW(p) · GW(p)

As long as LW isn't high-profile enough to attract custom-written spambots, a possible easier alternative would be to combine a simple test to deter human spammers with an open proxy blacklist like SORBS. This strategy was very effective on a small forum I used to run.
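
(For reference, checking an IP against a DNS-based blocklist like SORBS is just a reversed-octet DNS lookup. A minimal sketch in Node-flavored TypeScript, with the zone name as an assumption:)

import { promises as dns } from "dns";

// Returns true if the IPv4 address is listed on the given DNSBL zone.
// A listed IP resolves (typically to 127.0.0.x); NXDOMAIN means not listed.
async function isListedOnDnsbl(ip: string, zone = "dnsbl.sorbs.net"): Promise<boolean> {
  const reversed = ip.split(".").reverse().join(".");
  try {
    await dns.resolve4(`${reversed}.${zone}`);
    return true;
  } catch {
    return false;
  }
}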

Replies from: Raemon
comment by Raemon · 2018-04-29T04:25:14.755Z · LW(p) · GW(p)

Using a list like SORBS sounds good. I actually think the test might be more annoying than waiting to get your post approved. (or, maybe less annoying, but causing more of a trivial inconvenience)

comment by Elo · 2018-04-28T22:55:56.444Z · LW(p) · GW(p)

Also some of them are businesses. Like plumbers. You could call them up and tell them that they are paying spammers to post in irrelevant places and they should ask for their money back.

comment by Raemon · 2018-03-24T08:02:38.272Z · LW(p) · GW(p)

Recently watched Finding Dory. Rambly thoughts and thorough spoilers to follow.

I watched this because of a review by Ozy a long while ago, noting that the movie is about a character with a mental disability that has major effects on her. And at various key moments in the movie, she finds herself lost and alone, her mental handicap playing a major role in her predicament. And in other movies they might have given her some way to... willpower through her disability, or somehow gain a superpower that makes the disability irrelevant or something.

And instead, she has to think, and figure out what skills she does have she can use to resolve her predicaments. And that this was beautiful/poignant from the standpoint of getting to see representation of characters with disabilities getting to be protagonists in a very real way.

I think the movie generally lived up to that review (with some caveats, see below). But I also found myself looking at it through the recent "Elephant" and "Mythic" lens. This is "Self-Identify-As-An-Elephant" and "Live In Mythic Mode" The Movie.

Dory has a "rider", maybe, but the rider can't form longterm memories, which makes it much less obvious as the seat-of-identity.

She seems to have the ability to form system-1 impressions that gradually accumulate into familiarity, useful intuitions that help her find her way around, and the ability to form friends after prolonged exposure to them. (My understanding is that this is not a realistic depiction of humans with short term memory loss, but since the movie is about a talking fish I'm willing to cut it some slack).

Her intuition-powers strain credibility a bit. I'm also willing to cut the movie some slack here from the standpoint of "in most Everett branches, Dory dies very young, and the interesting story worth telling was about the Dory who had just enough natural skill and luck to skate by early on, and then develop S1 associations useful enough to continue surviving."

(Aside: this movie has loads of places where Jesus Christ everyone should have just died, and for some reason this was the most stressful cinematic experience I've had in a living memory)

The thing I found most interesting about the movie is the scene where she's lost and alone and sad, and has to figure out what to do, and starts sharing her thought process out loud, making it legible to both herself and the audience for the first time.

comment by Raemon · 2018-02-04T22:51:22.588Z · LW(p) · GW(p)

Looking at how facebook automatically shows particular subcomments in a thread that have a lot of likes/reacts.

And then looking at how LW threads often become huge and unwieldy when there's 100 comments.

At first I was annoyed by that FB mechanic, but it may in fact be a necessary thing for sufficiently large threads, to make it easy to find the good parts.

comment by Raemon · 2018-01-14T22:24:35.733Z · LW(p) · GW(p)

Social failure I notice in myself: there'll be people at a party I don't know very well. My default assumption is "talk to them with 'feeler-outer-questions' to figure out what they are interested in talking about". (i.e. "what do you do?"/"what's your thing?"/"what have you been thinking about lately?"/"what's something you value about as much as your right pinky?"/"What excites you?").

But this usually produces awkward, stilted conversation. (of the above, I think "what have you been thinking about lately?" produces the best outcomes most of the time)

Recently, I was having that experience, and ended up talking to a nearby person I knew better about a shared interested (videogames in this case). And then the people nearby who I didn't know as well were able to join in the conversation and it felt much more natural.

Part of the problem is that if there is no person-I-know nearby, I have to take a random guess at a thing to talk about that the person is interested in talking about.

In this case, I had various social cues that suggested video games would be a plausible discussion prompt, but not enough context to guess which sorts of games were interesting, and not enough shared background knowledge to launch into a discussion of a game I thought was interesting without worrying a bunch about "is this too much / wrong sort of conversation for them."

Not sure what lesson to learn, but seemed noteworthy.

Replies from: Qiaochu_Yuan, gjm
comment by Qiaochu_Yuan · 2018-01-18T22:19:00.031Z · LW(p) · GW(p)

I really dislike the pinky question for strangers (I think it's fine for people you know, but not ideal). It's an awkward, stilted question and it's not surprising that it produces awkward, stilted responses. Aimed at a stranger it is very clearly "I am trying to start a reasonably interesting conversation" in a way that is not at all targeted to the stranger; that is, it doesn't require you to have seen and understood the stranger at all to say it, which they correctly perceive as alienating.

It works on a very specific kind of person, which is the kind of person who gets so nerdsniped wondering about the question that they ignore the social dynamic, which is sometimes what you want to filter for but presumably not always.

Replies from: Raemon
comment by Raemon · 2018-01-18T23:11:45.622Z · LW(p) · GW(p)

A noteworthy thing from the FB version of this thread was that people radically varied in which question seemed awkward to them. (My FB friends list is sharply distorted by 'the sort of friends Ray is likely to have', so I'm not sure how much conclusion can be drawn from this, but at the very least it seemed that typical minding abounds all around re: this class of question)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-01-18T23:49:12.958Z · LW(p) · GW(p)

Sure, I think all of these questions would be awkward addressed to various kinds of strangers, which is part of my point: it's important to do actual work to figure out what kind of question a person would like to be asked, if any.

Replies from: Raemon
comment by Raemon · 2018-01-19T01:02:15.650Z · LW(p) · GW(p)

So a reframing of this question is "what do you say/do/act to gain information about what a person would like to be asked without resorting to one of these sorts of questions?"

(With a side-note of "the hard mode for all of this is when you actually do kinda know the person, or have seen them around, so it is in fact 'legitimately' awkward that you haven't managed to get to know them well enough to know what sorts of conversations to have with them.")

comment by gjm · 2018-01-19T00:12:17.019Z · LW(p) · GW(p)

I have no idea how (a)typical this is, but I find it difficult to give quick answers for "global summary" type questions. What's the best book you've ever read? What do you spend most of your time doing? What are your two most important values? Etc. Those "feeler-outer questions" have that sort of quality to them, and if the people at those parties are like me I'm not surprised if conversation is sometimes slow to get started.

comment by Raemon · 2020-12-30T21:07:44.586Z · LW(p) · GW(p)

Man I wish the "Battle of the Sexes" game theory thing had a less distracting name. 

Replies from: Raemon
comment by Raemon · 2020-12-30T21:08:09.742Z · LW(p) · GW(p)

And "Bach or Stravisnky" somehow just feels even more confusing. Although maybe it's fine?

comment by Raemon · 2020-01-05T19:58:36.919Z · LW(p) · GW(p)

Have you changed your mind about frames or aesthetics? 

I'm working on the next post in the "Keep Beliefs Cruxy and Frames Explicit" sequence. I'm not sure if it should be one or two posts. I'm also... noticing that honestly I'm not actually sure what actions to prescribe, and that this is more like a hypothesis and outlining of problems/desiderata.

Two plausible post titles

  • Doublecruxing on Frame
  • Keeping Frames Explicit

(I'm currently unsure whether aesthetics are best thought of as a type of frame, or a separate thing)

Honestly, I'm not sure whether I've successfully doublecruxed on a frame (i.e. reached convergence, or had both participants change their mind significantly, or come to any kind of shared understanding). I've definitely singlecruxed on a frame (by which I mean, I have outlined the beliefs I'd have to change in order for my frame to change, and thought about hypothetical experiments you might run to check the beliefs)

So I think I'm switching to "think out loud about this for awhile before trying to write the final posts". I'm curious if other people have perspectives on it. 

Repeating the opening question: Have you changed your mind about frames or aesthetics? Have you engaged in major disagreements involving substantially different outlooks (ways of seeing, ways of thinking, or expectations of conversational context/goals) that went well?

Replies from: mr-hire, Hysteria
comment by Matt Goldenberg (mr-hire) · 2020-01-05T23:28:06.161Z · LW(p) · GW(p)

It used to be really hard for me to see things as ugly, but I was able to get that skill.

Prior to that, it used to be really hard for me to judge people, but I was also able to learn that skill.

Replies from: Raemon
comment by Raemon · 2020-01-06T00:29:24.342Z · LW(p) · GW(p)

What changed?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-01-06T17:19:24.902Z · LW(p) · GW(p)

Mostly a concerted effort on my part to find people who were good at these things, talk to them, and inhabit their positions with empathy. A lot of it was finding my own aesthetic analogies for what they were doing, then checking in with them to see ways the analogy didn't work, and tweaking as needed.

comment by Hysteria · 2020-01-06T21:23:05.544Z · LW(p) · GW(p)

I just came here to write a shortform on aesthetics, but I might as well write some random thoughts here and reach you in particular.

I believe that "Aesthetics Maketh the Man". You can judge much about one's character simply by what they find beautiful or ugly, and you can judge their values and morals simply by how solid their aesthetics are.

Perhaps it is indeed easier or better to quantify "aesthetics" as the array of morals, values, sense of beauty, and empirical metis that comprise a living being's personality. Things that are intrinsically part of how we interact with the world and society at large.


But to actually answer your question: I have given thought to aesthetics from a rational(?) POV that I hadn't bothered with before, and no, I haven't ever gotten into a "major disagreement" that went anywhere near "well". People can be very irrational towards things their own aesthetic sense considers "ugly", even (or especially) within the rationalist community.

comment by Raemon · 2019-11-07T02:40:22.387Z · LW(p) · GW(p)

Ben Kuhn's Why and How to Start a For Profit Company Serving Emerging Markets [EA · GW] is, in addition to being generally interesting, sort of cute for being two of the canonical Michael Vassar Questions rolled into one, while being nicely operationalized and clear.

("Move somewhere far away and stay their long enough to learn that social reality is arbitrary", and "start a small business and/or startup to a bunch about how pieces of the world fit together" being the two that come easiest to mind)

comment by Raemon · 2019-09-03T05:49:50.402Z · LW(p) · GW(p)

random anecdote in time management and life quality. Doesn't exactly have obvious life lesson

I use Freedom.to to block lots of sites (I block LessWrong during the morning hours of each day so that I can focus on coding LessWrong :P).

Once a upon a time, I blocked the gaming news website, Rock/Paper/Shotgun, because it was too distracting.

But a little while later I found that there was a necessary niche in my life of "thing that I haven't blocked on Freedom, that is sort of mindlessly entertaining enough that I can peruse it for awhile when I'm brain dead, but not so bottomlessly entertaining that it'll consume too much of my time." If I didn't have such a site, I would find one.

If I didn't have a standardized one, I would find one at random, and it'd be a bit of a crap shoot whether it was 5 minutes of eyes-glazed-skimming, or an hour of tabsplosioning.

The site I ended up settling on as my default blah-time was Kotaku, which was... basically RockPaperShotgun but worse. Gaming news that was sort of pointless and devoid of personality but juuuust over the threshold of "interesting enough that I actually wanted to read it."

Which I thought about a bit and then decided I reflectively endorsed.

Meanwhile, while I could access RockPaperShotgun in the evenings... I didn't, because, well, it wasn't that important and I was trying to cut back on videogames anyway.

Two years later... I dunno I found myself sort of thinking "you know, I wish I was passively gaining more interesting videogame news."

And... I unblocked RockPaperShotgun.

And I was surprised to notice

a) wow, most the content was actually interesting, tailored for the sorts of games I like, and written in a more entertaining voice

b) there were only a couple articles per day, whereas Kotaku used a vaguely facebook-like algorithm of "most of the articles are crap, but every few there's a gem", which sort of gets me into a skinner-box that (I realized, in retrospect) probably had me reading _more_ than RPS did.

comment by Raemon · 2019-02-06T00:01:14.968Z · LW(p) · GW(p)

I frequently feel a desire to do "medium" upvotes. Specifically, I want tiers of upvote for:

1) minor social approval (equivalent to smiling at a person when they do something I think should receive _some_ signal of reward, in particular if I think they were following a nice incentive gradient, but where I don't think the thing they were doing was especially important)

2) strong social reward (where I want someone to be concretely rewarded for having done something hard, but I still don't think it's actually so important that it should rank highly in other people's attention)

3) "this is worth your time and attention", where the signal is more about other people than the post/comment author.

(It's possible you could split these into two entirely different schemas, but I think that'd result in unnecessary UI complexity without commensurate benefit)

Replies from: MakoYass, jason-gross
comment by mako yass (MakoYass) · 2019-07-30T05:10:11.148Z · LW(p) · GW(p)
strong social reward (where I want someone to be concretely rewarded for having done something hard, but I still don't think it's actually so important that it should rank highly in other people's attention)

If you don't want to make it more prominent in other peoples' attention, it would be a misuse of upvoting. Sounds like you just want reactions.

Replies from: Raemon
comment by Raemon · 2019-07-30T05:26:12.052Z · LW(p) · GW(p)

I do think a site equilibrium where "upvotes are *only* used to promote things to other people's attention, reactions are used to give positive reinforcement" would be pretty good and better than what we have now.

It's not quite right, because I also want people's longterm site attention-allocational power to be able to take into account them executing good algorithms, in addition to actually outputting good content.

(Also, I'd prefer if people weighed in on Giant Social Drama fights via reactions rather than voting, but I'm not sure it's possible to stop that. i.e 'ah my opponent is so WRONG I want them to get less attention' or vice versa)

Replies from: MakoYass
comment by mako yass (MakoYass) · 2019-07-30T21:48:30.422Z · LW(p) · GW(p)

Maybe a "give eigentrust" option distinct from voting, or, heck decouple those two actions completely.

comment by Jason Gross (jason-gross) · 2019-07-22T05:25:30.443Z · LW(p) · GW(p)

I'm wanting to label these as (1) 😃 (smile); (2) 🍪 (cookie); (3) 🌟 (star)

Dunno if this is useful at all

comment by Raemon · 2018-06-30T21:22:14.208Z · LW(p) · GW(p)

I have a song gestating, about the "Dream Time" concept (in the Robin Hanson sense).

In the aboriginal mythology, the dreamtime is the time-before-time, when heroes walked the earth, doing great deeds with supernatural powers that allowed them to shape the world.

In the Robin Hanson sense, the dreamtime is... well, still that, but *from the perspective* of the far future.

For most of history, people lived on subsistence. They didn't have much ability to think very far ahead, or to deliberately steer their future much. We live right now in a time of abundance, where our capacity to produce significantly outstrips our drive to reproduce, and this gives us (among other things) time and slack to think and plan and do things other than what is the bare minimum for survival.

The song I have in mind is in the early stages before a few pieces click together. (Songwriting is a form of puzzle-solving, for those that don't know)

Constraints of the puzzle so far:

1. I want it to be more of a summer solstice song than winter solstice one, of the sort that you can easily sing while gathered around a campfire, _without_ having lyrics available.

2. Due to the above (and because of which non-lyric-requiring songs I've *already* written), the verses have each line in two parts. The (A) part of each line is new each time. The (B) sections are consistent, such that even if you're hearing the song for the first time you can sing along with at least part of the verses (in addition to the chorus)

(#1 and #2 are the core requirements, and if I ended up having to sacrifice the dreamtime-concept for the song, I would do so)

3. Summer Solstice is focused on the present moment (contrasted with winter solstice, which is very distant-past and far-future oriented). The dreamtime concept came to me as something that could be framed from within the far-future perspective, while still having the bulk of the song focusing on the present moment.

4. Aesthetically, my current thought is for the song to be kind of a mirror-image of Bitter Wind Blown:

– the singer is a child, asking her mother to tell stories of the Before Time
– Structurally, fairly similar to Bitter Wind Blown, except the "Little one, little one" equivalent is a bit more complex
– where Bitter Wind Blown is, well, bittersweet, this one dwells more on the positive, and when looking at the negative, does so through a lens of acceptance (not in the sense of "this is okay", but "this is what is, and was.")

However:

As I reflect on what the platonic ideal of the song wants to be, I'm noticing a bit of tension between a few directions. Here we get to the "how do you slide the pieces around and solve the puzzle?" bit (this is at the higher level, before you start _also_ sliding around individual lyrics)

a. The theme of presentness, being mindful of the here and now

b. The subtheme of abundance – right now is the dreamtime because our capacity for production gives us the affordance to thrive, and to think

c. The subtheme of power/heroism – the dreamtime is when heroes walked the earth and shaped the world that will one day become "the normal world."

(a) feels a bit in tension with (b) and (c). I think it's possible to blend them but not sure it'll quite work out.

That's what I got so far. Interested in thoughts.

Replies from: DanielFilan
comment by DanielFilan · 2019-12-31T23:52:55.347Z · LW(p) · GW(p)

I like the idea of this song existing. Any progress?

Replies from: Raemon
comment by Raemon · 2020-01-01T00:13:13.135Z · LW(p) · GW(p)

I think a major issue I ran into is that it felt dishonest (or, like, appropriative?) to write a song about "The Dreamtime" that wasn't Hansonianly cynical, and... I dunno I'm just not Hansonianly cynical.

The central metaphor of "child asking mother for song" also just felt sort of weird because the implied Em-World people just... probably wouldn't do that sort of thing. 

Maybe that's fine? Dunno.

Replies from: Raemon
comment by Raemon · 2020-01-01T00:13:55.454Z · LW(p) · GW(p)

It occurs to me that if one was to write the song anyway, it could either be set in a Billions/Trillions Year stable state, or it could be set just as the universe winds down, while Fades at Last the Last Lit Sun.

Also, another major issue I ran into was "well, no one commented on it and I lost motivation." :P Although maybe that part can be fixed now.

comment by Raemon · 2018-01-09T02:28:36.975Z · LW(p) · GW(p)

Kinda weird meta note: I find myself judging both my posts, and other people's, via how many comments they get. i.e. how much are people engaged. (Not aiming to maximize comments but for some "reasonable number").

However, on a post of mine, my own comments clearly don't count. And on another person's post, if there's a lot of comments but most of them are the original author's, it feels like some kind of red flag. Like they think their post is more important than other people do. (I'm not sure if I endorse this perception).

So, I have a weird sense of wanting to see a "comment count minus author's comments", for slightly different reasons. I don't think this is actually a good feature to have, but the fact that I want it feels like weird evidence of something.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-05-14T18:56:06.323Z · LW(p) · GW(p)

However, on a post of mine, my own comments clearly don’t count. And on another person’s post, if there’s a lot of comments but most of them are the original authors, it feels like some kind of red flag. Like they think their post is more important than other people do. (I’m not sure if I endorse this perception).

There is definitely value to this heuristic, but note that, e.g., I have commented on my own posts with nitpicky counterpoints to my own claims, or elaborations/digressions that are related but don’t really fit into the structure/flow of the post, or updates, etc. It seems like we shouldn’t discourage such things—do you agree?

Replies from: Raemon
comment by Raemon · 2018-05-15T00:45:06.667Z · LW(p) · GW(p)

So, this isn't an idea I still really endorse (partly because it doesn't seem worth the complexity cost, partly because I just don't think it was that important in the scheme of things), but I said this as someone who _also_ often makes additional comments on my own posts to expand ideas. And the point wasn't to discourage that at all – just to also showcase which posts are generating discussion _beyond_ the author fleshing out their own ideas.

comment by Raemon · 2018-01-09T01:31:49.558Z · LW(p) · GW(p)

(Empirically, I post my meta thoughts here instead of in Meta. I think this might actually be fine, but am not sure)

comment by Raemon · 2024-03-17T18:14:01.067Z · LW(p) · GW(p)

My goal right now is to find (toy, concrete) exercises that somehow reflect the real world complexity of making longterm plans, aiming to achieve unclear goals in a confusing world.

Things that seem important to include in the exercise:

  • "figuring out what the goal actually is"
  • "you have lots of background knowledge and ideas of where to look next, but the explosion of places you could possibly look is kinda overwhelming"
  • managing various resources along the way, but it's not obvious what those resources are.
  • you get data from the world (but, not necessarily the most important data)
  • it's not obvious how long to spend gathering information, or refining your plan
  • it's not obvious whether your current strategy is anywhere close to the best one

The exercise should be short (ideally like a couple hours, but maybe a day or hypothetically a week), but somehow metaphorically reflect all those things.

Previously I asked about strategy/resource management games you could try to beat on your first try. One thing I bump into is that the initial turns are often fairly constrained in your choices, and only later does it get complex (which is maybe fine, but, for my real world plans, the nigh-infinite possibilities seem like the immediate problem?)

Replies from: D0TheMath, romeostevensit, mr-hire, pktechgirl, lcmgcd, lcmgcd
comment by Garrett Baker (D0TheMath) · 2024-03-17T19:15:59.736Z · LW(p) · GW(p)

This sounds like my experience playing the Enigmatica 2: Expert mod in minecraft without looking at the internal tech tree, or any documentation. You could probably speedrun the relevant tech-tree in <1 week (if you want that to be your goal), but this would be basically impossible if you go in blind as the exercise you're describing suggests.

comment by romeostevensit · 2024-04-19T22:44:08.838Z · LW(p) · GW(p)

CRPGs with a lot of open world dynamics might work, where the goal is for the person to identify the most important experiments to run in a limited time window in order to min-max certain stats.

comment by Matt Goldenberg (mr-hire) · 2024-03-17T21:29:29.515Z · LW(p) · GW(p)

Why not just have people spend some time working with their existing goals?

Replies from: Raemon
comment by Raemon · 2024-03-17T23:41:45.542Z · LW(p) · GW(p)

My general plan is to mix "work on your real goals" (which takes months to find out if you were on the right track) and "work on faster paced things that convey whether you've gained some kind of useful skill you didn't have before".

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2024-03-18T05:41:53.417Z · LW(p) · GW(p)

I think most people have short term, medium term, and long term goals. E.g., right about now many people probably have the goal of doing their taxes, and depending on their situation those may match many of your desiderata.

I used to put a lot of effort into creating exercises, simulations, and scenarios that matched up with various skills I was teaching, but ultimately found it much more effective to just say "look at your todo list, and find something that causes overwhelm". Deliberate practice consists of finding a thing that causes overwhelm, seeing how to overcome that overwhelm, working for two minutes, then finding another task that induces overwhelm. I also use past examples, imagining in detail what it would have been like to act in this different way.

You're operating in a slightly different domain, but still I imagine people have plenty of problems and sub-problems in either their life or research where the things you're teaching apply, and you can scope them small enough to get tighter feedback loops.

comment by Elizabeth (pktechgirl) · 2024-03-17T20:02:26.731Z · LW(p) · GW(p)

They are probably too long, but at one point I ran this exercise with Master of Orion and Stardew Valley.

comment by lemonhope (lcmgcd) · 2024-03-22T08:09:21.330Z · LW(p) · GW(p)

Two hours to build a paper tower as high as you can outside in the wind

comment by lemonhope (lcmgcd) · 2024-03-22T08:05:39.406Z · LW(p) · GW(p)

Looking forward to see what exercises you land on!

comment by Raemon · 2022-08-25T07:53:49.118Z · LW(p) · GW(p)

Okay, I'm adding the show "Primal" to my Expanding Moral Cinematic Universe headcanon – movies or shows that feature characters in a harsh, bloody world who inch their little corner of the universe forward as a place where friendship and cooperation can form. Less a sea of blood and violence and mindless replication.

So far I have three pieces in the canon:

1. Primal

2. The Fox and the Hound

3. Princess Mononoke

in roughly ascending order of "how much latent spirit of cooperation exists in the background for the protagonists."

("Walking Dead" is sort of in the same genre but is more about the moral circle crumbling and characters struggling to hang onto it)

In The Fox and the Hound, the title characters each have a teeny ingroup of 2-3 people, and enough safety net that they can begin a friendship purely of play. There is death, predation, tribal conflict. The characters face an uphill battle to maintain their friendship and connection. But they are not alone. Their friendship is built on sedimentary layers of empathy, and trade. The protagonists' allies warn them "foxes and hounds can't be friends", but notably, those allies *know what friendship is and why it's desirable*.

Princess Mononoke's world is one of medium-scale tribes, each of which has complex coordination going on within it, and many of whom have some ability to trade with other tribes, a sense of honor and reputation. Awakening consciousness to the fact that

Primal is about a cave man and a t-rex (named "Spear" and "Fang") who become allies, and then friends.

The Primal world *includes* tribes who coordinate within each other, but they are remote pockets in a brutish, dino-eat-dino world. The protagonists climb out of a background-state of bloodshed, isolation, and meager survival.

The characters first become allies by necessity. They are bad at being allies. But they learn how to be good allies, and as they come to trust each other they learn to be friends.

Their friendship... isn't _completely_ built out of nothing. The cave man was raised among a small tribe, and in slightly different circumstances his story might have been more similar to the Fox and the Hound. He has some sense of what it can mean to have a safety net, and love, and companionship. But it is so precious little – teeny scraps and glimpses of what it can mean to have connection. The whispers of familyship in this world are so close to being snuffed out at any given moment.

The show is very slow and meditative. There is not much going on in this world. Sleep. Hunt. Spend hours walking to get places or watching silently as you prepare to strike, alone in the wilderness. There is only the next kill, and avoiding being someone's next kill.

I'm only a few episodes in and not sure where this is going, but I doubt that Spear and Fang will have much luxury of trusting or cooperating with almost anything else they meet. Their circle of concern only gets to grow by the tiniest inches to include each other.

But in that world, two creatures reach across species lines, and kindle the beginnings of friendship.

comment by Raemon · 2022-08-25T07:53:32.476Z · LW(p) · GW(p)

Just rewatched Princess Mononoke, and... I'm finding that this is grounded in the same sort of morality as The Fox And The Hound, but dialed up in complexity a bunch?

The Fox and The Hound is about a moral landscape where you have your ingroup, your ingroup sometimes kills people in the outgroup, and that's just how life is. But occasionally you can make friends with a stranger, and you kinda bring them into your tribe.

Welcoming someone into your home doesn't necessarily mean you're going to take care of them forever, nor go to bat for them as if they were literally your family.

But in this ingroup-eat-outgroup world, there are occasional glimmers of heroism. People make friends across tribal barriers, and they try to make those friendships work despite the difficulties.

It is possible for a fox to remember his friendship with a hound, and decide that it's worth fighting a bear to save his friend. This is a simple enough moral decision that it is just at the edge of a literal-fox's ability to glimpse it, and decide to be a hero.

And it is these little flickers of heroism that slowly push the moral landscape from an ingroup-eat-outgroup world, to a world where people's circle of concern is broader, and more complex relations between tribes can evolve.

...

Princess Mononoke is a world where different tribes of humans and spirits are trying to make a home for themselves. Sometimes, other tribes (of humans, or spirits) want resources in your territory and try to fight you for it.

There is heroism within a tribe, as people struggle to survive and thrive. Ingroups grow – Lady Eboshi makes the choice to rescue lepers and whores. She sees potential in them, and she forges them into Iron Town, aiming to make a better life for them than they had before.

But between groups lie zero-sum-games. To survive, they must cut down the forest, and go to war against the spirits.

The spirits are... perhaps "natural", but their morality isn't much different. They defend their tribe, they fight, they kill, they eat. They are at war with the humans and they are losing, but in a slightly different timeline they might have been winning, and they wouldn't treat the humans any better than the humans are treating them.

Miyazaki intends there to be _something_ special about the spirits that the humans aren't respecting, which affects the ecosystem. But, fundamentally this is a moral landscape where no one has the slack or abundance to really think about ecosystems or how to negotiate towards peace.

And into this world comes Ashitaka the traveler, who walks among different tribes. Different people welcome him briefly into their homes, and he treats them with respect and they respect him, but he is not one of them. But he crosses between enough circles of concern to see...

...there is something really sad about this world where people war over limited resources, killing each other to better themselves.

In his heart, is a little glimpse of something better.

He's smarter than Todd the Fox. Todd the fox sees a simple fight between his friend and a bear, and he saves his friend. Ashitaka sees a world of decades-long conflict and there is no simple solution, and he doesn't really have a very good plan for fixing anything. He stumbles his way into different conflicts and sees people hurting and tries locally to help the people in front of him.

But soon he's made friends with each of them. And as they are all locked in conflict, his efforts to help just shuffle the damage around.

With a bit of luck, by the skin of his teeth, his efforts lead to a world that is a bit better and more peaceful. For now.

His confused, bumbling heroism inches the world slightly towards a moral landscape where people can think longer term, consider (somewhat) the value of the ecosystem, form trading partnerships with more people, and build a better world.

It isn't much. It's still mostly an ingroup-eat-outgroup world. I think Lady Eboshi does more to improve the world than Ashitaka does – she's a clever leader, she's able to make actually good plans, she's able to establish trade relations from a position of power. She doesn't try to help everyone, she doesn't overextend, she doesn't bumble her way through conflict. She slowly builds an empire where she can take care of people.

But, in little spurts of heroism, the intersection of people like Lady Eboshi, and people like Ashitaka, inches the world towards the sort of morality that I care about.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-08-26T23:05:56.887Z · LW(p) · GW(p)

This is a great review of one of my favorite movies. Thanks for posting it!

comment by Raemon · 2021-01-23T21:55:01.258Z · LW(p) · GW(p)

Query: "Grieving" vs "Letting Go"

A blogpost in the works is something like "Grieving/Letting-Go effectively is a key coordination skill."

i.e. when negotiating with other humans, it will often (way more often than you wish) be necessary to give up things that are important to you.

Sometimes this is "the idea that we have some particular relationship that you thought we had."

Sometimes it will be "my pet project that's really important to me."

Sometimes it's "the idea that justice can be served in this particular instance."

A key skill is applying something Serenity-Prayer-Like. "May I have the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference." 

Your attachment to a given thing is often doing useful work, because, well, the thing is actually important. And sometimes that thing is worth fighting for, and sometimes you need to let it go, and sometimes you need to let a particular piece of it go, but it's important to remember why the thing matters so you can still fight for it later.

My question here is "is the better term here 'let go', or 'grieve'?"

I've been using the word grieve, largely because of this post [LW · GW]. I think the process of letting go of special things is often grief-shaped. You keep expecting things to be a particular way, and facing the fact that they can no longer (or never were) that way is painful, and it requires both time and some skills to process that.

This is somewhat a stretch of what most people mean by the word "grief", but I think it's appropriate.

That said, a key goal of mine right now is to have a good, scalable coordination framework. And using nonstandard definitions is costly for scalability. You can do it once or twice, but if you're doing it all the time you're building up an impenetrable wall of jargon. Is this Common or Expert level jargon [LW · GW]?

 "Letting go" doesn't capture everything important, or communicate the magnitude of how hard the skill is, but it is a phrase I expect more people to know, and might be close enough.

Replies from: johnswentworth, Raemon, gworley, Measure
comment by johnswentworth · 2021-01-24T01:59:09.645Z · LW(p) · GW(p)

Somewhat tangential, but I sometimes think about the sort of tradeoffs you're talking about in a different emotional/narrative lens, which might help spur other ideas for how to communicate it.

(I'm going to use an analogy from Mother of Learning, spoilers ahead)...

There's this scene in Mother of Learning where the incredibly powerful thousand-year-old lich king realizes he's in some sort of simulation, and that the protagonists are therefore presumably trying to extract information from him. Within seconds of realizing this, without any hesitation or hemming or hawing, he blows up his own soul in an attempt to destroy both himself and the protagonists (at least within the simulation). It's cold calculation: he concludes that he can't win the game, the best available move is to destroy the game and himself with it, and he just does that without hesitation.

That's what it looks like when someone is really good at "letting it go". There's a realization that he can't get everything he wants, a choice about what matters most, followed by ruthlessly throwing whatever is necessary under the bus in order to get what he values most.

The point I want to make here is that "grieving" successfully captures the difficulty aspect, in a way that "letting it go" doesn't. But a sometimes-workable substitute for grieving is ruthlessness.

Say you have to trade off between two sacred values. My Inner Villain says something like:

Humans hate trading off between sacred values, they'll hem and haw about it, make a big dramatic show out of the whole process - "grieving". But a large chunk of "grieving" is performative - not all of it, but a lot. Cultivate an identity of ruthlessness and coldheartedness, and you can instead perform a role which just makes the hard choices without the drama.

Replies from: Raemon
comment by Raemon · 2021-01-24T21:51:49.981Z · LW(p) · GW(p)

Yeah.

I think my preferred group level solution is to have some people around who do ruthlessness and some who do grieving (with accompanying broader strategies) who keep each other in check.

comment by Raemon · 2021-01-24T21:39:37.027Z · LW(p) · GW(p)

FYI there's some good discussion over on the FB version of this post, where several people came out in defense of "grieving". ("Relinquish" did come up over there too)

https://www.facebook.com/raymond.arnold.5/posts/10223038780691962

comment by Gordon Seidoh Worley (gworley) · 2021-01-23T22:55:30.548Z · LW(p) · GW(p)

I like "letting go" better because to me "grieving" is placing some frame around the kind of letting go being done. When I think of grieving I think of the process of dealing with the death of a loved one. But I let go of things all the time without grieving, or because I already did all the grieving a long time ago for a whole category of thing and so now I just let things go because I never was really holding on to them—they were just resting within my grasp.

comment by Measure · 2021-01-24T20:55:37.441Z · LW(p) · GW(p)

"Relinquish" might be a good alternative. To me "grieving" is more about emotions and is an ongoing process whereas "letting go" or "relinquishing" is about goals and is a one-time decision to stop striving for an outcome.

comment by Raemon · 2021-01-21T23:52:19.926Z · LW(p) · GW(p)

I vaguely recall there being some reasons you might prefer Ranked Choice Voting over Approval voting, but can't easily find them. Anyone remember things off the top of their head?

Replies from: Pattern
comment by Pattern · 2021-01-23T04:57:42.598Z · LW(p) · GW(p)

As a voter, I don't have to decide where to draw the approval line. The lower I draw it, the less I approve of the people I'm including. (1 dimension model.)


Something that isn't usually talked about - maybe the coalition incentives. ("We'll approve your candidate if you approve ours.") Whether that leads to compromise which is good or collusion which is bad... (Consequences of adoption.)

comment by Raemon · 2020-12-29T00:48:24.101Z · LW(p) · GW(p)

TFW when you're trying to decide if you're writing one long essay, or a sequence, and you know damn well it'll read better as a sequence but you also know damn well that everyone will really only concentrate all their discussion on one post and it'll get more attention if you make one overly long post than splitting it up nicely.

Replies from: Dagon, mikkel-wilson
comment by Dagon · 2020-12-29T01:51:46.629Z · LW(p) · GW(p)

I wonder if there are potential LessWrong commenting features that would help with this.  Like being able to scope a comment to a section of a post, or a post, or a set of posts, or a sequence, or a set of related sequences.

comment by MikkW (mikkel-wilson) · 2020-12-29T01:17:28.934Z · LW(p) · GW(p)

Maybe post it first as a single post, then break it up into a sequence later?

comment by Raemon · 2020-08-02T20:13:09.316Z · LW(p) · GW(p)

An interesting thing about Supernatural Fitness (a VR app kinda like Beat Saber) is that they are leaning hard into being a fitness app rather than a game. You don't currently get to pick songs, you pick workouts, which come with pep talks and stretching and warmups.

This might make you go "ugh, I just wanna play a song" and go play Beat Saber instead. But, Supernatural Fitness is _way_ prettier and has some conceptual advances over Beat Saber.

And... I mostly endorse this and think it was the right call. I am sympathetic to "if you give people the ability to choose whatever, they mostly choose to be couch potatoes, or goodhart on simple metrics", and if you want people to do anything complicated and interesting you need to design your app with weird constraints in mind.

(Example: LessWrong deliberately doesn't show users the view-count of their posts. We already have the entire internet as the control group for what happens if you give people view-counts – they optimize for views, and you get clickbait. Is this patronizing? Yeah. Am I 100% confident it's the right call? No. But, I do think if you want to build a strong intellectual culture, it matters what kinds of Internet Points you give [or don't give] people, and this is at least a judgment call you need to be capable of making)

But... I still do think it's worth looking at third options. Sometimes, I might really want to just jam to some tunes, and I want to pick the specific songs.

In the case of Supernatural Fitness, I think it is quite important that your opening experience puts you in the mindset of coaches and workouts, and that songs are clustered together in groups of 15+ minutes (so you actually get a cardio workout), and that they spend upfront effort teaching you the proper form for squats and encouraging you to maintain that form rather than "minimizing effort" (which I think Beat Saber ends up teaching you, and which, if you're coming from Beat Saber, you might have already internalized habits around).

At first I thought "maybe they should make you learn proper form first, but eventually give you the ability to choose individual songs." Then I simulated myself doing that, and thought "well, I would probably end up just doing fewer workouts."

My current guess for my coherent-extrapolated-preference for the game is for something like "Individual song plays are more like rewards for actually completing a workout" (i.e. if you make it all the way through you get credits that can be spent on playing individual songs). Or, alternately, maybe players can assemble workouts out of at least 3 songs.

It's probably complicated somewhat by the app licensing popular music, and the licenses might be temporary.

Replies from: Pattern
comment by Pattern · 2020-08-05T19:08:50.527Z · LW(p) · GW(p)

(Example: LessWrong deliberately doesn't show users the view-count of their posts. We already have the entire internet as the control group for what happens if you give people view-counts – they optimize for views, and you get clickbait. Is this patronizing? Yeah. Am I 100% confident it's the right call? No. But, I do think if you want to build a strong intellectual culture, it matters what kinds of Internet Points you give [or don't give] people, and this is at least a judgment call you need to be capable of making)

One could argue that view counts aren't view counts - they're click counts.

And people still have a metric they can optimize: the number of comments the post received.

comment by Raemon · 2020-06-14T17:45:15.807Z · LW(p) · GW(p)

I've noticed in the past month that I'm really bottlenecked on my lack-of-calibration-training. Over the past couple years I've gotten into the habit of trying to operationalize predictions, but I haven't actually tracked them in any comprehensive way.

This is supposed to be among the more trainable rationality skills, and nowadays it suddenly feels really essential. How long are lockdowns going to last? What's going to happen with coronavirus cases? What's going to happen with various political things going on that might affect me? Will the protests turn out to cause a covid wave? What am I even going to _want_ in a month? In six months? In a year?

I don't have a justified sense of how likely things are, and how confident I am in that.

I looked around for good tools for making predictions, and found that PredictionBook.com is just pretty great. I've created a bookmark for the "new prediction" page so that it's easy to get to. Some neat things that make it nice are:

1. You get to enter the "prediction resolution date" however you want ("in a week", "6/31/2020", "10 days from now" all work), so whatever my intuitive sense of when to resolve things is easy to enter.

2. It stores the last few prediction resolution dates, so if you want to keep using "N days from now" for similar reference class predictions, you can do so.

3. It emails you when the prediction should resolve, so you don't forget.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2020-06-14T18:42:16.873Z · LW(p) · GW(p)

Buy Wits & Wagers, use their cards for bite-sized numeric predictions you can state ranges for and check immediately. Best source of deliberate practice I know of.

Replies from: Raemon
comment by Raemon · 2020-06-14T18:53:34.426Z · LW(p) · GW(p)

I've played Wits and Wagers for this reason. But the issue is it doesn't actually map that well to the skills I actually want (which is "calibrate my estimate of how likely an event is to happen", where the type of event is filtered for 'the sorts of events I actually care about.')

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2020-06-15T15:11:58.245Z · LW(p) · GW(p)

Interesting. I believe some combination of

  • Wits & Wagers (not playing, practicing)
  • Poker
  • Software development
  • Ambient practice

has made me pretty decent at calibration. By calibration I mean translating my feeling of uncertainty into a quantitative guess at uncertainty where that guess tracks with reality. I do not mean estimating accurately, I mean these two things:

1. Thinking about a sort of event I actually care about, coming up with a point estimate, then guessing the range around that point estimate such that the true answer is in that range roughly 50% of the time or roughly 90% of the time depending on what I'm going for.

2. Thinking about a sort of event I actually care about, coming up with a lower bound on a point estimate, coming up with an upper bound on a point estimate, shifting those bounds until my feelings of uncertainty that they're actually lower/upper bounds are approximately equal for both of them, then taking the appropriate mean as my point estimate and having that point estimate be basically as good as I would have come up with in a more analytical way and also way faster to come up with.

comment by Raemon · 2020-02-18T00:14:47.676Z · LW(p) · GW(p)

Jim introduced me to this song on Beat Saber, and noted: "This is a song about being really good at moral mazes".

I asked "the sort of 'really good at moral mazes' where you escape, or the sort where you quickly find your way the center?" He said "the bad one."

And then I gave it a listen, and geez, yeah that's basically what the song is about. 

I like that this Beat Saber map includes something-like-a-literal-maze in the middle where the walls are closing around you. (It's a custom map, not the one that comes from the official DLC)

https://www.youtube.com/watch?v=GVWPZxK-3ls

comment by Raemon · 2019-08-25T20:47:46.867Z · LW(p) · GW(p)

Thinking through problems re: Attention Management

Epistemic status: thinking in realtime. don't promise that this all makes sense

Default worlds

  • Clickbaitiness/drama/and/or/wrongness as attention magnet
  • Or: Slow, ponderous laying out of background intuitions that take years to write and percolate
  • Can we do better?

What questions would be helpful here?

  • What would better look like?
  • What are the obstacles?
  • What are the constraints?
  • What are variables and parameters that can change?
  • What causes people to not update straightforwardly?
    • Information being inherently complex / lots of dependences
    • Socially motivated reasoning (i.e. it'd be disadvantageous for you to believe a thing)
    • Information not fitting into your existing model (might classify this as "the sort of motivated reasoning that's more 'necessary', i.e. it may be useful to streamline your understanding of the world with models")
    • are there other major reasons here?

Note: Robin Hanson's "Against News" seems kinda relevant here.

What Are We Measuring and Why?

Or, "what exactly is the problem here."

I notice I have a hard time answering this question (or at least, my cached answers don't actually deal with it properly). Thanks Ruby (and How to Measure Anything [LW · GW]) for reminding me of this approach.

Here is a bad answer to this question, will try writing a good one later maybe.

Basically, it seems important for "the right people" (i.e. people who are actually stakeholders in a given decisionmaking structure, or who'd put actual work into open-ended problems if they were convinced it were important) to be able to be alerted to concepts or arguments that are relevant to them.

The idealized measurement is something like "if people were able to look back with perfect information from an idealized self 100 years in the future, they'd think that they were properly alerted about things that were in retrospect important, and not overly alerted/attention-hogged on things that were relatively less important."

Another aspect of the idealized measure would be something like "100 years from now, with perfect information and full integration of that information, you'd think that ideas that should have been important to other people were properly communicated to them, and they took appropriate actions."

With perhaps a third aspect of "in the moment, you also think the various feedback cycles that propagated information and gave bits of info on how you (and others) are responding to that information, were accurate/helpful."

Some desiderata (i.e. what does "better" look like?)

Note: I wrote this before writing the previous section

  • You can tell at a glance (or more easily) whose effortposts* are worth reading
    • note: using effortpost as shorthand for "put a lot of goodfaith effort into communicating their idea, via writing or otherwise"
  • The people who are coordinating on a given thing who actually matter reliably read, talk with or listen, if-and-only-if it's a good use of their time (this last clause obviously is doing a hell of a lot of work here. Time, attention, and information are all precious)
  • People in positions of influence or power have the ability to update on information that is true/useful (insofar as this is useful)
  • People who write effortposts get better feedback, and/or get better at noticing or being calibrated on feedback (in particular, if Bob reads Alice's effortpost and goes "hmm, maybe", and the result is slowly, subtly shifting his mind over years (perhaps not exactly the way Alice intended), Alice gets more/better feedback that this is going on)

comment by Raemon · 2019-08-08T20:06:51.546Z · LW(p) · GW(p)

Noticing surprise to help you notice confusion.

Epistemic Status: I was about to write a post on this, and then realized I hadn't actually tried to use this technique that much since coming up with it a year ago. I think this is mostly because I didn't try rather than because the technique was demonstrably not good (although obviously it wasn't so useful that practicing the skill was self-reinforcing). For now I'm writing a shortform post and giving it a more dedicated effort for the next month.

Eliezer talks about "Noticing Confusion" a lot, which took a long time for me to really grok. Confusion is specifically hard to notice, and that's kinda what makes it confusion. My take from Sunset at Noon [LW · GW]:

The phenomenon here is when bits of evidence don't add up, and you get a subtle sense of wrongness. But then instead of heeding that wrongness and making sense of it, you round the evidence to zero, or you round the situation to the nearest plausible cliché.

This manifests (in me, anyway) as a slight feeling of "hrmm. ? huh", that then usually passes before I do anything with it. Later on, I might retroactively realize I was confused (once it becomes extremely apparent why the evidence wasn't adding up. i.e. if I notice my friends are acting a bit weird, but I shrug it off. But later it turns out they were planning a surprise birthday for me. Afterwards I'll remember the sense of weirdness and think "man, I should have been able to figure out something was up.")

Noticing confusion in the moment is hard.

But, surprise is more noticeable.

Like, the magnitude of how strong a signal confusion is, on a 1-10 scale, is... less than 1. But the magnitude of the signal of surprise is like a 2 or 3. (Compared to stubbing my toe, which is like an 8). I still need to practice noticing surprise, but it's at least achievable.

Surprise is different from confusion – surprise seems like "something happened that I wasn't predicting." And confusion is more like "something is happening that's outside my model, but my brain is wired to fit thing into my model if it can, so I don't necessarily notice."

Sometimes failed-predictions fall within your model. You might be surprised if a given person is struck by lightning, but not necessarily confused, because "get struck by lightning" is a clear explanation.

Whereas if a thousand people in a town were all struck by lightning in one day, you might be confused, because that's an extreme enough outlier that it might suggest there is some phenomenon you don't understand at work.

Failed predictions overlap enough with "things outside my model" that it seems like, if you can get good at noticing surprise, you can then check for "Is this confusing? Does it fit neatly into my existing models?" and then go on to generate hypotheses and prune them.

(Sunset at noon [LW · GW] is a lengthy essay that, among other things, goes into more detail about how this all fits into my picture of rationality)

Replies from: AprilSR
comment by AprilSR · 2019-08-10T01:54:45.419Z · LW(p) · GW(p)

I think 1000 people being struck by lightning would register as a gigantic surprise, not a less-than-1-signal-confusion.

Replies from: Raemon
comment by Raemon · 2019-08-11T03:03:33.260Z · LW(p) · GW(p)

I don't know where the threshold is, but I'd think there is some number of simultaneous lightning strikes where the likelihood of them happening at once is outweighed by there being some kind of phenomenon that wasn't in my model. (i.e. looks like about 900,000 lightning strikes happen yearly in Louisiana, so if a million happened in one day in one town that seems outside of model. Dunno if 1000 in one town in one day is something that's been recorded)

Replies from: Pattern
comment by Pattern · 2019-08-11T03:31:45.514Z · LW(p) · GW(p)

Create a machine that creates lightning strikes.

comment by Raemon · 2019-04-03T19:39:40.228Z · LW(p) · GW(p)

Posts I'm vaguely planning to write someday:

  • Competition in the EA landscape:
    • there should generally be more of it
    • but, network effects make particular classes of organization really want to be a monopoly, which makes it hard to figure out how to "be a good meta-team player" with regards to competition.
  • What's up with CFAR, and what ideas from it still need to get upstream-merged into the LessWrong-o-sphere
  • Open Problems With Secrecy
comment by Raemon · 2019-02-06T00:09:00.233Z · LW(p) · GW(p)

Something I've recently updated heavily on is "Discord/Slack style 'reactions' are super important."

Much moreso than Facebook style reacts, actually.

Discord/Slack style reacts allow you to pack a lot of information into a short space. When coordinating with people, "I agree/I disagree/I am 'meh'" are quite important things to be able to convey quickly. A full comment or email saying that takes up way too much brain space.

I'm less confident about whether this is good for LW. A lot of the current LW moderation direction is downstream of a belief: "it's harder to have good epistemics at the same time you're doing social coordination, especially for contentious issues." We want to make sure we're doing a good job at being a place for ideas to get discussed, and we've consciously traded for that against LW being a place you can socially coordinate.

I think discord-style reacts might still be relevant for seeing at a glance how people think about ideas. There are at least some classes of reacts like "this seems confused" or "this was especially clear" that *if* you were able to segregate them from social/politics, they'd be quite valuable. But I'm not sure if you can.

Replies from: romeostevensit
comment by romeostevensit · 2019-02-06T00:38:56.184Z · LW(p) · GW(p)

I agree that slack is a better interaction modality for multiple people trying to make progress on problems. The main drawback is chaotic channel ontologies leading to too many buckets to check for users (though many obv. find this aspect addictive as well).

Replies from: Raemon
comment by Raemon · 2019-02-06T00:48:48.239Z · LW(p) · GW(p)

How much of this has to do with "slack sort of deliberately gives you a bunch of lego blocks and lets you build whatever you want out of them, so of course people build differently shaped things out of them?".

I could imagine a middle ground where there's a bit more streamlining of possible interaction ontologies.

(If you meant channels specifically, it's also worth noting that right now I'm thinking about "reactions" specifically. Channels I think are particularly bad, wherein people try to create conversations with names that made sense at the time, but then turned into infinite buckets. Reacts seem to have much less confusion, and when they do it's because a given org/server needed to establish a convention, and when you visit another org they're using a different convention)

Replies from: romeostevensit
comment by romeostevensit · 2019-02-07T04:38:04.543Z · LW(p) · GW(p)

Would likely be solved if slack had a robust three-level ontology rather than a two-level one. Threaded conversations don't work very well.

comment by Raemon · 2019-01-21T05:04:19.790Z · LW(p) · GW(p)

Beeminder, except instead of paying money if you fail, you pay the money when you create your account, and if you fail at your thingy, you can never use the app again.

Replies from: Elo
comment by Elo · 2019-01-21T05:33:09.771Z · LW(p) · GW(p)

That's beeminder except bm comes with one freebie

Replies from: Raemon
comment by Raemon · 2019-01-21T05:48:38.627Z · LW(p) · GW(p)

I mean, at the very least, it's "Beeminder, except with a different pricing curve, and also every time you fail at everything you need to create a new email address, and recreate all your goals."

comment by Raemon · 2018-10-31T01:42:25.614Z · LW(p) · GW(p)

I notice that I often want to reply to LW posts with a joke, sometimes because it's funny, sometimes just as a way to engage a bit with the post when I liked it but don't otherwise have anything meaningful to say.

I notice that there's some mixed things going on here.

I want LW to be a place for high quality discussion.

I think it's actually pretty bad that comprehensive, high quality posts often get less engagement [LW · GW] because there's not much to add or contradict. I think authors generally are more rewarded by comments than by upvotes.

A potential solution is the "Offtopic" comment section we've been thinking about but haven't implemented yet, where either *I* can opt into marking a comment as "offtopic" (i.e. making less of a claim of other people finding it a good use of their time), or an author can if they don't like jokes.

Replies from: Ruby, DanielFilan, Pattern
comment by Ruby · 2019-07-21T23:11:12.196Z · LW(p) · GW(p)

Me: *makes joke*

Vaniver: I want you to post it on LessWrong so I can downvote it.

comment by DanielFilan · 2018-10-31T05:21:12.526Z · LW(p) · GW(p)

I think authors generally are more rewarded by comments than by upvotes.

Curious if you've done some sort of survey on this. My own feelings are that I care less about the average comment on one of my posts than 10 karma, and I care less about that than I do about a really very good comment (which might intuitively be worth like 30 karma) (but maybe I'm not provoking the right comments?). In general, I don't have an intuitive sense that comments are all that important except for the info value when reading, and I guess the 'people care about me' value as an incentive to write. I do like the idea of the thing I wrote being woven into the way people think, but I don't feel like comments are the best way for that to happen.

comment by Pattern · 2019-07-23T02:17:05.646Z · LW(p) · GW(p)
A potential solution is the "Offtopic" comment section we've been thinking about but haven't implemented yet, where either *I* can opt into marking a comment as "offtopic" (i.e. making less of a claim of other people finding it a good use of their time), or an author can if they don't like jokes.

While this sounds like a great idea, eventually there will be on topic jokes.

comment by Raemon · 2018-05-14T16:25:57.100Z · LW(p) · GW(p)

A couple links that I wanted to refer to easily:

This post on Overcoming Bias – a real old Less Wrong progress report – is sort of a neat vantage point on what's changed and what's stayed the same.

This particular quote from the comments was helpful orientation to me:

The general rule in groups with reasonably intelligent discussion and community moderation, once a community consensus is reached on a topic, is that
– Agreement with consensus, well articulated, will be voted up strongly
– Disagreement with consensus, well articulated, will be voted up and start a lengthy discussion
– Agreement with consensus, expressed poorly, will be voted up weakly or ignored
– Disagreement with consensus, expressed poorly, will be voted down viciously
People who complain about groupthink are typically in the habit of doing #4 and then getting upset because they don't get easy validation of their opinions the way people who agree inarticulately do.
As an example on LW, consider Annoyance, who does both #2 and #4 with some regularity and gets wildly varying comment scores because of it.

I was also reading through this old post of gwern's on wikipedia, which feels like it has some relevance for LessWrong.

Replies from: Raemon
comment by Raemon · 2018-05-14T17:08:11.173Z · LW(p) · GW(p)

Apparently I'm on a gwern kick now.

His about page has a lot of interesting perspective on the Long Now, and designing Long Content that will remain valuable into the future.

Blog posts might be the answer. But I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again, and are quickly abandoned - and perhaps as Assange says, not a moment too soon. (But isn’t that sad? Isn’t it a terrible ROI for one’s time?) On the other hand, the best blogs always seem to be building something: they are rough drafts - works in progress. So I did not wish to write a blog. Then what? More than just evergreen content, what would constitute Long Content as opposed to the existing culture of Short Content? How does one live in a Long Now sort of way?
My answer is that one uses such a framework to work on projects that are too big to work on normally or too tedious. (Conscientiousness is often lacking online or in volunteer communities and many useful things go undone.) Knowing your site will survive for decades to come gives you the mental wherewithal to tackle long-term tasks like gathering information for years, and such persistence can be useful - if one holds onto every glimmer of genius for years, then even the dullest person may look a bit like a genius himself. (Even experienced professionals can only write at their peak for a few hours a day.) Half the challenge of fighting procrastination is the pain of starting - I find when I actually get into the swing of working on even dull tasks, it’s not so bad.
So this suggests a solution: never start.
Merely have perpetual drafts, which one tweaks from time to time. And the rest takes care of itself.

I think this might be a helpful approach for LW, especially as it crosses the 10-year mark – it's now old enough that some of its content is showing its age.

This ties in with some of my thoughts in Musings on Peer Review [LW · GW], and in particular the notion that it feels "wrong" to update a blogpost after people have commented on it.

I find myself liking the idea of "creating a perpetual draft" rather than a finished product.

Replies from: Elo, SaidAchmiz, Hazard
comment by Elo · 2018-05-14T21:15:19.090Z · LW(p) · GW(p)

We need to encourage edit culture. Maybe bringing old posts to the top of the post list when edited. Or an optional checkbox to do so. Maybe we need a second feed for renewed content.

I will think about the tools needed to help edit culture develop.

Replies from: Hazard
comment by Hazard · 2018-07-12T13:30:26.583Z · LW(p) · GW(p)

Has any more talk/development happened on this? I'm quite interested to know what you come up with. It's easy for me to imagine what it would be like to write in a wiki/perpetual draft style, I'm much fuzzier on what it might look like to read in that style.

Replies from: Elo
comment by Elo · 2018-07-12T20:48:45.674Z · LW(p) · GW(p)

No updates. Gwern writes perpetually in drafts.

comment by Said Achmiz (SaidAchmiz) · 2018-05-14T19:00:33.232Z · LW(p) · GW(p)

I agree entirely with this, and (again) would like to suggest that a wiki is, perhaps, the perfect tool for precisely this sort of approach.

comment by Hazard · 2018-07-12T13:29:39.054Z · LW(p) · GW(p)

Though I haven't acted on it, I do like the idea of the perpetual draft more than a bunch of discrete posts. I will try to write more in this manner.

comment by Raemon · 2018-01-09T01:31:05.312Z · LW(p) · GW(p)

Some Meta Thoughts on Ziz's Schelling Sequence, and "what kind of writing do I want to see on LW?" [note: if it were possible, I'd like to file this under "exploring my own preferences and curious about others' take" rather than "attempting to move the overton window". Such a thing is probably not actually possible though]

I have a fairly consistent reaction to Ziz posts (as well as Michael Vassar posts, and some Brent Dill posts, among others) which is "this sure is interesting but it involves a lot of effort to read and interpret."

I think this is fine. I think a lot of interesting thoughts come out of frameworks that are deliberately living in weird, pseudo-metaphorical-but-not-quite worlds. I think being able to interpret and think about that is a useful skill (in general, and in particular for stepping out of social reality).

I think I have a preference for such posts to live in the community section, rather than front-page, but in my ideal world they'd go through a process of "explore things creatively in comments or community section", followed by "think more critically about what kind of jargon and opaqueness is actually useful and which was just an artifact of low-friction thinking", followed by "turn it into something optimized for public consumption"

comment by Raemon · 2024-04-18T22:00:31.516Z · LW(p) · GW(p)

What would a "qualia-first-calibration" app would look like?

Or, maybe: "metadata-first calibration"

The thing with putting probabilities on things is that often, the probabilities are made up. And the final probability throws away a lot of information about where it actually came from.

I'm experimenting with primarily focusing on "what are all the little-metadata-flags associated with this prediction?". I think some of this is about "feelings you have" and some of it is about "what do you actually know about this topic?"

The sort of app I'm imagining would help me identify whatever indicators are most useful to me. Ideally it has a bunch of users, and types of indicators that have been useful to lots of users can be promoted as things to think about when you make predictions.

Braindump of possible prompts:

– is there a "reference class" you can compare it to?

– for each probability bucket, how do you feel? (including 'confident'/'unconfident' as well as things like 'anxious', 'sad', etc)

– what overall feelings do you have looking at the question?

– what felt senses do you experience as you mull over the question ("my back tingles", "I feel the Color Red")

...

My first thought here is to have various tags you can re-use, but, another option is to just do a totally unstructured text-dump and somehow do factor analysis on word patterns later?
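
To make the unstructured-text-dump option concrete: here's a minimal sketch, assuming scikit-learn and some made-up example notes, of pulling factors out of free-text prediction metadata. Not a real app design, just an illustration that the dumps could be mined later.

```python
# Toy sketch: factor analysis over free-text "prediction metadata" dumps.
# Assumes scikit-learn is installed; the notes below are invented examples.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import FactorAnalysis

notes = [
    "anxious, no reference class, mostly gut feeling",
    "confident, clear reference class, did some research",
    "sad about the question, copying someone else's prediction",
    "gut feeling, back tingles, no research",
    "did research, confident, have seen things like this before",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(notes).toarray()  # FactorAnalysis needs dense input

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)

# Show which words load most heavily on each factor.
terms = vectorizer.get_feature_names_out()
for i, loadings in enumerate(fa.components_):
    top = [terms[j] for j in np.argsort(loadings)[::-1][:4]]
    print(f"factor {i}: {top}")
```

In practice you'd want a lot more entries before the factors meant anything; the point is just that you don't have to commit to a tag ontology up front.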

Replies from: OrthernLight
comment by OrthernLight · 2024-04-27T15:40:02.329Z · LW(p) · GW(p)

"what are all the little-metadata-flags associated with this prediction?"

Some metadata flags I associate with predictions:

  • what kinds of evidence went into this prediction? ('did some research', 'have seen things like this before', 'mostly trusting/copying someone else's prediction')
    • if I'm taking other people's predictions into account, there's a metadata-flags for 'what would my prediction be if I didn't consider other people's predictions?'
  • is this a domain in which I'm well calibrated?
  • is my prediction likely to change a lot, or have I already seen most of the evidence that I expect to for a while?
  • how important is this?
comment by Raemon · 2020-06-24T22:06:17.814Z · LW(p) · GW(p)

Anyone know how predictions of less than 50% are supposed to be handled by PredictionBook? I predicted a thing would happen with 30% confidence. It happened. Am I supposed to judge the prediction right or wrong?

It shows me a graph of confidence/accuracy that starts from 50%, and I'm wondering if I'm supposed to be phrasing prediction in such a way that I always list >50% confidence (i.e. I should have predicted that X wouldn't happen, with 70% confidence, rather than that it would, with 30%)

Replies from: niplav, Raemon
comment by niplav · 2020-06-24T22:51:07.627Z · LW(p) · GW(p)

Judge it as "right". PB automatically converts your 10% predictions into 90%-not predictions for the calibration graph, but under the hood everything stays with the probabilities you provided. Hope this cleared things up.

comment by Raemon · 2020-07-11T17:02:23.793Z · LW(p) · GW(p)

Another predictionBook question: it gives me a graph showing my 50/60/70/80/90% confidence accuracy, but I'm not sure if/how it interfaces with my 85%, 63%, etc, claims. Do those get rounded, or not show up at all?

comment by Raemon · 2020-01-21T21:44:47.740Z · LW(p) · GW(p)

I'm not sure which of these posts is a subset of the other:

  • The Backbone Bottleneck
  • The Leadership Bottleneck
Replies from: bendini, mr-hire, mr-hire
comment by Bendini (bendini) · 2020-02-19T09:49:56.081Z · LW(p) · GW(p)

Thinking about my own experiences of seeing these bottlenecks in action, I don't think either is a subset of the other. It seems more like there's a ton of situations where the only way forward is for a few people to grow a spine and have the tough conversations, and an adjacent set of problems that need centralised competent leadership to solve, but it's in short supply for the usual economic reasons plus things like "rationalists won't defer authority to anyone they don't personally worship unless bribed with a salary".

Replies from: Raemon
comment by Raemon · 2020-02-19T18:23:48.824Z · LW(p) · GW(p)

I think leadership also depends on backbone tho. 

Replies from: bendini
comment by Bendini (bendini) · 2020-02-25T03:50:03.359Z · LW(p) · GW(p)

I agree, but I also think there's a bit of a chicken and egg problem there too. Leaders fear that enforcing order will result in a mutiny, but if that fear is based on an accurate perception of what will happen, telling leadership to grow a pair is not going to fix it.

comment by Matt Goldenberg (mr-hire) · 2020-02-19T19:41:31.745Z · LW(p) · GW(p)

Causality and dependency are two things that people want to be neat and unidirectional but they're not. There are feedback loops and mutual dependencies.

One part of being a good teacher is figuring out how to take a mutual dependency and explain just enough of one part in a "fake way" such that people can get it enough to understand the second part, which in turn allows them to "truly" get the first part.

Replies from: Raemon
comment by Raemon · 2020-02-19T21:01:25.761Z · LW(p) · GW(p)

Nod. (To be slightly more clear: the OP was less me expressing bewilderment about how to solve this problem, and more me leaving some breadcrumbs about what I was currently thinking about while I mulled over what post to write next and how to construct it. Upon reflection, a more useful shortform would have been: "which of these concepts resonates more, or which are you more interested in reading about first?")

comment by Matt Goldenberg (mr-hire) · 2020-01-24T17:21:48.832Z · LW(p) · GW(p)

Sometimes when I can't explain a concept except in relation to another concept, I use that as a sign that I need to approach one of the concepts from a completely separate/unique angle to get a handle on it.

comment by Raemon · 2019-04-04T19:15:54.628Z · LW(p) · GW(p)

Somewhat delighted to see that Google Scholar now includes direct links to PDFs when it can find them, instead of making you figure out how to use a given journal website.

Replies from: jason-gross, pktechgirl
comment by Jason Gross (jason-gross) · 2019-07-22T05:20:41.464Z · LW(p) · GW(p)

This has been true for years. At least six, I think? I think I started using Google Scholar around when I started my PhD, and I do not recall a time when it did not link to PDFs.

comment by Elizabeth (pktechgirl) · 2020-08-03T23:29:35.986Z · LW(p) · GW(p)

There's a plugin that will look for PDFs for you that match the page you're on or the text you have highlighted.

comment by Raemon · 2024-09-29T21:29:20.893Z · LW(p) · GW(p)

Some people have reported a bug where you post a top-level comment and the comment box doesn't clear (it still displays the text of your comment). It doesn't happen super reliably. I'm curious if anyone else has seen this recently.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-03T16:22:09.242Z · LW(p) · GW(p)

Oh yeah, that happens to me occasionally.

comment by Raemon · 2021-04-14T19:39:21.156Z · LW(p) · GW(p)

At any given time, is there anything especially wrong with using citation count (recursively weighted by the citation counts of the papers doing the citing) as a rough proxy for "which are the most important papers, and/or the best authors?"

My sense is the thing that's bad about this is that it creates an easy goodhart metric. I can imagine worlds where it's already so thoroughly goodharted that it doesn't signal anything anymore. If that's the case, can you get around that by grounding it out in some number of trusted authors, and purging obviously fraudulent authors from the system?

I'm asking from the lens of "I'd like to have some kind of barometer for which scientific papers (or, also, LW posts) are the best, and this just... actually seems pretty good, at least if you were only using it as a one-time check."
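(The "weighted by the citations of the citing papers" idea is basically eigenvector centrality / PageRank on the citation graph. A rough sketch of that plus the "ground it out in trusted authors" idea, with made-up paper names, damping factor, and trust weights — not any system actually in use:)

```python
# Rough sketch: PageRank-style scoring of a citation graph, with extra prior
# weight on a few trusted papers. All names and constants are made up.
def citation_rank(cites, trusted, damping=0.85, iters=50):
    """cites[p] = list of papers that p cites; returns a score per paper."""
    papers = list(cites)
    prior = {p: (2.0 if p in trusted else 1.0) for p in papers}
    total = sum(prior.values())
    teleport = {p: prior[p] / total for p in papers}  # trusted papers get a larger prior
    score = dict(teleport)
    for _ in range(iters):
        new = {p: (1 - damping) * teleport[p] for p in papers}
        for p in papers:
            refs = [r for r in cites[p] if r in score]
            if not refs:
                continue
            share = damping * score[p] / len(refs)
            for r in refs:               # each paper passes weight to the papers it cites
                new[r] += share
        score = new
    return score

graph = {"A": ["B"], "B": ["C"], "C": [], "D": ["C"]}
print(citation_rank(graph, trusted={"A"}))  # C scores highest: it's cited by well-scored papers
```

The `trusted` set only enters through the prior/teleport vector, which is the simplest way I know of to "ground out" the recursion in a few hand-picked nodes.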

Replies from: jimrandomh, zac-hatfield-dodds
comment by jimrandomh · 2021-04-15T14:24:58.575Z · LW(p) · GW(p)

It depends what you mean by "rough proxy", and whether you're applying it to scientific papers (where Goodhart has been out in force for decades, so a one-time check is off the table) or to LessWrong posts (where citation-count has never been something people cared about). Most things have zero citations, and this is indeed a negative quality signal. But after you get to stuff that's cited at all, citation count is mainly determined by the type and SEO of a paper, rather than its quality. Eg this paper. Citations also don't distinguish building upon something from criticizing it. That's much worse in the Goodhart arena than in a one-time check, but still pretty bad in the one-time case.

Replies from: Raemon
comment by Raemon · 2021-04-15T19:52:38.846Z · LW(p) · GW(p)

Nod. "positive vs disagreement citation" is an important angle I wasn't thinking about.

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2021-04-15T03:54:32.302Z · LW(p) · GW(p)

Important for what? Best for what?

In a given (sub)field, the highest-cited papers tend to be those which introduced or substantially improved on a key idea/result/concept; so they're important in that sense. If you're looking for the best introduction though that will often be a textbook, and there might be important caveats or limitations in a later and less-cited paper.

I've also had a problem where a few highly cited papers propose $approach, many papers apply or purport to extend it, and then eventually someone does a well-powered study checking whether $approach actually works. Either way that's an important paper, but they tend to be under-cited, either because the results are "obvious" (and usually a small effect) or because the field of $approach studies shrinks considerably.

It's an extremely goodhartable metric but perhaps the best we have for papers; for authors I tend to ask "does this person have good taste in problems (important+tractable), and are their methods appropriate to the task?".

comment by Raemon · 2020-02-27T22:06:44.132Z · LW(p) · GW(p)

An issue in online discourse is "tendency of threads to branch more than they come back together."

Sometimes branching threads are fine, in particular when you're just exploring ideas for fun or natural curiosity. But during important disagreements, I notice a tendency in myself to want to try to address every individual point, when actually I think the thing to do is to figure out which points are most important and focus on those. (I think this is important in part because time is precious.)

I'm wondering if there are UI updates to forum software that could try to address this systematically. Maybe include a react labeled 'too many threads', or 'in the weeds'.

Replies from: Dagon
comment by Dagon · 2020-02-28T00:40:58.719Z · LW(p) · GW(p)

I don't know of any good way to signal or display that a comment has multiple parents, and thus "merges" two threads. There are a number of boards and discussion systems where a moderator closes a thread (either making it read-only or just deleting unwanted further follow-ups) to keep noise down.

Note that this is a problem in verbal debates as well - there are always sub-points that spawn further sub-points, and even if you notice a merge point, it's hard to remember that you did.

comment by Raemon · 2019-11-11T23:35:12.399Z · LW(p) · GW(p)

Meta/UI:

I currently believe it was a mistake to add the "unread green left-border" to posts and comments in the Recent Discussion section – it mostly makes me click a bunch of things just to clear the green, even when I didn't really want to mark them as read. Curious if anyone has opinions about that.

Replies from: Ruby, Benito, RobbBB, jp
comment by Ruby · 2019-11-12T00:09:41.303Z · LW(p) · GW(p)

I really like the green-unread on post pages. On Recent Discussion I have so much of it that I think I don't really pay attention to it.

comment by Ben Pace (Benito) · 2019-11-12T04:50:48.016Z · LW(p) · GW(p)

I find it very useful for telling whether comments are new. I’ve not been using it as an inbox (no clicking in order to make green go away).

comment by Rob Bensinger (RobbBB) · 2019-11-12T04:32:17.161Z · LW(p) · GW(p)

I haven't noticed a problem with this in my case. Might just not have noticed having this issue.

comment by jp · 2019-11-15T19:17:33.505Z · LW(p) · GW(p)

I intuitively think it's good, but have in fact noticed myself clicking to dismiss it despite not having read it or thought about whether I'd like to read it.

comment by Raemon · 2018-07-01T22:50:42.735Z · LW(p) · GW(p)

Lately I've come to believe in the 3% rate of return rule.

Sometimes, you can self-improve a lot by using some simple hacks, or learning a new thing you didn't know before. You should be on the lookout for such hacks.

But, once you've consumed all the low-hanging fruit, most of what there is to learn involves... just... putting in the work day-in-and-day-out. And you improve so slowly you barely notice. And only when you periodically look back do you realize how far you've come.

It's good to be aware of this, to set expectations.

I've noticed this re: habits, gratitude and exercise, after looking back on how I was 4 years ago.

But I hadn't noticed until recently that I'd made similar improvements at *improvising music on the spot*.

A few years ago I tried things in the genre of rap-battling, or making up songs on the fly, and it was quite hard and I felt bad when I did.

But a) recently I've noticed myself having an easier time doing this (to the extent that others are at least somewhat impressed)

And b), I encountered masters of the art. A friend-of-friend shared a podcast where they improvise *an entire musical* in realtime.

https://www.earwolf.com/show/off-book/

And it's *good*. They have the skill to make up rhymes on the fly *and* make up stories on the fly *and* have evolving characters undergoing emotional arcs on the fly, all at once.

And it's all quite silly, but it still, like, fits together.

After listening to it, my housemates immediately gave it a try... and it actually basically _worked_. It was obviously way less good than the podcast, but it was good enough that we felt good about it, and I could see the gears of how to get better at it.

I think most of my own progress here came from practicing making NON-improvised songs. The skill still transferred in terms of finding good rhymes and structure.

If you do _deliberate_ practice, I'm sure you can progress much faster.

comment by Raemon · 2018-05-13T22:25:12.966Z · LW(p) · GW(p)

In Varieties of Argument [LW · GW], Scott Alexander notes:

Sometimes meta-debate can be good, productive, or necessary.... If you want to maintain discussion norms, sometimes you do have to have discussions about who’s violating them. I even think it can sometimes be helpful to argue about which side is the underdog.
But it’s not the debate, and also it’s much more fun than the debate. It’s an inherently social question, the sort of who’s-high-status and who’s-defecting-against-group-norms questions that we like a little too much. If people have to choose between this and some sort of boring scientific question about when fetuses gain brain function, they’ll choose this every time; given the chance, meta-debate will crowd out everything else.

This is a major thing we're trying to address with LW2. But I notice a bit of a sense-of-doom about it, and just had some thoughts.

I was reading the Effective Altruism forum today, and saw a series of posts on the cost effectiveness of vaccines. It looked like decent original research, and in many senses it seems more important than most of the other stuff getting discussed (on either the EA forum or on LW). Outputting research like that seems like one of the core things EA should actually be trying to do. (More specifically – translating that sort of knowledge into impact.)

But, it's way less fun to talk about – you need to actually be in a position to either offer worthwhile critiques of the information there, or to make use of the information.

(Did I read it myself? No. Lol)

And you can maybe try to fix this by making that sort of research high status – putting it in the curated section, giving out bonus karma, maybe even cash prizes. But I think it'll continue to *feel* less rewarding than something that results in actual comments.

My current thought is that the thing that's missing here is a part of the pipeline that clearly connects research to people who are actually going to do something with it. I'm not sure what to do with that.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-05-14T18:53:27.066Z · LW(p) · GW(p)

And you can maybe try to fix this by making that sort of research high status – putting it in the curated section, giving out bonus karma, maybe even cash prizes. But I think it’ll continue to feel less rewarding than something that results in actual comments.

Figure out what sorts of user behavior you wish to incentivize (reading posts people wouldn’t otherwise read? commenting usefully on those posts? making useful posts?), what sorts you wish to limit (posting, in general? snarky comments?), and apply EP/GP.

comment by Raemon · 2019-07-10T22:41:16.837Z · LW(p) · GW(p)