Comment by raemon on Active Curiosity vs Open Curiosity · 2019-03-21T22:48:43.087Z · score: 2 (1 votes) · LW · GW

fwiw, I'd kind of like to see this book epistemically spot-checked before building too much off it (I was chatting about it recently and some of the claims seemed iffy to me. It seems like it should be possible to identify easier-to-check claims and make sure he's at least getting the obvious things right)

My understanding is that the last time people got really into Left Brain / Right Brain as a dichotomy, it ended up getting kinda pop-sci-simplified (which eventually resulted in it falling out of favor), and I'd like to "do it right this time" if it's going to be a thing people are building theories and models around.

Comment by raemon on LW Open Source – Getting Started · 2019-03-21T20:05:16.872Z · score: 4 (2 votes) · LW · GW

Thanks. I've added a newline for the first one, and just deleted the entire paragraph about the branch naming because we never really stuck to it.

Comment by raemon on Rest Days vs Recovery Days · 2019-03-20T19:44:40.786Z · score: 14 (4 votes) · LW · GW

Something that I didn't get the first time I read this concept (but which Qiaochu explained differently to me), is the specific thing of "check in with my stomach." (Unreal, I'm interested in feedback on whether this is accurate)

It's not that you're just doing whatever you "feel" like, in a generic sense. You're doing something like Focusing on your stomach in particular, which several people have reported useful for getting introspective access into parts of themselves that they aren't normally in tune with.

Part of the idea, I think, is that many people by default do things that are in tune with some particular part of their body/mind, and there's a cluster of wants/needs/drives that tends to get ignored by default.

So, in response to the other thread about videogames vs books (which seem a priori like they should be similar)... well, yeah, they are. But in Unreal's case the stomach found some of them yummy and some not.

(I'd guess the meta-level principle isn't to Listen To Your Stomach in particular, just to listen to yourself generally, see which parts of you aren't getting attended to, and make sure you're free to act on them. But there may be systematic reasons to hypothesize that the stomach is a more useful-than-average way to do that for most people)

Comment by raemon on What's your favorite LessWrong post? · 2019-03-20T02:06:46.462Z · score: 4 (2 votes) · LW · GW

Beyond the Reach of God

Comment by raemon on Privacy · 2019-03-19T20:17:11.107Z · score: 2 (1 votes) · LW · GW

We've thought about things in that space, although any of the ideas would be a fairly major change, and we haven't come up with anything we feel good enough about to commit to.

(We have done some subtle things to avoid making downvotes feel worse than they need to, such as not including the explicit number of downvotes)

Comment by raemon on Privacy · 2019-03-19T07:37:49.995Z · score: 3 (2 votes) · LW · GW

Hmm. Well I am now somewhat confused what you mean. Say more? (My intention was for ‘at least one of us is confused’ to be casting a fairly broad net that included ‘confused about the world’, or ‘confused about what each other meant by our words’, or ‘confused... on some other level that I couldn’t predict easily.’)

Comment by raemon on Privacy · 2019-03-19T04:35:13.652Z · score: 2 (1 votes) · LW · GW
In conversations like this, both sides are confused,

Nod. I did actually consider a more accurate version of the comment that said something like "at least one of us is at least somewhat confused about something", but by the time we got to this comment I was just trying to disengage while saying the things that seemed most important to wrap up with.

Comment by raemon on Privacy · 2019-03-18T20:32:31.534Z · score: 2 (1 votes) · LW · GW

I'd say thoughts aren't incentivized enough on the margin, but:

1. A major bottleneck is how fine-tuned and useful the incentives are. (i.e. I'd want to make LW karma more closely track "reward good epistemic processes" before I made the signal stronger. I think it currently tracks that well enough that I prefer it over no-karma).

2. It's important that people can still have private thoughts separate from the LW karma system. LW is where you come when you have thoughts that seem good enough to either contribute to the commons, or to get feedback on so you can improve your thought process... after having had time to mull things over privately without worrying about what anyone will think of you.

(But, I also think, on the margin, people should be much less scared about sharing their private thoughts than they currently are. Many people seem to be scared about sharing unfinished thoughts at all, and my actual model of what is "threatening" says that there's a much narrower domain where you need to be worried in the current environment)

3. One conscious decision we made was not to display the "number of downvotes" on a post (we tried it out privately for admins for a while). Instead we just display the "total number of votes". Explicitly knowing how much one's post got downvoted felt much worse than having a vague sense of how good it was overall plus a rough sense of how many people *may* have downvoted it. The explicit count created a stronger punishment signal than seemed actually appropriate.
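To make that tradeoff concrete, here is a minimal sketch of the display rule in TypeScript (the LW 2.0 codebase is JavaScript-family). The field names (baseScore, upvoteCount, downvoteCount) are invented for illustration; this is not the actual LessWrong implementation.

```typescript
// Hypothetical sketch, not the real LessWrong code; field names are invented.
interface VoteTally {
  baseScore: number;      // net karma after upvotes and downvotes
  upvoteCount: number;
  downvoteCount: number;  // stored, but deliberately never surfaced to authors
}

// Renders the "score (N votes)" string shown next to posts and comments:
// readers get a rough sense of reception without an explicit punishment count.
function formatKarmaDisplay(tally: VoteTally): string {
  const totalVotes = tally.upvoteCount + tally.downvoteCount;
  return `${tally.baseScore} (${totalVotes} votes)`;
}

// e.g. 3 upvotes and 1 downvote netting 14 karma renders as "14 (4 votes)",
// indistinguishable at a glance from 4 upvotes and no downvotes.
console.log(formatKarmaDisplay({ baseScore: 14, upvoteCount: 3, downvoteCount: 1 }));
```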

Comment by raemon on Privacy · 2019-03-18T20:23:27.461Z · score: 7 (4 votes) · LW · GW

I'm not trolling. I have some probability on me being the confused one here. But given the downvote record above, it seems like the claims you're making are at least less obvious than you think they are.

If you value those claims being treated as obvious-things-to-build-off-of by the LW commentariat, you may want to expand on the details or address confusions about them at some point.

But, I do think it is generally important for people to be able to tap out of conversations whenever the conversation seems low value, and it seems reasonable for this thread to terminate.

Comment by raemon on Privacy · 2019-03-16T23:06:18.553Z · score: 12 (7 votes) · LW · GW

Yes, I disagree with that point, and I feel like you've been missing the completely obvious point that bounded agents have limited capabilities.

Choices are costly.

Choices are really costly.

Your comments don't seem to be acknowledging that, so from my perspective you seem to be describing an Impossible Utopia (capitalized because I intend to write a post that encapsulates the concept of Which Utopias Are Possible), and so it doesn't seem very relevant.

(I recall claims on LessWrong that a decision process can do no worse with more information, but I don't recall a compelling case that this is true for bounded human agents. Though I am interested in whether you have a post that responds to Zvi's claims in the Choices are Bad series, and/or a post that articulates what exactly you mean by "just", since it sounds like you're using it as a jargon term meant to encapsulate more information than I'm receiving right now).

I've periodically mentioned that my arguments are about "just worlds implemented on humans". "Just worlds implemented on non-humans or augmented humans" might be quite different, and I think it's worth talking about too.

But the topic here is legalizing blackmail in a human world. So it matters how this would be implemented by median humans, who are responsible for most actions.

Notice that in this conversation, where you and I are both smarter than average, it is not obvious to both of us what the correct answer is here, and we have spent some time arguing about it. When I imagine the average human town, or company, or community, attempting to implement a just world that includes blackmail and full transparency, I am imagining either a) lots more time being spent trying to figure out the right answer, or b) people getting wrong answers all the time.

Comment by raemon on Privacy · 2019-03-16T20:17:11.368Z · score: 6 (4 votes) · LW · GW

further update: I do think rewards are something like 10x less problematic than punishments, because humans are risk averse and fear punishment more than they desire reward. ("10x" is a stand-in for "whatever the psychological research says on how big the difference is between human response to rewards and punishments")

Comment by raemon on Privacy · 2019-03-16T00:46:22.872Z · score: 7 (2 votes) · LW · GW
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation.

This was most of what I meant to imply. I am mostly talking about rewards, not punishments.

I am claiming that rewards distort thoughts similarly to punishments, although somewhat more weakly because humans seem to respond more strongly to punishment than reward.

Comment by raemon on Privacy · 2019-03-16T00:42:39.641Z · score: 3 (2 votes) · LW · GW

(Separately, I am right now making arguments in terms that I'm fairly confident both of us value, but I also think there are reasons to want private thoughts that are more like "having a Raemon_healthy soul" than like being able to contribute usefully to the intellectual commons.)

(I noticed while writing this that the latter might be most of what a Benquo finds important for having a healthy soul, but unsure. In any case healthy souls are more complicated and I'm avoiding making claims about them for now)

Comment by raemon on Privacy · 2019-03-16T00:35:58.712Z · score: 3 (4 votes) · LW · GW

Hmm. I think I meant something more like your second interpretation than your first interpretation but I think I actually meant a third thing and am not confident we aren't still misunderstanding each other.

An intended implication (which comes with an if-then suggestion that was not an essential part of my original claim, but which I think is relevant) is:

If you value being able to think freely and have epistemologically sound thoughts, it is important to be able to think thoughts that you will neither be rewarded nor punished for... [edit: or be extremely confident that you have accounted for your biases towards reward gradients]. And the rewards are only somewhat less bad than the punishments.

A follow-up implication is that this is not possible to maintain humanity-wide if thought-privacy is removed (which legalizing blackmail would contribute somewhat towards). And that this isn't just a fact about our current equilibria, it's intrinsic to human biology.

It seems plausible (although I am quite skeptical) that a small group of humans might be able to construct an epistemically sound world that includes lack-of-intellectual-privacy, but they'd have to have correctly accounted for a wide variety of subtle errors.

[edit: all of this assumes you are running on human wetware. If you remove that as a constraint other things may be possible]

Comment by raemon on Privacy · 2019-03-16T00:14:21.776Z · score: 2 (1 votes) · LW · GW

(I feel somewhat confused by the above comment, actually. Can you taboo "bad" and try saying it in different words?)

Comment by raemon on Privacy · 2019-03-16T00:11:40.212Z · score: 4 (2 votes) · LW · GW

My intent was not that it's "bad", just, if you do not attempt to control the conclusions of others, they will predictably form conclusions of particular types, and this will have effects. (It so happens that I think most people won't like those effects, and therefore will attempt to control the conclusions of others.)

Comment by raemon on Privacy · 2019-03-15T23:36:05.768Z · score: 5 (3 votes) · LW · GW

I think scapegoating has a particular definition – blaming someone for something that they didn't do because your social environment demands someone get blamed. And that this isn't relevant to most of my concerns here. You can get unjustly punished for things that have nothing to do with scapegoating.

Comment by raemon on Privacy · 2019-03-15T23:29:53.791Z · score: 7 (5 votes) · LW · GW

So, much of my thread was responding to this sentence:

Implication: "judge" means to use information against someone.

The point being, you can have entirely positive judgment and still have it produce distortions. All that has to be true, for a fully transparent system to start producing warped incentives on what sorts of thoughts get thought, is that some forms of thought are more legibly good and get rewarded more.

i.e. say I have four options of what to think about today:

  • some random innocuous status quo thought (neither gets me rewarded nor punished)
  • some weird thought that seems kind of dumb, which most of the time is evidence of being dumb, but which occasionally pays off with something creative and neat. (I'm not sure what kind of world we're stipulating here. In some "just"-worlds, this sort of thought gets punished (because it's usually dumb). In some "just"-worlds it gets rewarded (because everyone has cooperated on some kind of long-term strategy). In some just-worlds it's hit or miss, because there's a collection of people trying different strategies with their rewards.)
  • some heretical thought that seems actively dangerous, and only occasionally produces novel usefulness if I turn out to be real good at being contrarian.
  • a thought that is clearly, legibly good, almost certainly net positive, either by following well-worn paths, or by being "creatively out of the box" in a set of ways that are known to have pretty good returns.

Even in one of the possible just-worlds, it seems like you're going to incentivize the last one much more than the 2nd or 3rd.

This isn't that different from the status quo – it's the familiar hard problem that VC funders have an easier time investing in people doing something that seems obviously good than in someone with a genuinely weird, new idea. But I think this would crank that problem up to 11, even if we stipulate a just-world.

...

Most importantly: the key implication I believe in is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow. (And this implication, in this sentence right here right now, is something that could get me punished in many circles, even by people trying hard to do the right thing. For reasons related to "Overconfident talking down, humble or hostile talking up".)

    Comment by raemon on Privacy · 2019-03-15T23:12:58.315Z · score: 9 (4 votes) · LW · GW
    If privacy in general is reduced, then they get to see others' thoughts too.

    This response seems mostly orthogonal to what I was worried about. It is quite plausible that most hiring decisions would become better in a fully transparent (and also just?) world. But a fully-and-justly-transparent world can still mean that fewer people think original or interesting thoughts, because doing so is too risky.

    And I might think this is bad, not only because fewer objectively-useful thoughts get thunk, but also because... it just kinda sucks and I don't get to be myself?

    (As well as: a fully-transparent-and-just world might still be a more stressful world to live in, and/or involve more cognitive overhead, because I'd need to model how others will think about me all the time. Hypothetically we could come to an equilibrium wherein we *don't* put extra effort into signaling legibly good thought processes. This is plausible, but it is indeed a background assumption of mine that this is not possible to run on human wetware)

    Comment by raemon on Privacy · 2019-03-15T22:53:52.943Z · score: 5 (4 votes) · LW · GW

    I think you're pointing in an important direction, but your phrasing sounds off to me.

    (In particular, 'scapegoating' feels like a very different frame than the one I'd use here)

    If I think out loud, especially about something I'm uncertain about, that other people have opinions on, a few things can happen to me:

  • Someone who overhears part of my thought process might think (correctly, even!) that my thought process reveals that I am not very smart. Therefore, they will be less likely to hire me. This is punishment, but it's very much not "scapegoating" style punishment.
  • Someone who overhears my private thought process might (correctly, or incorrectly! either) come to think that I am smart, and be more likely to hire me. This can be just as dangerous. In a world where all information is public, I have to attend to how the process by which I act and think looks. I am incentivized to think in ways that are legibly good.
  • "Judgment" is dangerous to me (epistemically) even if the judgment is positive, because it incentivizes me against exploring paths that look bad, or that are good for incomprehensible reasons.

    Comment by raemon on You Have About Five Words · 2019-03-15T20:56:11.481Z · score: 2 (1 votes) · LW · GW

    I think the near-synonym nature is more about convergent evolution (i.e. words aim to reflect concepts, and working memory is about handling concepts).

    https://en.wikipedia.org/wiki/Working_memory

    Comment by raemon on You Have About Five Words · 2019-03-15T18:11:23.026Z · score: 6 (3 votes) · LW · GW

    "Something that your mind thinks of as one unit, even if it's in fact a cluster of things."

    "Go to the store" is four words. But "go" actually means "stand up. Walk to the door. Open the door. Walk to your car. Open your car door. Get inside. Take the key out of your pocket. Put the key in the ignition slot..." etc. (Which are in turn actually broken into smaller steps like "lift your front leg up while adjusting your weight forward.")

    But, you are capable of taking all of that and chunking it as the concept "go somewhere" (as well as the meta-concept of "go to the place whichever way is most convenient, which might be walking or biking or taking a bus"), although if you have to use a form of transport you are less familiar with, remembering how to do it might take up a lot of working memory slots, leaving you liable to forget other parts of your plan.
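As a toy illustration of the chunking idea (my own sketch in TypeScript, not anything from the post; the names and the slot-counting rule are invented): a familiar plan occupies one working-memory slot, while an unfamiliar one has to be held unpacked, one slot per sub-step.

```typescript
// Toy illustration of chunking; invented names, not a claim about how memory actually works.
interface Chunk {
  label: string;
  subSteps?: Chunk[]; // present when the chunk can be unpacked further
}

const goToTheStore: Chunk = {
  label: "go to the store",
  subSteps: [
    { label: "stand up" },
    { label: "walk to the door" },
    { label: "drive to the store" }, // itself expandable: keys, ignition, ...
  ],
};

// A familiar routine costs one slot; an unfamiliar one costs a slot per sub-step.
function slotsUsed(chunk: Chunk, familiar: boolean): number {
  if (familiar || !chunk.subSteps) return 1;
  return chunk.subSteps.reduce((sum, step) => sum + slotsUsed(step, false), 0);
}

console.log(slotsUsed(goToTheStore, true));  // 1: leaves most of your ~7 slots free
console.log(slotsUsed(goToTheStore, false)); // 3: unfamiliar transport eats more slots
```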

    Comment by raemon on LW Update 2019-03-12 -- Bugfixes, small features · 2019-03-15T00:34:44.029Z · score: 2 (1 votes) · LW · GW

    Possibly the comment was from greaterwrong, or maybe markdown editor? (I'm not sure how either of those currently handle links)

    Comment by raemon on Risk of Mass Human Suffering / Extinction due to Climate Emergency · 2019-03-15T00:04:16.993Z · score: 4 (2 votes) · LW · GW

    I mostly endorse this, but want to add that the only proposal of "climate-change-driven extinction" I've heard about specifically requires human action, wherein we attempt to do a massive geoengineering project to curtail climate change that *requires continuous intervention*, and then prematurely *stop* that intervention.

    (This is half-remembered from a talk Seth Baum gave awhile back)

    Comment by raemon on You Have About Five Words · 2019-03-14T21:53:54.550Z · score: 5 (2 votes) · LW · GW

    I think the actual final limit is something like:

    Coordinated actions can't take up more bandwidth than someone's working memory (which is something like 7 chunks, and if you're using all 7 chunks then they don't have any spare chunks to handle weird edge cases).

    A lot of coordination (and communication) is about reducing the chunk-size of actions. This is why jargon is useful, habits and training are useful (as well as checklists and forms and bureaucracy), since that can condense an otherwise unworkably long instruction into something people can manage.

    "Go to the store and get eggs" comes with a bunch of implicit knowledge about cars or bikes or where the store is and what eggs are, etc.

    Comment by raemon on You Have About Five Words · 2019-03-14T21:02:37.450Z · score: 3 (2 votes) · LW · GW

    I suppose, but, again, "all numbers are made up" was the first sentence in this post, and half an order of magnitude feels within bounds of "the general point of the essay holds up."

    I also don't currently know of anyone writing on LessWrong or EA forum who should have reason to believe they are as coordinated as the neo-Nazis are here. (See elsethread comment on my take on the state of EA coordination, which was the motivation for this post).

    (In Romeo's terms, the neo-Nazis are also using a social tech with unfolding complexity, where their actual coordinated action is "recite the pledge every day", which lets them encode additional information. But to get this you need to spend your initial coordinated action on that unfolding action)

    Comment by raemon on You Have About Five Words · 2019-03-14T20:44:55.812Z · score: 7 (4 votes) · LW · GW

    Heh, "read the sequences" clocks in at 3 words.

    Comment by raemon on You Have About Five Words · 2019-03-14T00:32:52.125Z · score: 4 (2 votes) · LW · GW

    The whole point is that coordination looks different at different scales.

    So, I think I was looking at this through a nonstandard frame (Maybe more nonstandard than I thought). There are two different sets of numbers in this post:

    — 4.3 million words worth of nuance

    — 200,000 words of nuance

    — 50,000 words

    — 1 blogpost (1-2k words)

    — 4 words

    And separately:

    — 1-4 people

    — 10 people

    — 100 people

    — 1000 people

    — 10,000 people+

    While I'm not very confident about any of the numbers, I am more confident in the first set of numbers than the second set.

    If I look out into the world, I see clear failures (and successes) of communication strategies that cluster around different strata of communication bandwidth. And in particular, there is clearly some point at which the bandwidth collapses to 3-6 words.

    Comment by raemon on You Have About Five Words · 2019-03-13T22:14:40.609Z · score: 4 (2 votes) · LW · GW

    Nod. The claim here is specifically about how much nuance can be relevant to your coordination, not how many people you can coordinate with. (If this failed to come across, that also says something about communicating nuance being hard)

    Comment by raemon on You Have About Five Words · 2019-03-13T22:08:38.879Z · score: 2 (1 votes) · LW · GW

    I don't think this directly bears on how to build an action-coordination website, beyond that, in lieu of such a site, you should expect action coordination to succeed only at the 4-word level of complexity. I haven't thought as much about how to account for this when trying hard to build a coordination platform.

    But, I do think that kickstarters tend to succeed more if the 4-word version of them are intuitively appealing.

    Comment by raemon on You Have About Five Words · 2019-03-13T22:06:59.620Z · score: 2 (1 votes) · LW · GW

    (The mental action I was performing was "observing what seems to actually happen and then grabbing the numbers I remembered coinciding with those actions", rather than working backwards from a model of numbers. That may or may not have been a good procedure, but in any case it means that being off by a factor of 100 doesn't influence the surrounding text much)

    Comment by raemon on You Have About Five Words · 2019-03-13T21:51:55.158Z · score: 2 (1 votes) · LW · GW

    Whoops. I was confusing pages with words.

    Comment by raemon on You Have About Five Words · 2019-03-13T21:10:47.585Z · score: 2 (1 votes) · LW · GW

    I do think that'd be a valuable post (and that sort of thing is going on on the EA forum right now, with people proposing various ways to solve a particular scaling problem). I don't know that I have particularly good ideas there, although I do have some. The point of this post was just "don't be surprised when your message loses nuance if you haven't made special efforts to prevent it from doing so" (or when it gets out-competed by a less nuanced message that was designed to be scalable and/or viral)

    I wrote this post in part so that I could more easily reference it later, either when I have concrete ideas about what to do, or when I think someone is mistaken in their strategy because they're missing this insight.

    Comment by raemon on You Have About Five Words · 2019-03-13T20:33:50.022Z · score: 18 (3 votes) · LW · GW

    My claim is "a large number of people can't reasonably be expected to read more than a few words in common", which I think is subtly different (in addition to the thing where this post wasn't about ways to address the problem, it was about the default state of the problem in the absence of an explicit coordination mechanism)

    If your book-length-treatise reaches 1000 people, probably 10-50 of those people read the book and paid careful attention, 100 people read the book, a couple hundred people skimmed the book, and the rest just absorbed a few key points secondhand.

    I think it is in fact a failure of law that the law has grown to the point where a single person can't possibly know it all, and only specialists can know most of it (because this creates an environment where most people don't know what laws they're breaking, which enables certain kinds of abuse)

    I think the way EA and LessWrong work is that there's a large body of work people are vaguely expected to read (in the case of LessWrong, I think the core sequences are around a million words [edit: I initially was using my cached page count rather than word count]; not sure how big the overall EA corpus is). EA and LW are filtered for "nerds who like to read", so you get to be on the higher end of the spectrum of how many people have read how much.

    But, it still seems like a few things end up happening:

    Important essays definitely lose nuance. "Politics is the Mind-Killer" is one of the common examples of something where the original essay got game-of-telephoned pretty hard by oral culture.

    Similarly, EA empirically runs into messaging issues where, even though 80k had intentionally tried to downplay the "Earning to Give" recommendation, people still primarily associated 80k with Earning to Give years later. And when they finally successfully switched the message to "EA is talent constrained", that got misconstrued as well.

    Empirically, people also successfully rely on a common culture to some degree. My sense is that the people who tend to do serious work and get jobs and stick around are ones who have read at least a good chunk of the words, and they somewhat filter themselves into groups that have read particular subsets. The fact that there are 1000+ people misunderstanding "Politics is the Mind-Killer" doesn't mean there aren't also 100-200 people who remember the original claim.

    (There are probably different clusters of people who have read different clusters of words, i.e. people who have read the sequences, people who have read Doing Good Better, people who have read a smattering of essays from each as well as the old Givewell blogs, etc)

    One problem facing EA is that there is not much coordination on which words are the right ones to read. Doing Good Better was written with a goal of being "the thing you gave people as their cultural onboarding tool", AFAICT. But which 80k essays are you supposed to have read? All of them? I dunno, that's a lot and I certainly haven't, and it's not obvious that that's a better use of my time than reading up on machine learning or the AI Alignment Forum or going off to learn new things that aren't part of the core community material.

    Comment by raemon on You Have About Five Words · 2019-03-13T20:09:14.539Z · score: 3 (2 votes) · LW · GW

    Nod. Social pressure and/or organizational efforts to read a particular thing together (esp. in public where everyone can see that everyone else is reading) does seem like a thing that would work.

    It comes with drawbacks such as "if it turns out you need to change the 80,000 word text because you picked the wrong text or need to amend it, I expect there to be a lot of political drama surrounding that, and the process by which people build momentum towards changing it would probably be subject to the bandwidth limits I'm pointing to [edit: unless the organization has specifically built in tools to alleviate that]"

    (Reminder that I specifically said "all numbers are made up and/or sketchily sourced". I'm pointing to order of magnitude. I did consider naming this blogpost "you have about five words" or "you have less than seven words". I think it was a somewhat ironic failure of mine that I went with "you have four words" since it degrades less gracefully than "you have about five words.")

    Comment by raemon on You Have About Five Words · 2019-03-13T20:00:38.121Z · score: 8 (4 votes) · LW · GW

    The point is not "rationing out your words" is the correct way to coordinate people. The point is that you need to attend, as part of your coordination strategy, to the fact that most people won't read most of your words. Insofar as your coordination strategy relies on lots of people hearing an idea, the idea needs to degrade gracefully as it loses bandwidth.

    Walmart I expect to do most of its coordination via oral tradition. (At the supermarket I worked at, I got one set of cultural onboarding from the store manager, who gave a big speech... which began and ended with a reminder that "the four virtues of the Great Atlantic and Pacific Tea company are integrity, respect, teamwork and responsibility." Then, I learned most of the minutiae of how to run a cash register, do janitorial duties or be a baker via on-the-job training, by someone who spent several weeks telling me what to do and giving me corrective feedback)

    (Several years later, I have some leftover kinesthetic knowledge of how to run a cash register, and the dangling words "integrity, respect, teamwork, responsibility" in my head, although also I probably only have that because I thought the virtues were sort of funny and wrote a song about it)

    Comment by raemon on You Have About Five Words · 2019-03-13T19:54:05.627Z · score: 2 (1 votes) · LW · GW

    I'm actually two levels of surprised here. I'd have naively expected McCain, Romney and Hillary to have competent enough staffers to make sure they had a slogan, and sort of passively assumed they had one. It'd be surprising if they didn't have one, and if they did have one, surprising that I hadn't heard it. (I hung out in blue tribe spaces so it's not that weird that I'd have failed to hear McCain's or Romney's)

    Quick googling says that Hillary's team thought about 84 slogans before settling on "Stronger Together", which I don't remember hearing. (I think instead I heard a bunch of anti-Trump slogans like "Love Trumps Hate", which maybe just outcompeted it?)

    Comment by raemon on You Have About Five Words · 2019-03-13T19:50:11.615Z · score: 12 (6 votes) · LW · GW

    So, I think I optimized this piece a bit too much as poetry at the expense of clarity. (I was trying to keep it brief overall, and have the sections sort of correspond in length to how much reading you could expect people to read at that scale).

    Obviously people in the real world do successfully coordinate on things, and this piece doesn't address the various ways you might try to do so. The core claim here is just that if you haven't taken some kind of special effort to ensure your nuanced message will scale, it will probably not scale.

    Hierarchies are a way to address the problem. Oral tradition that embeds itself in people's socializing process is a way to address the problem. Smaller groups is a way to address the problem. Social pressure to read a specific thing is a way to address the problem. But each of these address it only in particular ways and come with particular tradeoffs.

    Comment by raemon on Blegg Mode · 2019-03-13T01:16:06.929Z · score: 9 (3 votes) · LW · GW

    Perhaps also worth noting: I was looking through two other recent posts, Tale of Alice Almost and In My Culture, through a similar lens. They each give me the impression that they are relating in some way to a political dispute which has been abstracted away, with a vague feeling that the resulting post may somehow still be a part of the political struggle.

    I'd like to have a moderation policy (primarily about whether such posts get frontpaged) that works regardless of whether I actually know anything about any behind-the-scenes drama. I've mulled over a few different such policies, each of which would result in different outcomes as to which of the three posts would get frontpaged. But in each case the three posts are hovering near the edge of however I'd classify them.

    (The mod team was fairly divided on how important a lens this was and/or exactly how to think about it, so just take this as my own personal thoughts for now)

    Comment by raemon on Blegg Mode · 2019-03-13T00:57:51.533Z · score: 9 (3 votes) · LW · GW

    Quick note that the mod team had been observing this post and the surrounding discussion and not 100% sure how to think about it. The post itself is sufficiently abstracted that unless you're already aware of the political discussion, it seemed fairly innocuous. Once you're aware of the political discussion it's fairly blatant. It's unclear to me how bad this is.

    I do not have much confidence in any of the policies we could pick and stick to here. I've been mostly satisfied with the resulting conversation on LW staying pretty abstract and meta level.

    Comment by raemon on You Have About Five Words · 2019-03-13T00:50:33.186Z · score: 2 (5 votes) · LW · GW

    Yes, but it's important to note that if you haven't purposefully built that hierarchy, you can't rely on it existing. (And, it's still a fairly common problem within an org for communication to break down as it scales – I'd argue that most companies don't end up successfully solving this problem)

    The motivating example for this post at-the-time-of-writing was that in the EA sphere, there's a nuanced claim made about "EA being talent constrained", which large numbers of people misinterpreted to mean "we need people who are pretty talented" and not "we need highly specific talents, and the reason EA is talent constrained is that the median EA does not have these talents."

    There were nuanced blogposts discussing it, but in the EAsphere, the shared information is capped at roughly "1 book worth of content and jargon, which needs to cover a diverse array of concepts, so any given concept won't necessarily have much nuance", and in this case it appeared to hit the literal four word limit.

    Comment by raemon on What Vibing Feels Like · 2019-03-12T21:57:01.059Z · score: 11 (4 votes) · LW · GW

    FYI, the way I define "instrumental rationality", which I think is "specific enough to be useful" without being so specific as to be overly constraining, is:

    "The study of how to improve your cognitive processes in order to make better decisions."

    And for epistemic rationality, "the study of how to improve your cognitive processes in order to have beliefs that more accurately reflect reality."

    In both cases, I think it's actually plausible that the OP has some bearing. [Although, disclaimer that I'm not 100% sure I grok the OP, and might be talking about something subtly different].

    I think it's a crucial rationalist skill to be able to apply "thinking" of the sort the OP is gesturing at avoiding. But, it seems quite important to me that there are types of ideas and literal truths that are harder to grasp if you're only capable of thinking in a highly analytical way.

    One lens to look at this through: You need to both babble and prune in order to find useful things to say, and generating good babble can involve a lot of cognitive work that looks superficially irrational. This is fine, although you will eventually want to make sure the babble can pass through some kind of pruning filter.

    • example 1: drawing connections between things you wouldn't otherwise be able to notice, even if the reason you drew those connections didn't make much sense – e.g. you happened to be staring at a tree and it pointed you towards a tree metaphor
    • example 2: if you're highly engaging your prune module to check if things make sense, you may be overly committing to a given ontology that isn't actually quite right. Or your S1 might be picking up on things that are important that you can't fully articulate, and you're tempted to throw that information out completely rather than stew on it until you have a better idea of what's going on.

    This isn't an argument for "vibing" in particular being useful, but it's a more general argument that even epistemic rationality often requires you to operate in modes that seem superficially "a-rational", to give you access to more ideas and information.

    Elsethread I used music as an example of something that requires good vibes in order to execute well, and I think there are kinds of intellectual creativity that are more in the genre-of-rationality (i.e. puzzle solving) that may also benefit from being in a playful mode, although I'm less confident about the details of this.

    You Have About Five Words

    2019-03-12T20:30:18.806Z · score: 53 (19 votes)
    Comment by raemon on Renaming "Frontpage" · 2019-03-12T20:20:17.254Z · score: 2 (1 votes) · LW · GW

    Yeah, "Core" felt more like it should be "core reading that you're expected to have an understanding of if you're participating on the site" a la sequences.

    Comment by raemon on How much funding and researchers were in AI, and AI Safety, in 2018? · 2019-03-12T20:19:16.026Z · score: 12 (3 votes) · LW · GW

    Nod. Definitely open to better versions of the question that carve at more useful joints. (With the caveat that the question is more oriented towards "what are the easiest street lamps to look under" than towards "what is the best approximation")

    So, I guess my return question is: do you have suggestions on subfields to focus on, or exclude, from "AI capabilities research" that more reliably points to "AGI", that you think there's likely to exist public data on? (Or some other way to carve up AI research space)

    It does seem good to have a separate category for "things like removing bias from word embeddings" that is separate from "Technical AGI alignment". (I think it's still useful to have a sense of how much effort humanity is putting into that, just as a rough pointer at where our overall priorities are)

    Comment by raemon on Feature Wish List for LessWrong · 2019-03-12T04:16:36.227Z · score: 2 (1 votes) · LW · GW

    Yeah. We've been thinking about this quite a bit although it'll still be awhile before we get to it.

    Comment by raemon on What Vibing Feels Like · 2019-03-12T02:06:48.024Z · score: 10 (5 votes) · LW · GW

    I'm not 100% sure I grokked this post, but a thing that it makes me think of, which may or may not be relevant:

    One thing that's particularly hard (in my experience) is to maintain positive vibes while giving critical feedback. It's not just that you need to couch the feedback in a way that doesn't sting. Even if it doesn't sting, it can shift someone out of a satisfying and/or useful flow state.

    This is particularly important in domains where being in a state of playfulness or flow is important to whatever your task is. The most salient domain to me is music.

    I have observed people who can totally give feedback to a group of musicians that feels like it takes the energy of the room, transforms it somewhat, and returns it to the musicians with words like "yeah man! groovy. Let's try it again and try out [x quality]."

    Whereas when I'm music-directing and hear something that sounds off, the default thing that happens is that I stop, think for a minute (and I think have a somewhat angry looking expression on my face because that's what my resting-thinking-face looks like), and then figure out what to say and then say it, and then by that point it doesn't matter how I word it or what expression I have, I've already harmed the energy in the room.

    The solution as I understand it tends to involve two things:

    • cultivating "resting good vibes" as an overall stance, which is somewhat complicated (and maybe is what the OP is describing, not sure if we're talking about the same thing), so that even if you're sitting and thinking for a minute it communicates something more like tranquility than "I'm thinking about how to politely criticize you."
    • gaining domain expertise in the subject matter you're critiquing, so that you don't have to pause as much, since part of what kills the energy is the pause itself. If it takes me a minute to figure out what was wrong with a thing, then at best I can transmute the energy from "high excitement" to "tranquil reflection", and sometimes I really needed the high excitement. Whereas nowadays, I'm slightly better than I was 5 years ago at quickly noticing what was off about a thing and having a cached way of talking about it.

    Comment by raemon on In My Culture · 2019-03-11T22:58:40.678Z · score: 6 (3 votes) · LW · GW

    Random additional note: introspection is a skill, and extrospection is a skill, and part of what feeds into my "it seems like this is complicated" belief is that people can be good or bad at both, and common knowledge about who is good or bad at either is hard to establish.

    Comment by raemon on In My Culture · 2019-03-11T22:57:29.467Z · score: 15 (5 votes) · LW · GW
    Or at least, it seems to me that there's a principle of "don't claim you understand better than others what's going on in their heads" in the shared context of people you and I hang out with. But maybe I'm mistaken? Maybe this is not the case, and in fact that is just another piece of my personal culture?

    My read on the context-culture is that this isn't very agreed upon, and/or depends a lot on context. (I had a sense that this particular point was probably the thing that triggered this entire post, but was waiting to talk about that until I had time to think seriously about it)

    [Flagging: what follows is my read on the rationalist context culture, which... somewhat ironically can't make much use of the technique suggested in the OP. I'm trying to stick to descriptive claims about what I've observed, and a couple of if-then statements which I think are locally valid]

    A founding principle of the rationality community is "people are biased and confused a lot, even smart people, even smart people who've thought about it a bit". So it seemed to me that if the rationality community was going to succeed at the goal of "help people become less confused and more rational", some kind of social move in the space of "I think you're more confused or blind-spotted than you realize" is necessary, at least sometimes.

    But it's also even easier to be wrong about what's going on in someone else's head than what's going on in your own head. And there are also sometimes incentives to use "I think someone is being confused" as a social weapon. And making a claim like that and getting it wrong has costs of its own.

    My observations are that rationalists do sometimes do this (in Berkeley and on LW and elsewhere), and it often goes poorly unless there is a lot of trust or a lot of effort is put in, but it doesn't feel like there's much of the collective immune response that I'd expect to see if it were an established norm.

    Comment by raemon on Renaming "Frontpage" · 2019-03-10T05:50:18.227Z · score: 12 (3 votes) · LW · GW

    Brainstorm:

    I actually kind of like 'whiteboard', which sounds specific enough to mean something and provides some vague connotations that point in the right direction, but isn't the sort of thing you'd think you understand well enough to have strong opinions about before mousing over it and getting a tooltip.

    (Intended connotation is "the place where you write down ideas and models and observations, which are clearly supposed to be in the overall genre of science, but not (necessarily) late-stage-high-effort-peer-reviewed science")

    Comment by raemon on Renaming "Frontpage" · 2019-03-10T05:38:44.845Z · score: 7 (3 votes) · LW · GW
    More generally, I'd recommend that each category has a name that bluntly states what the filter does (e.g. if it only uses karma as filter say "high karma").

    I agree with this in principle; the trick is that a) the filter here is "the mods have judged this to fit into a loose cluster of explanations/explorations-that-don't-hinge-on-recent-events", and b) we specifically don't want the connotation to be "and therefore this is better" (in an overall abstract sense), nor is the intent for it to be a quality signal so much as a genre-signal.

    So "moderator's pick" or "favored" or even "original ideas" doesn't really capture it. "Mostly ideas" sort of gets closest of the above suggestions, mostly by virtue of communicating that the filter is vague.

    Renaming "Frontpage"

    2019-03-09T01:23:05.560Z · score: 44 (13 votes)

    How much funding and researchers were in AI, and AI Safety, in 2018?

    2019-03-03T21:46:59.132Z · score: 40 (7 votes)

    LW2.0 Mailing List for Breaking API Changes

    2019-02-25T21:23:03.476Z · score: 12 (3 votes)

    How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative?

    2019-02-21T21:36:07.707Z · score: 24 (8 votes)

    If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix?

    2019-02-21T21:32:56.366Z · score: 47 (15 votes)

    Avoiding Jargon Confusion

    2019-02-17T23:37:16.986Z · score: 50 (18 votes)

    The Hamming Question

    2019-02-08T19:34:33.993Z · score: 31 (10 votes)

    Should questions be called "questions" or "confusions" (or "other")?

    2019-01-22T02:45:01.211Z · score: 17 (6 votes)

    What are the open problems in Human Rationality?

    2019-01-13T04:46:38.581Z · score: 60 (18 votes)

    LW Update 2019-1-09 – Question Updates, UserProfile Sorting

    2019-01-09T22:34:31.338Z · score: 30 (6 votes)

    Open Thread January 2019

    2019-01-09T20:25:02.716Z · score: 24 (6 votes)

    Events in Daily?

    2019-01-02T02:30:06.788Z · score: 16 (5 votes)

    What exercises go best with 3 blue 1 brown's Linear Algebra videos?

    2019-01-01T21:29:37.599Z · score: 30 (8 votes)

    Thoughts on Q&A so far?

    2018-12-31T01:15:17.307Z · score: 26 (7 votes)

    Can dying people "hold on" for something they are waiting for?

    2018-12-27T19:53:35.436Z · score: 27 (9 votes)

    Solstice Album Crowdfunding

    2018-12-18T20:51:31.183Z · score: 39 (11 votes)

    How Old is Smallpox?

    2018-12-10T10:50:33.960Z · score: 39 (13 votes)

    LW Update 2018-12-06 – All Posts Page, Questions Page, Posts Item rework

    2018-12-08T21:30:13.874Z · score: 18 (3 votes)

    What is "Social Reality?"

    2018-12-08T17:41:33.775Z · score: 24 (7 votes)

    LW Update 2018-12-06 – Table of Contents and Q&A

    2018-12-08T00:47:09.267Z · score: 58 (14 votes)

    On Rationalist Solstice and Epistemic Caution

    2018-12-05T20:39:34.687Z · score: 59 (22 votes)

    Anyone use the "read time" on Post Items?

    2018-12-01T23:16:23.249Z · score: 21 (6 votes)

    Winter Solstice 2018 Roundup

    2018-11-28T03:09:44.938Z · score: 55 (17 votes)

    Upcoming: Open Questions

    2018-11-24T01:39:33.385Z · score: 43 (14 votes)

    LW Update 2018-11-22 – Abridged Comments

    2018-11-22T22:11:10.960Z · score: 12 (8 votes)

    Introducing the AI Alignment Forum (FAQ)

    2018-10-29T21:07:54.494Z · score: 88 (29 votes)

    [Beta] Post-Read-Status on Lessestwrong

    2018-10-25T23:13:00.775Z · score: 23 (5 votes)

    Open Source Issue Roundup

    2018-10-06T20:09:32.257Z · score: 25 (7 votes)

    Being a Robust Agent

    2018-10-04T21:58:25.522Z · score: 66 (29 votes)

    LW Update 2018-10-01 – Private Messaging Works

    2018-10-01T21:28:47.017Z · score: 34 (8 votes)

    Modes of Petrov Day

    2018-09-20T18:48:59.140Z · score: 67 (22 votes)

    LW Update 2018-09-18 – Email Subscriptions for Curated

    2018-09-19T00:30:57.974Z · score: 32 (8 votes)

    Moderation Reference

    2018-09-12T19:06:57.443Z · score: 31 (12 votes)

    LW Update 2018-08-23 – Performance Improvements

    2018-08-24T02:02:30.916Z · score: 20 (7 votes)

    [Feature Idea] Epistemic Status

    2018-08-21T20:22:45.687Z · score: 39 (14 votes)

    How to Build a Lumenator

    2018-08-12T05:11:06.715Z · score: 43 (15 votes)

    LW Update 2018-08-03 – Comment Sorting

    2018-08-03T23:39:48.114Z · score: 25 (7 votes)

    Strategies of Personal Growth

    2018-07-28T18:27:06.763Z · score: 112 (52 votes)

    LW Update 2018-07-27 – Sharing Drafts

    2018-07-28T02:54:36.835Z · score: 34 (12 votes)

    Would you benefit from audio versions of posts?

    2018-07-26T04:53:20.733Z · score: 24 (8 votes)

    Replace yourself first if you're moving to the Bay

    2018-07-22T20:57:25.903Z · score: 58 (32 votes)

    LW Update 2018-07-18 – AlignmentForum Bug Fixes

    2018-07-19T02:10:57.487Z · score: 13 (3 votes)

    LW Update 2018-7-14 – Styling Rework, CommentsItem, Performance

    2018-07-14T01:13:17.998Z · score: 31 (9 votes)

    Announcing AlignmentForum.org Beta

    2018-07-10T20:19:41.201Z · score: 71 (34 votes)

    Stories of Summer Solstice

    2018-07-08T07:16:10.473Z · score: 56 (17 votes)

    LW Update 2018-07-01 – Default Weak Upvotes

    2018-07-01T23:47:06.434Z · score: 30 (10 votes)

    LessWrong is hiring

    2018-06-19T01:38:56.783Z · score: 75 (23 votes)

    LW Update 2018-06-11 – Vulcan Refactor, Karma Overhaul, Colored Links, Moderation Log

    2018-06-12T00:49:05.508Z · score: 34 (17 votes)

    Post EA-Global Debrief

    2018-06-11T02:56:06.480Z · score: 13 (3 votes)