Posts

How would we check if "Mathematicians are generally more Law Abiding?" 2020-01-12T20:23:05.479Z · score: 28 (5 votes)
Please Critique Things for the Review! 2020-01-11T20:59:49.312Z · score: 51 (13 votes)
Clumping Solstice Singalongs in Groups of 2-4 2020-01-05T20:50:51.247Z · score: 15 (2 votes)
Meta-discussion from "Circling as Cousin to Rationality" 2020-01-03T21:38:16.387Z · score: 12 (5 votes)
Voting Phase UI: Aggregating common comments? 2019-12-31T03:48:41.024Z · score: 10 (1 votes)
What are the most exciting developments from non-Europe and/or non-Northern-Hemisphere? 2019-12-29T01:30:05.246Z · score: 13 (2 votes)
Propagating Facts into Aesthetics 2019-12-19T04:09:17.816Z · score: 83 (24 votes)
"You can't possibly succeed without [My Pet Issue]" 2019-12-19T01:12:15.502Z · score: 53 (24 votes)
Karate Kid and Realistic Expectations for Disagreement Resolution 2019-12-04T23:25:59.608Z · score: 80 (27 votes)
What are the requirements for being "citable?" 2019-11-28T21:24:56.682Z · score: 44 (11 votes)
Can you eliminate memetic scarcity, instead of fighting? 2019-11-25T02:07:58.596Z · score: 66 (22 votes)
The LessWrong 2018 Review 2019-11-21T02:50:58.262Z · score: 102 (28 votes)
Picture Frames, Window Frames and Frameworks 2019-11-03T22:09:58.181Z · score: 26 (6 votes)
Healthy Competition 2019-10-20T20:55:48.265Z · score: 57 (21 votes)
Noticing Frame Differences 2019-09-30T01:24:20.435Z · score: 138 (50 votes)
Meetups: Climbing uphill, flowing downhill, and the Uncanny Summit 2019-09-21T22:48:56.004Z · score: 27 (6 votes)
[Site Feature] Link Previews 2019-09-17T23:03:12.818Z · score: 35 (9 votes)
Modes of Petrov Day 2019-09-17T02:47:31.469Z · score: 68 (26 votes)
Are there technical/object-level fields that make sense to recruit to LessWrong? 2019-09-15T21:53:36.272Z · score: 26 (10 votes)
September Bragging Thread 2019-08-30T21:58:45.918Z · score: 52 (15 votes)
OpenPhil on "GiveWell’s Top Charities Are (Increasingly) Hard to Beat" 2019-08-24T23:28:59.705Z · score: 11 (2 votes)
LessLong Launch Party 2019-08-23T22:18:39.484Z · score: 13 (4 votes)
Do We Change Our Minds Less Often Than We Think? 2019-08-19T21:37:08.004Z · score: 21 (3 votes)
Raph Koster on Virtual Worlds vs Games (notes) 2019-08-18T19:01:53.768Z · score: 22 (11 votes)
What experiments would demonstrate "upper limits of augmented working memory?" 2019-08-15T22:09:14.492Z · score: 30 (12 votes)
Partial summary of debate with Benquo and Jessicata [pt 1] 2019-08-14T20:02:04.314Z · score: 90 (27 votes)
[Site Update] Weekly/Monthly/Yearly on All Posts 2019-08-02T00:39:54.461Z · score: 36 (8 votes)
Gathering thoughts on Distillation 2019-07-31T19:48:34.378Z · score: 36 (9 votes)
Keeping Beliefs Cruxy 2019-07-28T01:18:13.611Z · score: 53 (21 votes)
Shortform Beta Launch 2019-07-27T20:09:11.599Z · score: 72 (20 votes)
Can you summarize highlights from Vernon's Creativity? 2019-07-26T01:12:31.724Z · score: 16 (4 votes)
"Shortform" vs "Scratchpad" or other names 2019-07-23T01:21:48.979Z · score: 15 (2 votes)
Should I wear wrist-weights while playing Beat Saber? 2019-07-21T19:56:54.102Z · score: 8 (2 votes)
Robust Agency for People and Organizations 2019-07-19T01:18:53.416Z · score: 53 (19 votes)
Doublecrux is for Building Products 2019-07-17T06:50:26.409Z · score: 32 (10 votes)
"Rationalizing" and "Sitting Bolt Upright in Alarm." 2019-07-08T20:34:01.448Z · score: 31 (11 votes)
LW authors: How many clusters of norms do you (personally) want? 2019-07-07T20:27:41.923Z · score: 40 (9 votes)
What product are you building? 2019-07-04T19:08:01.694Z · score: 41 (22 votes)
How to handle large numbers of questions? 2019-07-04T18:22:18.936Z · score: 13 (3 votes)
Opting into Experimental LW Features 2019-07-03T00:51:19.646Z · score: 21 (5 votes)
How/would you want to consume shortform posts? 2019-07-02T19:55:56.967Z · score: 20 (6 votes)
What's the most "stuck" you've been with an argument, that eventually got resolved? 2019-07-01T05:13:26.743Z · score: 15 (4 votes)
Do children lose 'childlike curiosity?' Why? 2019-06-29T22:42:36.856Z · score: 44 (14 votes)
What's the best explanation of intellectual generativity? 2019-06-28T18:33:29.278Z · score: 30 (8 votes)
Is your uncertainty resolvable? 2019-06-21T07:32:00.819Z · score: 32 (17 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 100 (54 votes)
Ramifications of limited positive value, unlimited negative value? 2019-06-09T23:17:37.826Z · score: 11 (6 votes)
The Schelling Choice is "Rabbit", not "Stag" 2019-06-08T00:24:53.568Z · score: 108 (44 votes)
Seeing the Matrix, Switching Abstractions, and Missing Moods 2019-06-04T21:08:28.709Z · score: 32 (20 votes)
FB/Discord Style Reacts 2019-06-01T21:34:27.167Z · score: 77 (19 votes)

Comments

Comment by raemon on Bay Solstice 2019 Retrospective · 2020-01-17T06:07:24.353Z · score: 26 (9 votes) · LW · GW

You can watch it here.

Comment by raemon on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T02:42:45.383Z · score: 4 (2 votes) · LW · GW

The review has definitely had an effect on how I look at new posts, thinking "which of these would I feel good about including in a Best of the Year book?" as well as "which of these would I feel good about including in an actual textbook?"

This post is sort of on the edge of "timeless enough that I think it'd be fine for the 2020 Review", but I'm not sure whether it's quite distilled enough to fit nicely into, say, the 2021 edition of "the LessWrong Textbook." (this isn't necessarily a complaint about the post, just noting that different posts can be optimized for different things)

Comment by raemon on Bay Solstice 2019 Retrospective · 2020-01-17T00:45:21.717Z · score: 7 (3 votes) · LW · GW

So I made my own spreadsheet, which is publicly editable and incorporates every song, poem, story, and speech from the above two repositories.

This looks pretty useful, thanks!

Comment by raemon on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T00:13:07.945Z · score: 7 (3 votes) · LW · GW

Curated, with some thoughts:

I think the question of "how to safely change the way you think, in a way that preserves a lot of commonsense things" is pretty important. This post gave me a bit of a clearer sense of the "Valley of Bad Rationality" problem.

This post also seemed like part of the general project of "reconciling CFAR's paradigm(s?) with the established LessWrong framework." In this case I'm not sure it precisely explains any parts of CFAR that people tend to find confusing. But it does lay out some frameworks that I expect to be helpful groundwork for that.

I shared some of Ben's confusion re: what point the post was specifically making about puzzles:

I guess this generally connects with my confusion around the ontology of the post. I think it would make sense for the post to be 'here are some problems where puzzling at them helped me understand reality' and 'here are some problems where puzzling at them caused me to hide parts of reality from myself', but you seem to think it's an attribute of the puzzle, not the way one approaches it, and I don't have a compelling sense of why you think that.

There were some hesitations I had about curating it – to some degree, this post is a "snapshot of what CFAR is doing in 2020", which is less obviously "timeless content". The post depends a fair bit on the reader already knowing what CFAR is and how they relate to LessWrong. But the content was still focused on explaining concepts, which I expect to be generally useful.

Comment by raemon on Against Rationalization II: Sequence Recap · 2020-01-16T23:29:42.772Z · score: 3 (1 votes) · LW · GW

Congrats! Note that if you go to the library page and scroll down a bit, you'll find a "create sequence" button, which you can use if you want to create a formal sequence for this. 

(Also happy to help with this if the UI is confusing – we haven't really optimized our sequence UI as much as we'd like)

Comment by raemon on Please Critique Things for the Review! · 2020-01-16T21:09:17.656Z · score: 5 (2 votes) · LW · GW

Also, I haven't voted yet because I don't remember the details of the vast majority of the posts, and don't feel comfortable just voting based on my current general feeling about each post

Reminder here that it's pretty fine to vote proportional to "how good the post seems" and "how confident you are in that assessment." (i.e. I expect it to improve the epistemic value of the vote if people in your reference class weakly vote on the posts that seem good)

Comment by raemon on Please Critique Things for the Review! · 2020-01-16T21:05:18.096Z · score: 5 (2 votes) · LW · GW

I think if there was a period where every few days a mod would post a few nominated posts and ask people to re-read and re-discuss them, that might have helped to engage people like me more. (Although honestly there's so much new content on LW competing for attention now that I might not have participated much even in that process.)

That's a pretty good idea, might try something like that next year.

the ones that did jump out at me I think I already commented on back when they were first posted and don't feel motivated to review them now.

Not sure how helpful this is, but fwiw: 

I think it's useful for post authors to write reviews basically saying "here is how my thinking has evolved since writing this" and/or "yup, I still just endorse this and think it's great."

In the same way, I think it'd be useful if people who did most of their commenting back in the day wrote a short review that basically says "I still endorse the things I said back then", or "my thinking has changed a bit, here's how." (As I noted elsethread, I think it was also helpful when Vanessa combined several previous comments into one more distilled comment, although obviously that's a bit more work.)

Comment by raemon on Bay Solstice 2019 Retrospective · 2020-01-16T20:53:26.996Z · score: 7 (3 votes) · LW · GW

Yeah, wanted to basically just echo these points.

Comment by raemon on The Rocket Alignment Problem · 2020-01-16T19:48:32.630Z · score: 3 (1 votes) · LW · GW

FYI I also didn’t learn much from this post. (But, the places I did learn it from were random comments buried in threads that didn’t make it easy for people to learn)

Comment by raemon on Please Critique Things for the Review! · 2020-01-16T18:41:38.927Z · score: 5 (2 votes) · LW · GW

Nod. Something perhaps worth saying explicitly was that I was expecting / hoping for each longtime user to review a smallish number of things (like, 1-5) over the course of the monthlong review process, focusing on posts that they had some kind of strong opinion about.

(Some people have done lots of smaller reviews, which I also think is good but for different reasons, and not something I think people should be feeling pressure to do if they’re not finding it worthwhile.)

Comment by raemon on tragedyofthecomments's Shortform · 2020-01-16T06:41:42.052Z · score: 8 (4 votes) · LW · GW

I assume tragedy is referring to roughly that sort of statement, and inferring something about how the statement comes across or what it sounds like the person is imagining. 

I think 'the bay area should' is a somewhat confused statement, or one that comes from a mistaken sense of what's going on. And there's a particular flavor of frustration that comes from thinking that there's actually some entity that has the power to do stuff, which doesn't exist, and I think if you properly understood that the entity doesn't exist you'd do some combination of "redirecting your energy towards things that are more likely to fix the problem" or "realize that being frustrated in the particular way that you are isn't actually helping."

(where I think "things that might actually work" are "refactor your social environment into something that has boundaries and goals, and figure out how to be a leader." The main problem is that the Bay Area is leadership bottlenecked, and that generally competent people are rare and the world is big, with many problems competing for their attention)

Comment by raemon on Go F*** Someone · 2020-01-16T02:31:52.320Z · score: 10 (5 votes) · LW · GW

I think the argument is that capitalism is incentivized to keep you lonely so you buy more stuff for exactly the reason you describe.

Comment by raemon on The Tails Coming Apart As Metaphor For Life · 2020-01-16T00:28:19.206Z · score: 3 (1 votes) · LW · GW

Gotcha. Yeah that makes sense.

Comment by raemon on The Tails Coming Apart As Metaphor For Life · 2020-01-15T22:53:42.618Z · score: 3 (1 votes) · LW · GW

Hmm, okay yeah that makes sense. I think my initial confusion is something like "the most interesting takeaway here is not the part where the predictor regressed to the mean, but that extreme things tend to be differently extreme on different axes."

(At least, when I refer mentally to "tails coming apart", that's the thing I tend to mean)

Comment by raemon on The Tails Coming Apart As Metaphor For Life · 2020-01-15T22:38:39.099Z · score: 3 (1 votes) · LW · GW

How related is this to regression to the mean? It seems like a quite different phenomenon at first glance to me.

Comment by raemon on Local Validity as a Key to Sanity and Civilization · 2020-01-15T00:03:48.018Z · score: 1 (2 votes) · LW · GW

I'm actually somewhat rolling to disbelieve on how frequently people link to the article (I guess because of my own experience which I'm now typical minding).

I haven't personally linked to this because someone was failing at local validity. What I've done is refer to this article while (attempting to) work towards a deeper, comprehensive culture that takes it seriously. What I found valuable about this was not "here's a thing that previously we didn't know and now we know it", it's "we were all sort of acting upon this belief, but not in a coordinated fashion. Now we have a good reference and common knowledge of it, which enables us to build off of it." 

Comment by raemon on How to Identify an Immoral Maze · 2020-01-13T21:50:19.467Z · score: 7 (3 votes) · LW · GW

Curated, after chatting a bit with Zvi about a better intro.

I noted previously that this laid out gears that seemed clear and easy to reason about, which seemed quite useful. The concepts here seem quite important to consider for anyone building an organization, or organizational ecosystem. This seemed like a significant hack-away-at-the-edges attempt at the questions that originally prompted Inadequate Equilibria.

I'm interested in followup work that checks into the "Moral Mazes are Common and Damaging" hypothesis a bit more empirically (how common? how damaging?)

Comment by raemon on Naming the Nameless · 2020-01-13T00:31:01.286Z · score: 3 (1 votes) · LW · GW

I still feel some desire to finish up my "first pass 'help me organize my thoughts' review". I went through the post, organizing various claims and concepts. I came away with the main takeaway "Wowzers there is so much going on in this post. I think this could have been broken up into a full sequence, each post of which was saying something pretty important." 

There seem to be four major claims/themes here:

  • Aesthetics matter, being style-blind or style-rejecting puts you at a disadvantage
  • It is particularly disadvantageous to cede "the entire concept of aesthetics" to your political opponents.
  • Aesthetics impact your beliefs, in sneaky ways that it'd be good to be able to examine carefully.
  • There's some tension between creators and expanders. "Scaling up" an aesthetic is a new thing (only a few centuries old) and we haven't yet figured out a way to do it that creators tend to be happy with.

I basically agree with each claim, although each of them depends on some vague assumptions that are hard to check empirically.

...

Meanwhile, here's my overall summary of this post's claims:

Overview of Aesthetics and Style Blindness

  • Aesthetics (such as use of color) don't necessarily intrinsically mean anything, but they often do mean something in a particular cultural context.
  • "Subcultural sublimation" – Our physical environment is built by corporations that employ designers, who in turn take inspiration from creative subcultures.
    • Tastemakers are a small proportion of the population, but have a disproportionate impact on what our visual world ends up looking like.
  • Commercial design ultimately borrows from creatives who are politically opposed to business and resent this commercial appropriation.
  • There exists a spectrum from 'style-blind' to 'style-sensitive' to 'style-experts'.

Politics and Style

  • Artists tend to be on the political left; arts and media occupations are among the most heavily weighted towards Democrats over Republicans.
    • [Is this true (in America?). If so, is it true in other countries?]
  • Abandoning "aesthetics as a whole" to your political opponents is probably a bad strategy (particularly relevant to libertarians)
  • Aesthetics relates to intellectual pursuits, like seeing ideas as cringy. Being able to tell why you see ideas as cringy is important.
  • There are some common defensive postures people take:
    • Reaction – become anti-aesthetic, be seen as tacky
    • Claim to be aloof from politics
    • Cooptation – Claim that you are the one actually embodying the ideal your political opponents are striving for (and then borrow the existing aesthetics)
  • You can also stake out new aesthetic territory (see: Ayn Rand)
  • Why are things beautiful or ugly? Can we doublecrux on aesthetics?

Arts and Imitation

  • Artistic trends have a life cycle, of creation, expansion, and destruction, or more specifically, the artist, the marketer, and the critic.
    • this isn't limited to art
  • Commerce and Invention are ancient, but scaling up is new, and gives disproportionate power to "expanders" who can take an aesthetic innovation, and capture most of the value of it.
    • This results in creators feeling defrauded by expanders
    • Expanders present themselves as creators, but are not.
  • Scaling up is probably net-good – it lets more people have nicer things – but there is some necessary project in the vicinity of "making amends between creators and expanders" that would be required for creative work not to have the dynamic where scaling up is seen as selling out.
Comment by raemon on Naming the Nameless · 2020-01-13T00:06:48.883Z · score: 5 (2 votes) · LW · GW

Re-reading this for review was a weird roller-coaster. I had remembered (in 2018) my strong takeaway that aesthetics mattered to rationality, and that "Aesthetic Doublecrux" would be an important innovation.

But I forgot most of the second half of the article. And when I got to it, I had such a "woah" moment that I stopped writing this review, went to go rewrite my conclusion in "Propagating Facts into Aesthetics" and then forgot to finish the actual review. The part that really strikes me is her analysis of Scott:
 

Sometimes I can almost feel this happening. First I believe something is true, and say so. Then I realize it’s considered low-status and cringeworthy. Then I make a principled decision to avoid saying it – or say it only in a very careful way – in order to protect my reputation and ability to participate in society. Then when other people say it, I start looking down on them for being bad at public relations. Then I start looking down on them just for being low-status or cringeworthy. 

Finally the idea of “low-status” and “bad and wrong” have merged so fully in my mind that the idea seems terrible and ridiculous to me, and I only remember it’s true if I force myself to explicitly consider the question. And even then, it’s in a condescending way, where I feel like the people who say it’s true deserve low status for not being smart enough to remember not to say it. This is endemic, and I try to quash it when I notice it, but I don’t know how many times it’s slipped my notice all the way to the point where I can no longer remember the truth of the original statement.

Where she responds:
 

Now, I could say "just don't do that, then" -- but Scott of 2009 would have also said he believed in being independent and rational and not succumbing to social pressure.  Good intentions aren't enough. [...]

I think it's much better to try to make the implicit explicit, to bring cultural dynamics into the light and understand how they work, rather than to hide from them.

[...]

If you take something about yourself that's "cringeworthy" and, instead of cringing yourself, try to look at why it's cringeworthy, what that's made of, and dialogue honestly with the perspective that disagrees with you -- then there is, in a sense, nothing to fear.

There's an "elucidating" move that I'm trying to point out here, where instead of defending against an allegation, you say "let's back up a second" and bring the entire situation into view.  It's what double crux is about -- "hey, let's find out what even is the disagreement between us."  Double crux is hard enough with arguments, and here I'm trying to advocate something like double-cruxing aesthetic preferences, which sounds absurdly ambitious.  But: imagine if we could talk about why things seem beautiful and appealing, or ugly and unappealing.  Where do these preferences come from, in a causal sense? Do we still endorse them when we know their origins?  What happens when we bring tacit things into consciousness, when we talk carefully about what aesthetics evoke in us, and how that might be the same or different from person to person?

Unless you can think about how cultural messaging works, you're going to be a mere consumer of culture, drifting in whatever direction the current takes you.

This seems like a key point. I haven't quite refactored it into an "open problem" or "question", but I perhaps feel a bit like Brienne, noting something like "Thinking in terms of 'what are the big open questions' is daunting, but this area feels really interesting as well as important and fruitful."

Comment by raemon on How would we check if "Mathematicians are generally more Law Abiding?" · 2020-01-12T23:56:15.848Z · score: 6 (3 votes) · LW · GW

A subquestion here might be "how do you get reasonably unbiased data on criminality that you're able to cross-check with mathematical ability and IQ?"

I'm guessing that somewhere out there are data-dumps that include criminality, IQ, and occupation, but I don't have much of a sense of how to look for them.

Comment by raemon on Please Critique Things for the Review! · 2020-01-12T22:53:46.128Z · score: 3 (1 votes) · LW · GW

the set of users reviews are open to

Note that all users can write reviews. (It's only voting and nomination that are restricted to highish karma.)

Comment by raemon on Please Critique Things for the Review! · 2020-01-12T21:22:18.645Z · score: 7 (3 votes) · LW · GW

I wanted to highlight something particularly good about Vanessa's recent review of Realism About Rationality – partly answering an implied question of "what if you already commented on a post a year ago and don't have anything new to say?"

I think the Review is a good time to do distillation on past discussion. Vanessa's comment was nice because it took what had previously been a lengthy set of back-and-forths, and turned it into a single, more digestible comment.

Comment by raemon on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-12T21:08:28.772Z · score: 9 (4 votes) · LW · GW

I realize we didn't justify the Voting very hard. Here's my offhand attempt, which maybe we'll roll into the actual post after chatting about it more on Monday.

LessWrong runs, for good or for ill, off the same forces much of the rest of the internet runs on: people who are slightly bored at work. Naturally, posts get rewarded mostly by upvotes and comments, which disproportionately reward things for being exciting and controversial (respectively). These are quite easy to goodhart on.

The Review (in general), and Voting (in particular) are an attempt to do a more nuanced thing – to take the accumulated taste of the LessWrong community, and use it to reflect hard on what was actually good, and then backpropagate that signal through people's more general sense of "what sort of posts are good to write and why?"

Without the Vote, the signal would basically be entirely "what the Mod Team Thinks Was Best", or, if we weren't doing this at all, "what posts were memorable and/or high karma". And this isn't ideal for a few reasons:

  • The Mod Team doesn't have domain expertise in all the areas that posts explore
  • Even though we're putting a lot of work into it, it's still a really daunting project to form opinions on all 75 posts. Having a mixture of people who've looked harder at different posts helps give more coverage of nuanced opinions.
  • Something something wisdom of crowds – each person is biased in some way, or has different knowledge. Getting many people to participate helps counterbalance various knowledge and biases that individuals have.

I meanwhile expect the voting here to be better than usual karma-voting, because it's more comparative. You're not just voting on "this post seems good!" but "this post seems better than this other post". What I found useful for my own voting was being forced to stop and think and build a model of what-sorts-of-posts-are-good-and-why.

Comment by raemon on On Being Robust · 2020-01-12T21:06:58.841Z · score: 5 (2 votes) · LW · GW

I think the important generator is: being robust seems like a solution to this "generalized planning fallacy"[1], where you don't correctly anticipate which corners should not be cut. So, even though you could theoretically excise some wasted motions by cutting pointless corners, you can't tell which corners are pointless. Therefore, a better policy is just not cutting corners by default.

Ah, that does make the point much clearer, thanks!

Comment by raemon on Realism about rationality · 2020-01-12T21:06:10.675Z · score: 5 (2 votes) · LW · GW

Which you could round off to "biologists don't need to know about evolution", in the sense that it is not the best use of their time.

The most obvious thing is understanding why overuse of antibiotics might weaken the effect of antibiotics.

Comment by raemon on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-12T20:53:34.202Z · score: 5 (2 votes) · LW · GW

When it comes to the interface I think it would be great if the interface would show me my past karma votes on the post. It's useful to have the information of how I found the post after reading it the first time at hand when trying to evaluate 75 posts at once.

Yeah, I definitely agree with this. I think we've put about as much work into the UI as we're going to this year (I originally budgeted a couple weeks of time for the Review and ended up spending 1.5 months on it). But, assuming it stays in roughly the same form next year, this is an obvious thing to include.

Comment by raemon on How to Identify an Immoral Maze · 2020-01-12T20:37:47.571Z · score: 5 (2 votes) · LW · GW

Something I note: I think this post almost-but-not-quite-stands alone. I think the rest of the sequence was necessary to make the overall points Zvi is meaning to make, but I think there's a narrower set of points that this post is making that only depend on you roughly having a sense that there's a special kind of middle-management-hell that can exist. 

I think you could summarize that in a few paragraphs and then have a pretty good standalone post, with pointers to the rest of the sequence for people that want to delve into the broader argument.

Comment by raemon on How to Identify an Immoral Maze · 2020-01-12T20:35:24.212Z · score: 7 (3 votes) · LW · GW

This was the post I was most personally looking forward to. I think it lays out of a number of gears that are easy to reason about, that seem useful regardless of whether the entire thesis hangs together. 

The issue with layers-of-management was my most important update from this sequence, and "what to do about that?" seems like one of the most important questions for groups of people trying to put a dent in the universe.

An open question in my mind is something like "how much better is it to found a new organization rather than open up a new department at an existing institution", in particular because it's not obviously better if you end up with an ecosystem that has less legible "management layers" in between organizations. (Zvi notes that investors may-or-may-not count as a management layer and it depends. "What does it depend on?" is my next question)

Comment by raemon on Please Critique Things for the Review! · 2020-01-12T20:13:23.814Z · score: 5 (2 votes) · LW · GW

Helpful thoughts, thanks!

I definitely don't expect the money to be directly rewarding in a standard monetary sense. (In general I think prizes do a bad job of providing expected monetary value). My hope for the prize was more to be a strong signal of the magnitude of how much this mattered, and how much recognition reviews would get.

It's entirely plausible that reviewing is sufficiently unmotivating that actually, the thing to do is pay people directly for it. It's also possible that the prizes should be lopsided in favor of reviews. (This year the whole process was a bit of an experiment so we didn't want to spend too much money on it, but it might be that just adding more funding to subsidize things is the answer.)

But I had some reason to think "actually things are mostly fine, it's just that the Review was a new thing and not well understood, and communicating more clearly about it might help."

My current sense is:

  • There have been some critical reviews, so there is at least some latent motivation to do so.
  • There are people on the site who seem to be generally interested in giving critical feedback, and I was kinda hoping that they'd be up for doing so as part of a broader project. (Some of them have but not as many as I'd hoped. To be fair, I think the job being asked for the 2018 Review is harder than what they normally do)
  • One source of motivation I'd expected to tap into (which I do think has happened a bit) is "geez, that might be going into the official Community Recognized Good Posts Book? Okay, before it wasn't worth worrying about Someone Being Wrong On the Internet, but now the stakes are raised and it is worth it."
Comment by raemon on Please Critique Things for the Review! · 2020-01-12T18:11:36.759Z · score: 3 (1 votes) · LW · GW

[edit: I re-read your comment and mostly retract mine, but am thinking about a new version of it]

Comment by raemon on Please Critique Things for the Review! · 2020-01-12T05:37:42.933Z · score: 5 (2 votes) · LW · GW

Everyone will get contacted about inclusion in the book with the opportunity to opt out. 

Comment by raemon on Please Critique Things for the Review! · 2020-01-12T04:58:20.953Z · score: 5 (2 votes) · LW · GW

Agree with these reasons this is hard. A few thoughts (this is all assuming you're the sort of person who basically thinks the Review makes sense as a concept and wants to participate; obviously this may not apply to Mark):

Re: Prestige: I don't know if this helps, but to be clear, I expect to include good reviews in the Best of 2018 book itself. I'm personally hoping that each post comes with at least one review, and in the event that there are deeply substantive reviews, those may be given equivalent top billing. I'm not 100% sure what will happen with reviews in the online sequence.

(In fact, I expect reviews to be a potentially easier way to end up in the book than by writing posts, since the target area is more clearly specified.)

"It's Hard to Review Posts"

This is definitely true. Often what needs reviewing is less like "author made an unsubstantiated claim or logical error" and more like "is the entire worldview that generated the post, and the connections the post made to the rest of the world, reasonable? Does it contain subtle flaws? Are there better frames for carving up the world than the one in the post?"

This is a hard problem, and doing a good job is honestly harder than one month's worth of work. But, this seems like a quite important problem for LessWrong to be able to solve. I think a lot of this site's value comes from people crystallizing ideas that shift one's frame, in domains where evidence is hard to come by. "How to evaluate that?" feels like an essential question for us to figure out how to answer.

My best guess for now is for reviews to not try to fully answer "does this post check out?" (in cases where that depends on a lot of empirical questions that are hard to check, or where "is this the right ontology?" is hard to answer). But, instead, to try to map out "what are the questions I would want answered, that would help me figure out if this post checked out?"

(An example of this is Eli Tyre's "Has there been a memetic collapse?" question, relating to Eliezer's claims in Local Validity.)

Comment by raemon on We run the Center for Applied Rationality, AMA · 2020-01-11T21:55:27.102Z · score: 5 (2 votes) · LW · GW

I think Anna roughly agrees (hence her first comment), she was just answering the question of "why hasn't this already been done?"

I do think adversarial pressure (i.e., if you rule against a person, they will try to sow distrust of you, and it's very stressful and time-consuming) is a reason that "reasonably doable" isn't really a fair description. It's doable, but quite hard, and a big commitment that I think is qualitatively different from other hard jobs.

Comment by raemon on Criticism as Entertainment · 2020-01-11T19:18:50.507Z · score: 13 (5 votes) · LW · GW

Note that Eliezer has regrets about that:

My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.

Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.

Comment by raemon on Being a Robust Agent · 2020-01-11T03:14:21.185Z · score: 7 (3 votes) · LW · GW

I'm writing my self-review for this post, and in the process attempting to more clearly define what I mean by "Robust Agent" (possibly finding a better term for it)

The concept here is pointing at four points:

  • Strategy of deliberate agency – not just being a kludge of behaviors, but having goals and decision-making that you reflectively endorse
  • Corresponding strategy of Gears-Level-Understanding of yourself (and others, and the world, but yourself-in-particular)
  • Goal of being able to operate in an environment where common wisdom isn't good enough, and/or you expect to run into edge cases.
  • Goal of being able to coordinate well with other agents.

"Robustness" mostly refers to the third and fourth points. It's possible the core strategy might actually make more sense to call "Deliberate Agency". The core thing is that you're deciding on purpose what sort of agent to be. If the environment wasn't going to change, you wouldn't care about being robust.

Or maybe, "Robust Agency" makes sense as a thing to call one overall cluster of strategies, but it's a subset of "Deliberate Agency."

Comment by raemon on Caring less · 2020-01-11T01:28:53.978Z · score: 5 (2 votes) · LW · GW

I found this slightly hard to parse, would be interested in someone writing this again... maybe just in slightly different words, maybe with real examples instead of A/B/C/D.

Comment by raemon on Realism about rationality · 2020-01-10T19:36:34.673Z · score: 3 (1 votes) · LW · GW

I guess the main thing I want is an actual tally of "how many people definitively found this post to represent their crux" vs. "how many people think that this represented other people's cruxes."

Comment by raemon on On Being Robust · 2020-01-10T05:57:50.797Z · score: 8 (3 votes) · LW · GW

Hmm, this all roughly makes sense, but I feel like there was some kind of important generator here that you were aiming to convey that I didn't get. 

I think you should probably do most of these things, but I'm not sure which order to do them in. And meanwhile, I think that so long as you're afraid of being unmasked, part of the problem is the fear itself?

Comment by raemon on Realism about rationality · 2020-01-10T05:47:07.418Z · score: 7 (3 votes) · LW · GW

Hmm, I am interested in some debate between you and Daniel Filan (just naming someone who seemed to describe himself as endorsing rationality realism as a crux, although I'm not sure he qualifies as a "miri person")

Comment by raemon on What is Life in an Immoral Maze? · 2020-01-10T05:44:02.884Z · score: 3 (1 votes) · LW · GW

Okay, that sentiment makes sense (although "nothing whatsoever to do with competition" still sounds false: even if the active ingredient is the manipulation, and it wasn't necessary to hypothesize "super-perfect competition", regular competition still clearly plays a role).

Comment by raemon on What is Life in an Immoral Maze? · 2020-01-10T05:25:37.065Z · score: 3 (1 votes) · LW · GW

I don't know what a more representative company size was; I'm mostly just guessing at the causal factors leading to Zvi summarizing it as "middle management."

I think the model requires 2 things:

  1. being promoted far enough into the system that there's a basic assumption of competency across all dimensions
  2. being surrounded, in both directions, by at least 2 layers of management (separating you from anyone who's got more direct contact with reality). 

The second bit requires 5 levels (level 1 is in direct contact with object-level workers, and level 5 is in contact with the CEO, who at least hopefully cares about the bigger picture; but level 3 is two steps removed from either). I think it makes sense for this to cause epistemic warping, whether or not it comes with any pathologies relating to competition.

The first bit... probably depends on your industry and culture. My made-up-ass-pull-guess is that you need more like 4 levels of promotion before there's a plausible assumption that "everyone is competent" (so, combined with #2, companies with around seven layers). 

Comment by raemon on Being a Robust Agent · 2020-01-10T02:30:41.948Z · score: 5 (2 votes) · LW · GW

I guess the hangup is in pinning down "when things are actually good ideas in expectation", given that it's harder to know that without either lots of experience or clear theoretical underpinnings.

I think one of the things I was aiming for with Being a Robust Agent is "you set up the longterm goal of having your policies and actions have knowably good outcomes, which locally might be a setback for how capable you are, but allows you to reliably achieve longer term goals."

Comment by raemon on Being a Robust Agent · 2020-01-10T02:16:50.168Z · score: 3 (1 votes) · LW · GW

(not sure if this was clear, but I don't feel strongly about which definition to use, I just wanted to disambiguate between definitions people might have been using)

I think that Eliezer's other usage of "instrumental rationality" points to fields of study for theoretical underpinning of effective action.

This sounds right-ish (i.e. this sounds like something he might have meant). When I said "use probability and game theory and stuff" I didn't mean "be a slave to whatever tools we happen to use right now", I meant sort of as examples of "things you might use if you were trying to base your decisions and actions off of sound theoretical underpinnings."

So I guess the thing I'm still unclear on (people's common usage of words) is: do most LWers think it is reasonable to call something "instrumentally rational" if you just sorta went with your gut without ever doing any kind of reflection (assuming your gut turned out to be trustworthy)?

Or are things only instrumentally rational if you had theoretical underpinnings? (Your definition says "no", which seems fine. But it might leave you with an awkward distinction between "instrumentally rational decisions" and "decisions rooted in instrumental rationality.")

I'm still unsure if this is dissolving confusion, or if the original post still seems like it needs editing.

Comment by raemon on What is Life in an Immoral Maze? · 2020-01-10T02:04:03.660Z · score: 3 (1 votes) · LW · GW

I think an issue was that, in a 25 tier company, "middle management" (i.e. "tier 13?") is above what one might colloquially refer to as "middle management."

Comment by raemon on What is Life in an Immoral Maze? · 2020-01-10T02:02:58.782Z · score: 3 (1 votes) · LW · GW

Even the typical usage of "invested in a job" suggests a reason that someone would not want to be out of the job, as opposed to forcing them to stay when they do want to be out.

Okay, I think I get what you're saying more here. But the distinction that feels important is something like: "if a system manipulates you in such a way that, initially, you thought you were getting a good deal, but upon reflection you got a bad deal and now it's hard to change your mind about that deal", that's something that feels more appropriate to me to treat as an artificial barrier-to-exit, than as a mere sunk cost + opportunity cost.

I think there's a spectrum of barriers-to-exit, ranging from mild trivial inconveniences to "literally a slave owner will shoot you if you try to escape." I think most jobs have some nontrivial barrier in the form of inertia/inconvenience (which indeed affects the job market).

I think there's some flaw in the term "super-perfect-competition", in that it implies a spectrum from imperfect to perfect to super-perfect, when in fact situations can be a mixture of "how perfect the competition is" and "how high the barriers to exit are", with varying effects depending on how high each one is. (At the beginning, Zvi notes that [upper]-middle-management is nowhere near "Contract Drafting Em" levels of bad, but still bad enough to see particular effects.)

I'm not actually that sold on the claim, but the barrier to exit thing still seems like a meaningful part of the model.

Comment by raemon on What is Life in an Immoral Maze? · 2020-01-10T01:49:59.806Z · score: 3 (1 votes) · LW · GW

I think the model here is intended to apply specifically to upper-senior-management. (I think you touched on this elsethread; I think it was basically a mistake not to focus on that more specifically.)

Comment by raemon on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-10T01:17:44.166Z · score: 5 (2 votes) · LW · GW

I think it's a particularly weak signal when you're trying to evaluate 75 posts at once.

Comment by raemon on What are the open problems in Human Rationality? · 2020-01-10T01:10:33.147Z · score: 5 (2 votes) · LW · GW

What sort of standards for intellectual honesty make sense, given that:

  • There's a large number of free variables in what information you present to people. You can be quite misleading while conveying purely true information. "Not lying" doesn't seem like a sufficient norm.
  • It's hard to build norms around complex behavior. Humans have an easier time following (and flagging violations of) bright lines, compared to more nuanced guidelines.
Comment by raemon on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2020-01-10T00:58:42.458Z · score: 9 (4 votes) · LW · GW

Curated.

This post crystallized what I now think of as one of the major open problems in rationality, and in the (related but distinct) domain of intellectual integrity. While it doesn't propose solutions, I think clearly articulating a problem, and becoming deconfused about it, is often a good first step for tackling hard problems.

Two criticisms I'd make of this post are:

  • It'd be slightly nicer if it had a crisp summary of the problem at the end. I felt like I understood the "open problem of 'real' honesty" by the end of the post, but there wasn't a succinct paragraph I could copy into another thread to explain it. (I think this was somewhat complicated by the final paragraphs aiming more to tie this into a critique of Meta-Honesty than to spell out the open problem.)
  • Relatedly... I found this underwhelming as a critique of Meta-Honesty. The fact that Meta-Honesty does not solve the most important open problem in honesty (which, notably, neither does this post!) doesn't say much about whether Meta-Honesty is still useful for other reasons. I think Zack underestimates how important clear norms around Not-Lying are. And meanwhile, when you're in a confusing domain without a way forward, hacking away at the edges is an important tool to have in your toolbox.
Comment by raemon on What is Life in an Immoral Maze? · 2020-01-10T00:11:21.223Z · score: 3 (1 votes) · LW · GW

I think trivial inconveniences are more than enough to count as a significant barrier to exit, and these are not trivial inconveniences being discussed. Small amounts of thought / education fail to change behavior all the time.