Open Thread July 2019

post by ryan_b · 2019-07-03T15:07:40.991Z · LW · GW · 91 comments

If it’s worth saying, but not worth its own post, you can put it here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], and seeing if there are any meetups in your area [? · GW].

The Open Thread sequence is here [? · GW].

91 comments

Comments sorted by top scores.

comment by Perception23 · 2019-07-14T17:35:00.294Z · LW(p) · GW(p)

I'm brand new to LW. It's refreshing to be able to discuss things intelligently; I haven't come across many places on the internet where that happens. I'm excited to hear all your references to some of my favorite, inspiring, interesting people. I stumbled on the site while trying to Google a scarcely answered question, and I don't even remember what it was because I went down the rabbit hole, reading post after post on so many different subjects. I was even recently discussing the AI that won at multiplayer poker with someone, talking about what it meant for the future of AI (exponential learning) and the problems it could potentially solve in x amount of years.

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-14T17:46:36.233Z · LW(p) · GW(p)

Welcome! :)

comment by ryan_b · 2019-07-03T15:19:43.856Z · LW(p) · GW(p)

I've been considering another run at Anki or similar, because I simultaneously found a new segment of a field to learn about and also because I am going to have to pivot my technical learning at work soon.

Reading Michael Nielsen's essay on the subject, I noticed that he makes frequent references to Feynman; I am wondering about the utility of using Anki to remember complex problems in better detail. The motivation is the famous story about Feynman where he always kept a bunch of his favorite open problems in mind, and then whenever he encountered a new technique he would test it against each problem. In this way, allegedly, he made several breakthroughs regarded as brilliant.

It feels to me like the point might be more broad and fundamental than mathematical techniques; I suspect if I could better articulate and memorize an important problem, I could make it more a part of my perspective rather than something I periodically take a crack at. If I can accomplish this, I expect I will be more likely to even notice relevant information in the first place.

Replies from: mr-hire, ryan_b
comment by Matt Goldenberg (mr-hire) · 2019-07-03T16:38:43.131Z · LW(p) · GW(p)

I haven't thought about doing this with open problems, but I really like the idea and feel like I've done it implicitly with a number of problems important to me (coordination issues, Moloch, etc.).

I do, however, do this explicitly when I learn new solutions, making sure I integrate them into my world model, can instinctively frame problems using those models, and tie them to other potential solutions.

It feels like beginning to do this explicitly with problems is a large next step that could really take effectiveness to the next level. Thanks!

comment by ryan_b · 2019-07-03T17:45:09.551Z · LW(p) · GW(p)

Short mashup from two sources:

Nielsen proposes an informal model of mastery:

...their prior learning has given them better chunking abilities, and so situations most people would see as complex they see as simple, and they find it much easier to reason about.
...
In other words, having more chunks memorized in some domain is somewhat like an effective boost to a person's IQ in that domain.

where the chunks in question fit into the 7 ± 2 slots of working memory. Relatedly, there is Alan Kay's quip:

"A change in perspective is worth 80 IQ points."

Which is to say, the new perspective provides a better way to chunk complex information. In retrospect this feels obvious, but beforehand my model of multiple perspectives was mostly a matter of eliminating blind spots each might have. I'll have to integrate the contains-better-chunks possibility, which basically means that seeking out new perspectives is more valuable than I previously thought.

comment by brunoparga · 2019-07-26T14:50:17.616Z · LW(p) · GW(p)

Hi, I'm Bruno from Brazil. I have been involved with stuff in the Lesswrongosphere since 2016. While I was in the US, I participated in the New Hampshire and Boston LW meetup groups, with occasional presence in SSC and EA meetups. I volunteered at EAG Boston 2017 and attended EAG London later that year. I did the CFAR workshop of February 2017 and hung out at the subsequent alumni reunion. After having to move back to Brazil I joined the São Paulo LW and EA groups and tried, unsuccessfully, to host a book club to read RAZ over the course of 2018. (We made it as far as mid-February, I think.)

I became convinced of the need to sort out the AI alignment problem after first reading RAZ. I knew I needed to level up on lots of basic subjects before I could venture into doing AI safety research. Because doing so could also have instrumental value to my goal of leaving Brazil for good, I studied at a Web development bootcamp and have been teaching there for a year now; I feel this has given me the confidence to acquire new tech skills.

I intend to start posting here in order to clarify my ideas, solve my confusion and eventually join the ranks of the AI safety researchers. My more immediate goal is to be able to live somewhere other than Brazil while doing some sort of relevant work (even if it is just self-study or something not directly related to AI safety that still allows me to study on the side, like my current gig here does).

Replies from: Alexei
comment by Alexei · 2019-08-02T23:42:49.001Z · LW(p) · GW(p)

Sounds great. Welcome!

comment by Zack_M_Davis · 2019-07-16T06:12:08.084Z · LW(p) · GW(p)

In order to combat publication bias, I should probably tell the Open Thread about a post idea that I started drafting tonight but can't finish because it looks like my idea was wrong. Working title: "Information Theory Against Politeness." I had drafted this much—

Suppose the Quality of a Blog Post is an integer between 0 and 15 inclusive, and furthermore that the Quality of Posts is uniformly distributed. Commenters can roughly assess the Quality of a Post (with some error in either direction) and express their assessment in the form of a Comment, which is also an integer between 0 and 15 inclusive. If the True Quality of a post is q, then the assessment expressed in a Comment on that Post follows the probability distribution P(Comment = c | Quality = q) = 1/3 for c ∈ {q − 1, q, q + 1} (mod 16), and 0 otherwise.

(Notice the "wraparound" between 15 and 0: it can be hard for a humble Commenter to tell the difference between brilliance-beyond-their-ken, and utter madness!)

The entropy of the Quality distribution is lg(16) = 4 bits: in order to inform someone about the Quality of a Post, you need to transmit 4 bits of information. Comments can be thought of as a noisy "channel" conveying information about the post.

The mutual information between a Comment, and the Post's Quality, is equal to the entropy of the distribution of Comments (which is going to be 4 bits, by symmetry), minus the entropy of a Comment given the Post's Quality (which is lg(3) ≈ 1.58). So the "capacity" of a single Comment is around 4 − 1.58 = 2.42 bits. On average, in expectation across the multiverse, &c., we only need to read 4/2.42 ≈ 1.65 Comments in order to determine the Quality of a Post. Efficient!
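
Here is a minimal sketch, in Python, of the computation just described, assuming the noise model above (Quality uniform on 0-15, Comment uniform over the three neighboring values mod 16); the printed numbers match the lg(3) ≈ 1.58 conditional entropy and the ≈ 2.42-bit capacity:

```python
# A sketch of the channel calculation (assumed model: Quality uniform on 0..15,
# Comment uniform over {q-1, q, q+1} mod 16 -- the "wraparound" noise described above).
from math import log2

N = 16
p_q = 1 / N  # uniform prior over Quality

def p_c_given_q(c, q):
    """Probability that a Commenter reports c when the True Quality is q."""
    return 1 / 3 if (c - q) % N in (N - 1, 0, 1) else 0.0

joint = {(q, c): p_q * p_c_given_q(c, q) for q in range(N) for c in range(N)}
p_c = {c: sum(joint[q, c] for q in range(N)) for c in range(N)}

H_C = -sum(p * log2(p) for p in p_c.values() if p > 0)        # entropy of Comments: 4 bits
H_C_given_Q = -sum(p * log2(p_c_given_q(c, q))                # conditional entropy: lg(3)
                   for (q, c), p in joint.items() if p > 0)
print(H_C, H_C_given_Q, H_C - H_C_given_Q)  # 4.0, ~1.585, ~2.415 bits of mutual information
```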

Now suppose the Moderators introduce a new Rule: it turns out Comments below 10 are Rude and hurt Post Authors' Feelings. Henceforth, all Comments must be an integer between 10 and 15 inclusive, rather than between 0 and 15 inclusive!

... and then I was expecting that the restricted range imposed by the new Rule would decrease the expressive power of Comments (as measured by mutual information), but now I don't think this is right: the mutual information is about the noise in Commenters' perceptions, not the "coarseness" of the "buckets" in which it is expressed: lg(16) − lg(3) has the same value as lg(8) − lg(1.5).

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-16T06:23:06.803Z · LW(p) · GW(p)

It seems to me clear that, if Commenters have a policy of reporting the maximum of their perception and 10, then there is now less mutual information between the commenter's report and the actual post quality than there was previously. In particular, you now can't distinguish between a post of quality 2 and a post of quality 5 given any number of comments, whereas you could previously.
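
A quick numerical check of this, under the same assumed noise model as in the sketch above, with Commenters reporting max(perception, 10); the mutual information drops relative to the unrestricted channel:

```python
# Same assumed noise model, but Commenters now report max(perception, 10): Qualities
# 1 through 9 all collapse onto a Comment of 10, so e.g. quality 2 and quality 5
# become indistinguishable, and the mutual information drops.
from math import log2

N = 16

def comment_dist(q, floor=None):
    """Distribution of the reported Comment for a Post of Quality q."""
    dist = {}
    for d in (-1, 0, 1):                     # perception: uniform over q-1, q, q+1 (mod 16)
        c = (q + d) % N
        if floor is not None and c < floor:  # Rude perceptions get reported as the floor
            c = floor
        dist[c] = dist.get(c, 0.0) + 1 / 3
    return dist

def mutual_information(floor=None):
    joint = {(q, c): p / N for q in range(N) for c, p in comment_dist(q, floor).items()}
    p_c = {}
    for (q, c), p in joint.items():
        p_c[c] = p_c.get(c, 0.0) + p
    return sum(p * log2(p / (p_c[c] / N)) for (q, c), p in joint.items() if p > 0)

print(mutual_information())          # ~2.415 bits with the full 0-15 range
print(mutual_information(floor=10))  # noticeably less once reports are clipped at 10
```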

comment by roland · 2019-07-07T13:14:13.527Z · LW(p) · GW(p)

Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can’t attend the workshops.

Replies from: tetraspace-grouping
comment by Tetraspace (tetraspace-grouping) · 2019-07-13T21:25:21.690Z · LW(p) · GW(p)

There's no official, endorsed CFAR handbook that's publicly available for download. The CFAR handbook from summer 2016, which I found on libgen, warns

While you may be tempted to read ahead, be forewarned - we've often found that participants have a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding. Many of the explanations here are intentionally approximate or incomplete, because we believe this content is best transmitted in person. It helps to think of this handbook as a companion to the workshop, rather than as a standalone resource.

which I think is still their view on the matter.

I have heard that they would be more comfortable with people learning rationality techniques in-person from a friend, so if you know any CFAR alumni you could ask them (they'd probably also have a better answer to your question).

Replies from: SaidAchmiz, Zack_M_Davis
comment by Said Achmiz (SaidAchmiz) · 2019-07-13T22:40:14.899Z · LW(p) · GW(p)

This is, however, a position which has always made me extremely suspicious of CFAR’s output. I wish they would, at the very least, acknowledge what a huge red flag this sort of thing is.

Replies from: Raemon
comment by Raemon · 2019-07-16T19:55:18.467Z · LW(p) · GW(p)

Noting that I do agree with this particular claim.

I see the situation as:

  • There are, in fact, good reasons that it's hard to communicate and demonstrate some things, and that hyperfocus on "what can be made legible to a third party" results in a lot of looking under street lamps [LW · GW], rather than where the value necessarily is. I have very different priors than Said on how suspicious CFAR's actions are, as well as different firsthand experience that leads me to believe there's a lot of value in CFAR's work that Said presumably dismisses.
    • [this is not "zero suspicion", but I bet my suspicion takes a very different shape than Said's]
  • But, it's still important for group rationality to have sound game theory re: what sort of ideas gain what sort of momentum. An important meta-agreement/policy is for people and organizations to be clear about the epistemic status of their ideas and positions.
    • I think it takes effort to maintain the right epistemic state, as groups and as individuals. So I think it would have been better if CFAR explicitly stated in their handbook, or in a public blogpost*, that "yes, this limits how much people should trust us, and right now we think it's more important for us to focus on developing a good product than trying to make our current ideas legibly trustworthy."
    • As habryka goes into here [LW(p) · GW(p)], I think there are some benefits for researchers to focus internally for a while. The benefits are high bandwidth communication, and being able to push ideas farther and faster than they would if they were documenting every possibly-dead-end-approach at every step of the way. But, after a few years of this, it's important to write up your findings in a more public/legible way, both so that others can critique it and so that others can build on it.
      • CFAR seems overdue for this. But, also, CFAR has had lots of staff turnover by now and it's not that useful to think in terms of "what CFAR ought to do" vs "what people who are invested in the CFAR paradigm should do." (The Multi-Agent Models sequence [? · GW] is a good step here. I think good next steps would be someone writing up several other aspects of the CFAR paradigm with a similar degree of clarity, and good steps after that would be to think about what good critique/evaluation would look like)
      • I see this as needing something of a two-way contract, where:
        • Private researchers credibly commit to doing more public writeups (even though it's a lot of work that often won't result in immediate benefit), and at the very least writing up quick, clear epistemic statuses of how people should relate to the research in the meanwhile.
        • Third party skeptics develop a better understanding of what sort of standards are reasonable, and "cutting researchers exactly the right amount of slack." I think there's good reason at this point to be like "geez, CFAR, can you actually write up your stuff and put reasonable disclaimers on things and not ride a wave of vague illegible endorsement?" But my impression is that even if CFAR did all the right things and checked all the right boxes, people would still be frustrated, because the domain CFAR is trying to excel at is in fact very difficult, and a rush towards demonstrability wouldn't be useful. And I think good criticism needs to understand that.

*I'm not actually sure they haven't made a public blogpost about this.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-16T20:51:03.474Z · LW(p) · GW(p)

“It is very difficult to find a black cat in a dark room—especially when the cat is not there.”

I’ve quoted this saying before, more than once, but I think it’s very applicable to CFAR, and I think it is long past time we acknowledged this.

The point is this: yes, it is possible that what CFAR is trying to excel at is in fact very difficult. It could, indeed, be that CFAR’s techniques are genuinely difficult to teach—and forget about trying to teach them via (non-personally-customized!) text. It could be, yes, that CFAR’s progress toward the problem they are attacking is, while quite real, nonetheless fiendishly difficult to demonstrate, in any sort of legible way. All of these things are possible.

But there is another possibility. It could be that what CFAR is trying to excel at is not very difficult, but impossible. And because of this, it could be that CFAR’s techniques are not hard to teach, but rather there is nothing there to be taught. It could be that CFAR’s progress toward the problem they are attacking, is not hard to demonstrate, but rather nonexistent.

What you say is not, as far as it goes, wrong. But it seems to be predicated on the notion that there is a cat in the room—hidden quite well, perhaps, but nonetheless findable; and also, secondarily, on the notion that what CFAR is doing to look for said cat is even approximately the sort of thing which has any chance at all of finding it.

But while it may have been sensible to start (fully 10 years ago, now!) with the assumption of an existing and definitely present cat, that was then; we’ve since had 10 years of searching, and no apparent progress. Now, it is a very large room, and the illumination is not at all bright, and we can expect no help from the cat, in our efforts to find it. So perhaps this is the sort of thing that takes 10 years to produce results, or 50, or 500—who knows! But the more time passes with no results, the more we should re-examine our initial assumption. It is even worth considering the possibility of calling off the search. Maybe the cat is just too well-hidden (at least, for now).

Yet CFAR does claim that they’ve found something, don’t they? Not the whole cat, perhaps, but its tail, at least. But they can’t show it to us, because, apparently—contrary to what we thought we were looking for, and expecting to find—it turns out that our ideas about what sort of thing a cat is, and what it even means to find a cat, and how one knows when one has found it, were mistaken!

Well, I’ve worn out this analogy enough, so here’s my conclusion:

As far as I can tell, CFAR has found nothing (or as near to nothing as hardly matters). The sorts of things they seem to be claiming to have found (to the extent that they’re being at all clear) don’t really seem anything like the sorts of things they set out to look for. And that extent is not very large, because they’re being far, far more secretive than seems to me to be remotely warranted. And CFAR’s behavior in the past has not exactly predisposed me to believe that they are, or will be, honest in reporting their progress (or lack thereof).

If CFAR has found nothing, because the goal is ambitious and progress is difficult, they should say so. “We worked on figuring out how to make humans more rational for nearly a decade but so far, all we have is a catalog of a bunch of things that don’t really work”—that is understandable. CFAR’s actual behavior, not so much—or not, at least, without assuming dishonesty or other deficits of virtue.

P.S.:

CFAR has had lots of staff turnover by now

This is rather suspicious all on its own.

Replies from: Raemon, ESRogs
comment by Raemon · 2019-07-16T21:16:10.616Z · LW(p) · GW(p)

On the "is there something worth teaching there" front, I think you're just wrong, and obviously so from my perspective (since I have, in fact, learned things. Sunset at Noon [LW · GW] is probably the best writeup of what CFAR-descended things I've learned and why they're valuable to me).

This doesn't mean you're obligated to believe me. I put moderate probability on "There is variation on what techniques are useful for what people, and Said's mind is shaped such that the CFAR paradigm isn't useful, and it will never be legible to Said that the CFAR paradigm is useful." But, enough words have been spent trying to demonstrate things to you that seem obvious to me that it doesn't seem worth further time on it.

The Multi-Agent Model of Mind [? · GW] is the best current writeup of (one of) the important elements of what I think of as the CFAR paradigm. I think it'd be more useful for you to critique that than to continue this conversation.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-16T22:08:06.495Z · LW(p) · GW(p)

I have read your post Sunset at Noon [LW · GW]; I do not recall finding much to comment on at all, there (I didn’t really get the impression that it was meant to be a “writeup of … CFAR-descended things”!), but I will re-read it and get back to you.

As for the multi-agent model of mind, I have already critiqued it, though, admittedly, in a haphazard way—a comment here, a comment there… I have not bothered to critique more in-depth, or in a more targeted way, because… well, to be frank, because of the many times when I’ve attempted to really critique anything, on the new Less Wrong, and been rebuffed for being insufficiently “prosocial”, insufficiently “nice”, etc., etc.

Should I understand your suggestion to mean that if I post critiquing comments on some of the posts in the sequence you linked, they will not spawn lengthy threads about niceness, will not be met with moderator comments about whether I am inserting enough caveats about how of course I respect the OP very much, etc.?

I’m sorry if this seems blunt or harsh; I really don’t mean to be antagonistic toward you in particular (or anyone, really)! But if you reply to my critical comments by saying “but we have put forth our best effort; the ball’s in your court now” (an entirely fair response, if true!), then before I run with said ball, I need to know it’s not a waste of my time.

And, to be clear, if you respond with “no, all the politeness rules still apply, you have to stick to them if you want to critique these writeups [on Less Wrong]”, then—fair enough! But I’d like to know it in advance. (In such a case, I of course will not post any such critiques here; I may post them elsewhere, or not at all, as I find convenient.)

Replies from: Kaj_Sotala, Raemon
comment by Kaj_Sotala · 2019-07-17T07:30:20.801Z · LW(p) · GW(p)

I think that your past criticisms have been useful, and I've explicitly tried to take them into account in the sequence. E.g. the way I defined subagents in the first post of the sequence, was IIRC in part copy-pasted from an earlier response to you, and it was your previous comment that helped/forced me to clarify what exactly I meant. I'd in fact been hoping to see more comments from you on the posts, and expect them to be useful regardless of the tone.

comment by Raemon · 2019-07-16T22:29:34.838Z · LW(p) · GW(p)

I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts. But:

a) if I had written the posts, I would see them as "yes, now these are actually at the stage where the sort of critique Said does is more relevant." I still think it'd be most useful if you came at it from the frame of "What product is Kaj trying to build [LW · GW], and if I think that product isn't useful, are there different products that would better solve the problem that Kaj's product is trying to solve?"

b) relatedly, if you have criticism of the Sunset at Noon content I'd be interested in that. (this is not a general rule about whether I want critiques of that sort. Most of my work is downstream of CFAR paradigm stuff, and I don't want most of my work to turn into a debate about CFAR. But it does seem interesting to revisit SaN through the "how content that Raemon attributes to CFAR holds up to Said" lens)

c) Even if Kaj prefers you not to engage with them (or to engage only in particular ways), it would be fine under the meta-rules for you to start a separate post and/or discussion thread for the purpose of critiquing. I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.

Replies from: SaidAchmiz, Raemon
comment by Said Achmiz (SaidAchmiz) · 2019-07-16T22:57:28.001Z · LW(p) · GW(p)

I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts.

Sure.

I still think it’d be most useful if you came at it from the frame of “What product is Kaj trying to build, and if I think that product isn’t useful, are there different products that would better solve the problem that Kaj’s product is trying to solve?”

Sure, but what if (as seems likely enough) I think there aren’t any different products that better solve the problem…?

I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.

So, just as a general point (and this is related to the previous paragraph)…

The problem with the norm of writing critiques as separate posts, is that it biases (or, if you like, nudges) critiques toward the sort that constitute points or theses in their own right.

In other words, if you write a post, and I comment to say that your post is dumb and you are dumb for thinking and writing this and the whole thing is wrong and bad (except, you know, in a tactful way), well, that is, at least in some sense, appropriate (or we might say, it is relevant, in the Gricean sense); you wrote a post, I posted a comment about that post. Fine.

But if you write a post, and I write a post of my own the entirety of whose content and message is “post X which Raemon just wrote is wrong and bad etc.”, well, what is that? Who writes a whole post just to say that someone else is wrong? It seems… odd; and also, antagonistic, somehow. “What was the point of this post?”, commenters may inquire; “Surely you didn’t write a whole post just to say that another post is wrong? What’s your take, then? What Raemon said is wrong, but then what’s right?”—and what do I answer? “I have no idea what’s right, but that is wrong, and… that’s all I wanted to say, really.” As I said, this simply looks odd (socially speaking). (And certainly one is much less likely to get any traction or even engagement—except the dubious sort of engagement; the kind which is all meta, no substance.)

And the thing is, many of my critiques (of CFAR stuff, yes, and of many other things that are discussed in rationalist spaces) boil down to just “what you are saying is wrong”. If you ask me what I think the right answer is, in such cases, I will have nothing to offer you. I don’t know what the right answer is. I don’t think you know what the right answer is, either; I don’t think anyone has the right answer. Beyond saying that (hypothetical) you are wrong, I often really don’t have much to add.

But such criticisms are extremely important! Refraining from falsely believing ourselves to have the right answer, or even a good answer, or even “the best answer so far”, when what we actually have is simply wrong—this is extremely important! It is very tempting to think that we’ve found an answer, when we have not. Avoiding this trap is what allows us to keep looking, and eventually (one hopes!) find the actual right answer.

I understand that you are coming at this from a view in which an idea that someone proposes, a “framework”, etc., has value, and we take that idea and we build on it; or perhaps we say “but what about this instead”, and we offer our own idea or framework, and maybe we synthesize them, and together, cooperatively, we work toward the answer. Under that view, what you say makes sense.

My commentary (not quite an objection, really) is just that it’s crucial to instead be able to say “no, actually, that is simply wrong [because reasons X Y Z]”, and have that be the end of (that branch of) the conversation. You had an idea, that idea was wrong, end of story, back to the drawing board.

That having been said, I do find your response entirely reasonable and satisfactory, as far as this specific case goes; thank you. I will reread both your post and Kaj’s sequence, and comment on both (the latter, contingent on Kaj’s approval).

Replies from: Raemon
comment by Raemon · 2019-07-29T20:32:37.635Z · LW(p) · GW(p)

And the thing is, many of my critiques (of CFAR stuff, yes, and of many other things that are discussed in rationalist spaces) boil down to just “what you are saying is wrong”. If you ask me what I think the right answer is, in such cases, I will have nothing to offer you. I don’t know what the right answer is. I don’t think you know what the right answer is, either; I don’t think anyone has the right answer. Beyond saying that (hypothetical) you are wrong, I often really don’t have much to add.

If all you have to say is "this seems wrong", that... basically just seems fine. [edit to clarify: I mean making a comment, not a post].

I don't expect most LessWrong users would get annoyed at that. The specific complaint we've gotten about you has more to do with the way you Socratic-ly draw people into lengthy conversations that don't acknowledge the difference in frame, and leave people feeling like it was a waste of time. (This has more to do with implicitly demanding asymmetric effort between you and the author, than about criticism).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-29T20:47:07.998Z · LW(p) · GW(p)

I’m not quite sure what you’re saying. Yes, no doubt, no one’s complained about me doing the thing I described—because, obviously, I haven’t ever done it! You say that it “basically seems just fine”, but… I don’t expect that it would actually seem “just fine” if I (or anyone else) were to actually do it.

Of course, I could be wrong. What are three examples of posts that others have written, that boil down simply to “other post X, written by person Y, is wrong”, and which have gotten a good reception? Perhaps if we did a case study or three, we’d gain some more insight into this thing.

(As for the “specific complaint”—there I just don’t know what you mean. Feel free to elaborate, if you like.)

Replies from: Raemon
comment by Raemon · 2019-07-29T22:35:51.336Z · LW(p) · GW(p)

Slight clarification – I think I worded the previous comment confusingly. I meant to say, if the typical LessWrong user wrote a single comment in reply to a post saying "this seems wrong", I would expect that to basically be fine.

I only recommend the "create a whole new post" thing when an author specifically asks you to stop commenting.

Replies from: Raemon, SaidAchmiz
comment by Raemon · 2019-07-29T22:38:15.889Z · LW(p) · GW(p)

(In some cases I think creating a whole new post would actually be just fine, based on how I've seen, say, Eliezer, Robin Hanson, Zvi, Ben Hoffman and Sarah Constantin respond to each other in longform on occasion. In other cases creating a whole new post might go over less well, and/or might be a bit of an experiment rather than a tried-and-true-solution, but I think it's the correct experiment to try)

Also want to be clear - if authors are banning or asking lots of users to avoid criticism, I do think the author should take something of a social hit as "a person who can't accept any criticism". But I nonetheless think it's still a better metanorm for established authors to have control over their post's discussion area.

[The LessWrong team is currently trying to develop a much clearer understanding of what good moderation policies are, which might result in some of my opinions changing over the next few weeks, this is just a quick summary of what I currently believe]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-29T22:47:59.704Z · LW(p) · GW(p)

Also want to be clear—if authors are banning or asking lots of users to avoid criticism, I do think the author should take something of a social hit as “a person who can’t accept any criticism”. But I nonetheless think it’s still a better metanorm for established authors to have control over their post’s discussion area.

Quite. A suggestion, then, if I may: display “how many people has this person banned from their posts” (with, upon a click or mouseover or some such, the full list of users available, who have been thus banned) prominently, when viewing a person’s post (somewhere near the post’s author line, perhaps). This way, if I open a post by one Carol, say, I can see at once that she’s banned 12 people from her posts; I take note of this (as that is unusually many); I then click/mouseover/etc., and see either that all the banned accounts are known trolls and curmudgeons (and conclude that Carol is a sensible person with a low tolerance for low-grade nonsense), or that all the banned accounts are people I judge to be reasonable and polite (and conclude that Carol is a prima donna with a low tolerance for having her ideas challenged).

Replies from: Raemon
comment by Raemon · 2019-07-29T22:57:07.964Z · LW(p) · GW(p)

Something in that space seems basically reasonable. Note that I haven't prioritized cleaning up (and then improving visibility for) the moderation log [? · GW] in part because the list of users who have ever banned users is actually just extremely short, and meanwhile there's a lot of other site features that seem higher priority.

I have been revisiting it recently and think it'd be a good thing to include in the nearish future (esp. if I am prioritizing other features that'd make archipelago-norms more likely to actually get used), but for the immediate future I actually think just saying to the few people who've expressed concerns 'yo, when you look at the moderation log almost nobody has used it' is the right call given limited dev time.

comment by Said Achmiz (SaidAchmiz) · 2019-07-29T22:42:51.257Z · LW(p) · GW(p)

I meant to say, if the typical LessWrong user wrote a single comment in reply to a post saying “this seems wrong”, I would expect that to basically be fine.

Ah, I see. Well, yes. But then, that’s also what I was saying: this sort of thing is generally fine as a comment, but as a post…

I only recommend the “create a whole new post” thing when an author specifically asks you to stop commenting.

I entirely understand your intention here, but consider: this would be even worse, “optics”-wise! “So,” thinks the reader, “this guy was so annoying, with his contrarian objections, that the victim of his nitpicking actually asked him to stop commenting; but he can’t let it go, so he wrote a whole post about it?!” And of course this is an uncharitable perspective, and one which isn’t consistent with “good truth-seeking norms”, etc. But… do you doubt that this is the sort of impression that will, if involuntarily, be formed in the minds of the commentariat?

Replies from: Raemon
comment by Raemon · 2019-07-29T22:53:08.514Z · LW(p) · GW(p)

I'm fairly uncertain here. But I don't currently share the intuition.

Note that the order of events I'm suggesting is:

1. Author posts.

2. Commenter says "this seems wrong / bad". Disagreement ensues

3. Author says "this is annoying enough that I'd prefer you not to comment on my posts anymore." [Hopefully, although not necessarily, the author does this knowing that they are basically opting into you now being encouraged by LessWrong moderators to post your criticism elsewhere if you think it's important. This might not currently be communicated that well but I think it should be]

4. Then you go and write a post titled 'My thoughts on X' or 'Alternative Conversation about X' or whatever, that says 'the author seems wrong / bad.'

By that point, sure it might be annoying, but it's presumably an improvement from the author's take. (I know that if I wanted to write a post about some high level Weird Introspection Stuff that took a bunch of Weird Introspection Paradigm stuff for granted, I'd personally probably be annoyed if you made the discussion about whether the Weird Introspection Paradigm was even any good, and much less annoyed if you wrote another post saying so.)

I might be typical minding, but two important bits from my perspective are 'getting to have the conversation that I actually wanted to have', and 'not being forced to provide my own platform for someone else who I don't think is arguing in good faith'.

comment by Raemon · 2019-07-16T22:52:17.316Z · LW(p) · GW(p)

Addendum: my Strategies of Personal Growth [LW · GW] post is also particularly downstream of CFAR. (I realize that much of it is something you can find elsewhere. My perspective is that the main product CFAR provides is a culture that makes it easier to orient toward this sort of thing, and stick with it. CFAR iterates on "what combination of techniques can you present to a person in 4 days that best help jump-start them into that culture?", and they chose that feedback-loop-cycle after exploring others and finding them less effective)

One salient thing from the Strategies of Personal Growth perspective (which I attribute to exploration by CFAR researchers) is that many of the biggest improvements you can gain come from healing and removing psychological blockers.

comment by Said Achmiz (SaidAchmiz) · 2019-07-16T22:13:53.090Z · LW(p) · GW(p)

Separately from my other comment, I will say—

“There is variation on what techniques are useful for what people, and Said’s mind is shaped such that the CFAR paradigm isn’t useful, and it will never be legible to Said that the CFAR paradigm is useful.”

This happens to be phrased such that it could be literally true, but the implication—that the CFAR paradigm is in fact useful (to some people), and that it could potentially be useful (to some people) but the fact of its usefulness could be illegible to me—cannot be true. (Or, to be more precise, it cannot be true simultaneously with the claim being true that “the CFAR paradigm” constitutes CFAR succeeding at finding [at least part of] what they were looking for. Is this claim being made? It seems like it is—if not, that should be made clear!)

The reason is simple: the kind of thing that CFAR (claimed to have) set out to look for, is the kind of thing that should be quite legible even to very skeptical third parties. “We found what we were looking for, but you just can’t tell that we did” is manifestly an evasion.

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-16T22:47:29.721Z · LW(p) · GW(p)

The reason is simple: the kind of thing that CFAR (claimed to have) set out to look for, is the kind of thing that should be quite legible even to very skeptical third parties.

What is your current model of what CFAR "claimed to have set out to look for"? I don't actually know of much of an explicit statement of what CFAR was trying to look for, besides the basic concept of "applied rationality".

comment by ESRogs · 2019-07-16T21:42:07.423Z · LW(p) · GW(p)
But while it may have been sensible to start (fully 10 years ago, now!)

Correction: CFAR was started in 2012 (though I believe some of the founders ran rationality camps the previous summer, in 2011), so it's been 7 (or 8) years, not 10.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-16T21:56:05.651Z · LW(p) · GW(p)

Less Wrong, however, was launched in 2009, and that is what I was referring to (namely, Eliezer’s posts about the community and Bayesian dojos and so forth).

comment by Zack_M_Davis · 2019-07-13T22:48:46.337Z · LW(p) · GW(p)

a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding

This is certainly theoretically possible, but I'm very suspicious of it on reversal test grounds: if additional prior reading is bad, then why isn't less prior reading even better? Should aspiring rationalists not read the Sequences for fear of an incomplete understanding spoiling themselves for some future $3,900 CfAR workshop? (And is it bad that I know about the reversal test without having attended a CfAR workshop?)

I feel the same way about schoolteachers who discourage their students from studying textbooks on their own (because they "should" be learning that material by enrolling in the appropriate school course). Yes, when trying to learn from a book, there is some risk of making mistakes that you wouldn't make with the help of a sufficiently attentive personal tutor (which, realistically, you're not going to get from attending lecture classes in school anyway). But given the alternative of placing my intellectual trajectory at the mercy of an institution that has no particular reason to care about my welfare, I think I'll take my chances.

Note that I'm specifically reacting to the suggestion that people not read things for their own alleged benefit. If the handbook had just said, "Fair warning, this isn't a substitute for the workshop because there's a lot of stuff we don't know how to teach in writing," then fine; that seems probably true. What I'm skeptical of is hypothesized non-monotonicity whereby additional lower-quality study allegedly damages later higher-quality study. First, because I just don't think it's true on the merits: I falsifiably predict that, e.g., math students who read the course textbook on their own beforehand will do much better in the course than controls who haven't. (Although the pre-readers might annoy teachers whose jobs are easier if everyone in the class is obedient and equally ignorant.) And second, because the general cognitive strategy of waiting for the designated teacher to spoonfeed you the "correct" version carries massive opportunity costs when iterated (even if spoonfeeding is generally higher-quality than autodidactism, and could be much higher-quality in some specific cases).

Replies from: PeterMcCluskey, ChristianKl, Elo
comment by PeterMcCluskey · 2019-07-14T21:21:47.023Z · LW(p) · GW(p)

You use math as an example, but that's highly focused on System 2 learning. That suggests that you have false assumptions about what CFAR is trying to teach.

There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc. Most of those analogies are fairly imperfect, and some have partially useful written instructions (in the case of meditation, the written version might have lagged in-person instruction by many centuries). Circling is the example that I'd consider most apt, but it won't mean much to people who haven't taken a good circling workshop.

A different analogy, which more emphasizes the costs of false assumptions: people often imagine that economics teaches something like how to run a good business or how to predict the stock market, because there isn't any slot in their worldview for what a good economics course actually teaches. There are plenty of mediocre executive summaries of economics, which fail to convey to most people that economics requires a pervasive worldview shift (integrating utilitarianism, empiricism about preferences, and some counterintuitive empirical patterns).

The CFAR handbook is more like the syllabus for an economics course than it is like an economics textbook, and a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers. (This analogy is imperfect because economics textbooks have been written, unlike a CFAR textbook.)

Maybe CFAR is making a mistake, but it appears that the people who seem most confident about that usually seem to be confused about what it is that CFAR is trying to teach.

Reading the sequences, or reading about the reversal test, is unlikely to have much relevance to what CFAR teaches. Just be careful not to imagine that they're good examples of what CFAR is about.

Replies from: SaidAchmiz, Zack_M_Davis
comment by Said Achmiz (SaidAchmiz) · 2019-07-15T00:40:30.629Z · LW(p) · GW(p)

Sometimes, we don’t know how to teach a subject in writing because the subject matter is inherently about action (rather than concepts, analysis, explanation, prediction, numbers, words, etc.).

But sometimes, we don’t know how to teach a subject in writing because there is, in fact, nothing (or, at best, nothing much) to be taught. Sometimes, a subject is actually empty (or mostly empty) of content.

In the latter case, attempting to write it down reveals this (and opens the alleged “content” to criticism)—whereas in person, the charisma of the instructors, the social pressure of being in a group of others who are there to receive the instruction, possibly the various biases associated with having made some costly sacrifice (time, money, etc.) to be there, possibly the various biases associated with the status dynamics at play (e.g. if the instructors are respected, or at least if those around you act as if they are), all serve to mask the fundamental emptiness of what is being “taught”.

I leave it to the reader to discern which of the given examples fall into which category. I will only note that while the subjects found in the former category are often difficult to teach, nevertheless one’s mastery of them, and their effectiveness, is usually quite easy to verify—because action can be demonstrated.

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2019-07-15T18:57:24.277Z · LW(p) · GW(p)

Meditation is action, in some important sense, and mostly can't be demonstrated.

It is hard to reliably distinguish between the results of peer pressure and actual learning. I think CFAR's best reply to this has been its refund policy: last I knew they offered full refunds to anyone who requested it within one year (although I can't find any online mention of their current policy).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-07-15T19:20:20.808Z · LW(p) · GW(p)

Meditation is action, in some important sense, and mostly can’t be demonstrated.

Everything is “action” in “some sense”. (Whether that sense is “important”, in any given case, is a matter of perspective.)

As far as I am concerned—for the purposes of this topic—if it can’t be demonstrated, it ain’t action.

It is hard to reliably distinguish between the results of peer pressure and actual learning.

I submit to you that if this is true of any given case, then that is an excellent signal that no actual learning has taken place. (And the more true it is—the harder it is to distinguish between actual learning and the results of various biases, social pressure included—the stronger the signal is.)

comment by Zack_M_Davis · 2019-07-14T22:34:02.786Z · LW(p) · GW(p)

There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc.

Yes, I agree: for these subjects, the "there's a lot of stuff we don't know how to teach in writing" disclaimer I suggested in the grandparent would be a big understatement.

a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers

Useless, I can believe. (The extreme limiting case of "there's a lot of stuff we don't know how to teach in this format" is "there is literally nothing we know how to teach in this format.") But harmful? How? Won't the unexpected syllabus section titles at least disabuse them of their bad assumptions?

Reading the sequences [...] is unlikely to have much relevance to what CFAR teaches.

Really? The tagline on the website says, "Developing clear thinking for the sake of humanity’s future." I guess I'm having trouble imagining a developing-clear-thinking-for-the-sake-of-humanity's-future curriculum for which the things we write about on this website would be irrelevant. The "comfort zone expansion" exercises I've heard about would qualify, but Sequences-knowledge seems totally relevant to something like, say, double crux [LW · GW].

(It's actually pretty weird/surprising that I've never personally been to a CfAR workshop! I think I've been assuming that my entire social world has already been so anchored on the so-called "rationalist" community for so long, that the workshop proper would be superfluous.)

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2019-07-15T18:53:08.854Z · LW(p) · GW(p)

The idea that CFAR would be superfluous is fairly close to the kind of harm that CFAR worries about. (You might have been right to believe that it would have been superfluous in 2012, but CFAR has changed since then in ways that it hasn't managed to make very legible.)

I think meditation provides the best example for illustrating the harm. It's fairly easy to confuse simple meditation instructions (e.g. focus on your breath, sit still with a straight spine) with the most important features of meditation. It's fairly easy to underestimate the additional goals of meditation, because they're hard to observe and don't fit well with more widely accepted worldviews.

My experience suggests that getting value out of meditation is heavily dependent on a feeling (mostly at a system 1 level) that I'm trying something new, and there were times when I wasn't able to learn from meditation, because I mistakenly thought that focusing on my breath was a much more central part of meditation than it actually is.

The times when I got more value out of meditation were times when I tried new variations on the instructions, or new environments (e.g. on a meditation retreat). I can't see any signs that the new instructions or new environment were inherently better at teaching meditation. It seems to have been mostly that any source of novelty about the meditation makes me more alert to learning from it.

My understanding is that CFAR is largely concerned that participants will mistakenly believe that they've already learned something that CFAR is teaching, and that will sometimes be half-true - participants may know it at a system 2 level, when CFAR is trying to teach other parts of their minds that still reject it.

I think I experienced that a bit, due to having experience with half-baked versions of early CFAR before I took a well-designed version of their workshop. E.g. different parts of my mind have different attitudes to acknowledging my actual motivations when they're less virtuous than the motivations that my system 2 endorses. I understood that pretty well at some level before CFAR existed, yet there are still important parts of my mind that cling to self-deceptive beliefs about my motives.

CFAR likely can't teach a class that's explicitly aimed at that without having lots of participants feel defensive about their motives, in a way that makes them less open to learning. So they approach it via instruction that is partly focused on teaching other things that look more mundane and practical. Those other things often felt familiar enough to me that I reacted by saying: I'll relax now and conserve my mental energy for some future part of the curriculum that's more novel. That might have led me to do the equivalent of what I did when I was meditating the same way repeatedly without learning anything new. How can I tell whether that caused me to miss something important?

comment by ChristianKl · 2019-07-25T15:43:45.809Z · LW(p) · GW(p)

A key problem is moving from knowing about a technique to action. When you know 20 techniques and use none of them, it's harder to get you to actually use the 21st technique that you are taught than if you start out with fewer techniques in your head and have no established pattern of not doing any of the exercises that you were taught.

There's less that needs unlearning if you haven't been exposed to material beforehand.

I would still err on the side of being more public with information, but I do understand that there is a tradeoff.

comment by Elo · 2019-07-14T02:22:01.538Z · LW(p) · GW(p)

I can offer an explanation that might fit. Rationalists tend toward expertise-mode thinking (Expert, from the Torbert action logic framework). Behaviour like reading the book is in line with expert behaviour.

CFAR techniques and related in-person methods are not always about being the expert; they are about doing the best thing. Being a better expert is not always the same as being the better munchkin, the better person or the person who can step out of their knowledge beliefs.

In theory, the expert thing is the best thing. In theory there's no difference between theory and practice, in practice, there's a big difference between theory and practice.

Having said that, I've never done CFAR, and I teach workshops monthly in Sydney, and I think they are wrong to discourage sharing of their resources. At the same time I accept the idea of intellectual property being protected even if that's not the case they are claiming.

(I'm in the process of writing up my resources into a collection)

Replies from: roland, SaidAchmiz
comment by roland · 2019-07-14T10:29:01.839Z · LW(p) · GW(p)

At the same time I accept the idea of intellectual property being protected even if that’s not the case they are claiming.

I suspect that this is the real reason. Although, given that the much vaster Sequences by Yudkowsky are freely available, I don't see it as a good justification for not making the CFAR handbook available.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-14T22:40:43.605Z · LW(p) · GW(p)

I suspect that this is the real reason.

It's pretty uncharitable of you to just accuse CfAR of lying like that! If the actual reason were "Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don't want to undercut our core product," then the handbook would have just said that.

Replies from: jessica.liu.taylor, roland
comment by jessicata (jessica.liu.taylor) · 2019-07-15T00:06:31.769Z · LW(p) · GW(p)

Wait, are you invoking the principle of charity as an epistemic axiom ("assume people don't lie")? Why would that be truth-aligned at all?

If you didn't mean to invoke the principle of charity, why not just say it's likely to be incorrect based on priors, CFAR's reputation, etc, instead of using the word "uncharitable" as an insult?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-15T03:32:51.854Z · LW(p) · GW(p)

You caught me—introspecting, I think the grandparent was written in a spirit of semi-deliberate irony. ("Semi" because it just felt like the "right" thing to say there; I don't think I put a lot of effort into modeling how various readers would interpret it.)

Roland is speculating that the real reason for intentionally incomplete explanations in the handbook is different from the stated reason, and I offered a particularly blunt phrasing ("we don't want to undercut our core product") of the hypothesized true reason, and suggested that that's what the handbook would have said in that case. I think I anticipated that a lot of readers would find my proposal intuitively preposterous: "everyone knows" [LW · GW] that no one would matter-of-factly report such a self-interested rationale (especially when writing on behalf of an organization, rather than admitting a vice among friends). That's why the earlier scenes in the 2009 film The Invention of Lying, or your post "Act of Charity", are (typically) experienced as absurdist comedy rather than an inspiring and heartwarming portrayal of a more truthful world.

But it shouldn't be absurd for the stated reason and the real reason to be the same! Particularly for an organization like CfAR which is specifically about advancing the art of rationality. And, I don't know—I think sometimes I talk in a way that makes me seem more politically naïve than I actually am, because I feel as if the "naïve" attitude is in some way normative? ("You really think someone would do that? Just go on the internet and tell lies?") Arguably this is somewhat ironic (being deceptive about your ability to detect deception is probably not actually the same thing as honesty), but I haven't heretofore analyzed this behavioral pattern of mine in enough detail to potentially decide to stop doing it??

I think another factor might be that I feel guilty about being "mean" to CfAR in the great-great-great grandparent comment? (CfAR isn't a person and doesn't have feelings, but my friend who works there is and does.) Such that maybe the emotional need to signal that I'm still fundamentally loyal to the "mainstream rationality" tribe (despite the underlying background situation where I've been collaborating with you and Ben and Michael to discuss what you see as fatal deficits of integrity in "the community" as presently organized) interacted with my preëxisting tendency towards semi-performative naiveté in a way that resulted in me writing a bad blog comment? It's a good thing you were here to hold me to account for it!

Replies from: clone of saturn, Raemon
comment by clone of saturn · 2019-07-15T09:23:25.007Z · LW(p) · GW(p)

I thought your comment was fine and the irony was obvious, but this kind of misunderstanding can be easily avoided by making the straightforward reading more boring, like so:

Given that CfAR is an organization which is specifically about seeking truth, one could safely assume that if the actual reason were “Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don’t want to undercut our core product,” then the handbook would have just said that. To do otherwise would be to call the whole premise into question!

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-07-15T16:35:02.446Z · LW(p) · GW(p)

obvious

Yeah, I would have expected Jessica to get it, except that I suspect she's also executing a strategy of habitual Socratic irony (but without my additional innovation of immediately backing down and unpacking the intent when challenged), which doesn't work when both sides of a conversation are doing it.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-15T16:51:44.268Z · LW(p) · GW(p)

I actually didn't get it. I was confused but I didn't consciously generate the hypothesis that it was ironic.

I think I don't share the background assumption that it is overwhelmingly obvious that CFAR wouldn't tell the truth about this in their handbook. I also reflectively endorse a policy of calling out things that could easily be mistaken for sincere (though not obvious sarcasm), in order to ensure common knowledge.

comment by Raemon · 2019-07-15T19:52:24.278Z · LW(p) · GW(p)

Quick note for your model of how people interpret various kinds of writing: my initial read of your comment was to put a 60% probability on "Zack is currently undergoing a pendulum swing in the direction away from calling people out on lying, and overcompensating" (which was wrong and/or overconfident on my part).

comment by roland · 2019-07-15T10:04:23.267Z · LW(p) · GW(p)

It’s pretty uncharitable of you to just accuse CfAR of lying like that!

I wasn't; rather, I suspect them of being biased.

comment by Said Achmiz (SaidAchmiz) · 2019-07-14T07:16:00.099Z · LW(p) · GW(p)

In theory, the expert thing is the best thing. In theory there’s no difference between theory and practice, in practice, there’s a big difference between theory and practice.

We’ve all heard this sort of thing many times, of course. The best response is probably Schopenhauer’s:

“That’s all very well in theory, but it won’t do in practice.” In this sophism you admit the premisses but deny the conclusion, in contradiction with a well-known rule of logic. The assertion is based upon an impossibility: what is right in theory must work in practice; and if it does not, there is a mistake in the theory; something has been overlooked and not allowed for; and, consequently, what is wrong in practice is wrong in theory too.

the torbert action logic framework

You are, I assume, referring to the ideas of this person? He appears to be some variety of management consultant. Is there any reason to take this “action logic” of his seriously? It seems to be yet another among the many, many self-help / management consulting / etc. “frameworks” or “systems” etc. Do any of his ideas have any empirical verification, or… well, anything, really?

Replies from: Elo
comment by Elo · 2019-07-14T07:36:02.796Z · LW(p) · GW(p)

take it seriously?

That's up to you. I've got a lot of value from the structure he outlines. It's a lot more reasoned than some of the other mysterious odd things I read.

If there is something wrong with the theory and the way it maps to practice, is it better to read more theory, or to do more practice and make new theories? I would suggest it depends on the person and what they have found to work in the past, and also on an awareness of bad-habit loops - "sharpen the saw" type problems. Sometimes it's more valuable to stop sharpening the saw and start cutting down the tree. (The rationality frame of thinking loves to sharpen more and cut less.)

comment by Tetraspace (tetraspace-grouping) · 2019-07-16T22:33:48.141Z · LW(p) · GW(p)

I'm off from university (3rd year physics undergrad) for the summer and hence have a lot of free time, and I want to use this to make as much progress as possible towards the goal of getting a job in AI safety technical research. I have found that I don't really know how to do this.

Some things that I can do:

  • work through undergrad-level maths and CS textbooks
  • basic programming (since I do physics, this is at the level required to implement simple numerical methods in MATLAB)
  • the stuff in Andrew Ng's machine learning Coursera course

Thus far I've worked through the first half of Hutton's Programming in Haskell on the grounds that functional programming maybe teaches a style of thought that's useful and opens doors to more theoretical CS stuff.

I'm optimising for something slightly different from purely becoming good at AI safety, in that at the end I'd like to have some legible things to point to or list on a CV or something (or become better-placed to later acquire such legible things).

I'd be interested to hear from people who know more about what would be helpful for this.

comment by limerott · 2019-07-16T17:13:44.508Z · LW(p) · GW(p)

Hi! I've known about LW for quite a while, but only now decided to join. I remember reading a comment here and thinking "I like how this person thinks". Needless to say, this is not a common experience I have on the internet. What I hope to get from this site are fruitful intellectual discussions that trip me up and reveal the flaws in my reasoning :)

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-16T16:29:53.432Z · LW(p) · GW(p)

Welcome limerott! I hope your time on the site will be well-spent and feel free to ask any questions here in the Open Thread, or via Intercom in the bottom right corner.

comment by Alexei · 2019-07-06T02:12:15.855Z · LW(p) · GW(p)

Hypothesis: there are fewer comments per user on LW 2.0 than on the old LW, because the user base is more educated as to where they have a valuable opinion vs. where they don't.

Replies from: habryka4, PeterMcCluskey, Pattern
comment by habryka (habryka4) · 2019-07-06T02:19:26.385Z · LW(p) · GW(p)

Ruby can probably get us the answer to whether the premise is true next week. Seems likely true, but I would only give it 75%.

Replies from: Ruby
comment by Ruby · 2019-07-10T22:52:21.344Z · LW(p) · GW(p)

The question is fuzzier than it might seem at first. The issue is that the size of the commenter population changes too. You can have a world where the number of very frequent commenters has gone up, but the average per commenter has gone down because the number of infrequent commenters has grown even faster than the number of frequent commenters.


There are also so many possible causes for growth/decline, changes in frequency, etc. that I don't think you could really link them to a mechanism as specific as being more educated about where your opinion is valuable. Though I'd definitely link the number of comments per person to the number of other active commenters and the number of conversations going on - a network-effects kind of thing.


Anyhow, some graphs:

Indeed, the average (both mean and median) comments per active commenter each week has gone down.

But it's generally the case that the number of comments and commenters went way down, recovering only in late 2017, around the time of Inadequate Equilibria and LessWrong 2.0.

Replies from: Ruby
comment by Ruby · 2019-07-10T22:52:52.399Z · LW(p) · GW(p)

We can also look at the composition of commenting frequency. Here I've binned the commenters for each week by how many comments they made and looked at how the bins have changed. The top graph is overall volume; the bottom graph is the percentage of the commenting population in each frequency bucket:

I admit that we must conclude that high-frequency commenters (4+ comments/week) have diminished in absolute numbers and as a percentage over time, though with a slight upward trend in the last six months.
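(If anyone wants to reproduce this kind of breakdown from a comment export, here is a minimal sketch. The comments.csv file, its user_id / posted_at columns, and the bucket edges are all assumptions for illustration, not the actual data or query behind the graphs above.)

```python
import pandas as pd

# Hypothetical export: one row per comment, with the commenter and timestamp.
comments = pd.read_csv("comments.csv", parse_dates=["posted_at"])
comments["week"] = comments["posted_at"].dt.to_period("W")

# Comments per active commenter, per week.
per_user_week = (
    comments.groupby(["week", "user_id"])
    .size()
    .rename("n_comments")
    .reset_index()
)
weekly_stats = per_user_week.groupby("week")["n_comments"].agg(["mean", "median"])

# Bin commenters by weekly frequency (bucket edges are illustrative: 1, 2-3, 4+).
bins = [0, 1, 3, float("inf")]
labels = ["1/week", "2-3/week", "4+/week"]
per_user_week["bucket"] = pd.cut(per_user_week["n_comments"], bins=bins, labels=labels)
bucket_counts = per_user_week.groupby(["week", "bucket"]).size().unstack(fill_value=0)
bucket_share = bucket_counts.div(bucket_counts.sum(axis=1), axis=0)  # share per bucket

print(weekly_stats.tail())
print(bucket_share.tail())
```

The same per_user_week table supports both the mean/median series and the frequency buckets, which keeps the two kinds of graph consistent with each other.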

Replies from: ryan_b
comment by ryan_b · 2019-07-11T13:53:03.946Z · LW(p) · GW(p)

Great work!

Are there any obvious tie-ins to the launch of the Alignment Forum? It seems plausible that the people who were here almost exclusively for the AI research posts might have migrated there.

Alternatively, if Alignment Forum is in fact counted, it might be that the upward trend reflects growth in that segment.

comment by PeterMcCluskey · 2019-07-07T01:48:58.115Z · LW(p) · GW(p)

I'll suggest the main change is that open threads get fewer comments, because the system makes open threads less conspicuous and provides alternatives, such as questions, that are decent substitutes for comments.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2019-07-10T15:59:46.855Z · LW(p) · GW(p)

I think the Hypothesis is not about Open Threads specifically

comment by Pattern · 2019-07-08T20:56:14.529Z · LW(p) · GW(p)

Upvoted for testability.

comment by Aleksi Liimatainen (aleksi-liimatainen) · 2019-07-14T17:10:10.295Z · LW(p) · GW(p)

I noticed I was confused about how humans can learn novel concepts from verbal explanations without running into the symbol grounding problem. After some contemplation, I came up with this:

To the extent language relies on learned associations between linguistic structures and mental content, a verbal explanation can only work with what's already there. Instead of directly inserting new mental content, the explanation must leverage the receiving mind's established content in a way that lets the mind generate its own version of the new content.

There's enough to say about this that it seems worth a post or several but I thought I'd float it here first. Has something like this been written already?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2019-07-14T19:56:25.209Z · LW(p) · GW(p)

Constructivist learning theory is a relevant keyword; its premise is pretty much directly the same as in your quote (my emphasis added):

An important restriction of education is that teachers cannot simply transmit knowledge to students, but students need to actively construct knowledge in their own minds. That is, they discover and transform information, check new information against old, and revise rules when they no longer apply. This constructivist view of learning considers the learner as an active agent in the process of knowledge acquisition. Constructivist conceptions of learning have their historical roots in the work of Dewey (1929), Bruner (1961), Vygotsky (1962), and Piaget (1980). Bednar, Cunningham, Duffy, and Perry (1992) and von Glasersfeld (1995) have proposed several implications of constructivist theory for instructional developers stressing that learning outcomes should focus on the knowledge construction process and that learning goals should be determined from authentic tasks with specific objectives. Similarly, von Glasersfeld (1995) states that learning is not a stimulus-response phenomenon, but a process that requires self-regulation and the development of conceptual structures through reflection and abstraction. It is important to note, in this respect, that constructivism is embodied in numerous ways and that these different views share important overlaps, but also contain major differences.

Constructivism is an approach to teaching and learning based on the premise that cognition (learning) is the result of "mental construction." In other words, students learn by fitting new information together with what they already know.

It's a big topic in educational research, so there's a lot of stuff about it out there. E.g. "How Learning Works: Seven Research-Based Principles for Smart Teaching" summarizes some research on it:

Students connect what they learn to what they already know, interpreting incoming information, and even sensory perception, through the lens of their existing knowledge, beliefs, and assumptions (Vygotsky, 1978; National Research Council, 2000). In fact, there is widespread agreement among researchers that students must connect new knowledge to previous knowledge in order to learn (Bransford & Johnson, 1972; Resnick, 1983). However, the extent to which students are able to draw on prior knowledge to effectively construct new knowledge depends on the nature of their prior knowledge, as well as the instructor's ability to harness it. In the following sections, we discuss research that investigates the effects of various kinds of prior knowledge on student learning and explore its implications for teaching.

One related example of it that I particularly like is in the paper Building Islands of Expertise in Everyday Family Activity. It discusses how a boy who's interested in trains initially learns stuff about trains, which then helps him learn more about other stuff as well:

By the time the boy turns 3-years old, he has developed an island of expertise around trains. His vocabulary, declarative knowledge, conceptual knowledge, schemas, and personal memories related to trains are numerous, well-organized, and flexible. Perhaps more importantly, the boy and his parents have developed a relatively sophisticated conversational space for trains. Their shared knowledge and experience allow their talk to move to deeper levels than is typically possible in a domain where the boy is a relative novice. For example, as the mother is making tea one afternoon, the boy notices the steam rushing out of the kettle and says: “That’s just like a train!” The mother might laugh and then unpack the similarity to hammer the point home: “Yes it is like a train! When you boil water it turns into steam. That’s why they have boilers in locomotives. They heat up the water, turn it into steam, and then use the steam to push the drive wheels. Remember? We saw that at the museum.”

In contrast, when the family was watching football—a domain the boy does not yet know much about—he asked “Why did they knock that guy down?” The mother’s answer was short, simple, stripped of domain-specific vocabulary, and sketchy with respect to causal mechanisms—“Because that’s what you do when you play football.” Parents have a fairly good sense of what their children know and, often, they gear their answers to an appropriate level. When talking about one of the child’s islands of expertise, parents can draw on their shared knowledge base to construct more elaborate, accurate, and meaningful explanations. This is a common characteristic of conversation in general: When we share domain-relevant experience with our audience we can use accurate terminology, construct better analogies, and rely on mutually held domain-appropriate schema as a template through which we can scribe new causal connections.

As this chapter is being written, the boy in this story is now well on his way to 4 years old. Although he still likes trains and still knows a lot about them, he is developing other islands of expertise as well. As his interests expand, the boy may engage less and less often in activities and conversations centered around trains and some of his current domain-specific knowledge will atrophy and eventually be lost. But as that occurs, the domain-general knowledge that connected the train domain to broader principles, mechanisms, and schemas will probably remain. For example, when responding to the boy’s comment about the tea kettle, the mother used the train domain as a platform to talk about the more general phenomenon of steam.

Trains were platforms for other concepts as well, in science and in other domains. Conversations about mechanisms of locomotion have served as a platform for a more general understanding of mechanical causality. Conversations about the motivation of characters in the Thomas the Tank Engine stories have served as platforms for learning about interpersonal relationships and, for that matter, about the structure of narratives. Conversations about the time when downtown Pittsburgh was threaded with train tracks and heavy-duty railroad bridges served as a platform for learning about historical time and historical change. These broader themes emerged for the boy for the first time in the context of train conversations with his parents. Even as the boy loses interest in trains and moves on to other things, these broader themes remain and expand outward to connect with other domains he encounters as he moves through his everyday life.

Expertise research is also relevant; it talks about how people build up increasingly detailed mental representations of a domain they are learning, which guide them when they decide what actions to take. The representations start out coarse, but get increasingly detailed over time. This is an excerpt from the book Peak: Secrets from the New Science of Expertise, which notes a "chicken and egg" problem that requires grounding any skill in some pre-existing knowledge first:

As we’ve just seen from several studies, musicians rely on mental representations to improve both the physical and cognitive aspects of their specialties. And mental representations are essential to activities we see as almost purely physical. Indeed, any expert in any field can be rightly seen as a high-achieving intellectual where that field is concerned. This applies to pretty much any activity in which the positioning and movement of a person’s body is evaluated for artistic expression by human judges. Think of gymnastics, diving, figure skating, or dancing. Performers in these areas must develop clear mental representations of how their bodies are supposed to move to generate the artistic appearance of their performance routines. But even in areas where artistic form is not explicitly judged, it is still important to train the body to move in particularly efficient ways. Swimmers learn to perform their strokes in ways that maximize thrust and minimize drag. Runners learn to stride in ways that maximize speed and endurance while conserving energy. Pole-vaulters, tennis players, martial artists, golfers, hitters in baseball, three-point shooters in basketball, weightlifters, skeet shooters, and downhill skiers—for all of these athletes proper form is key to good performance, and the performers with the best mental representations will have an advantage over the rest. 
In these areas too, the virtuous circle rules: honing the skill improves mental representation, and mental representation helps hone the skill. There is a bit of a chicken-and-egg component to this. Take figure skating: it’s hard to have a clear mental representation of what a double axel feels like until you’ve done it, and, likewise, it is difficult to do a clean double axel without a good mental representation of one. That sounds paradoxical, but it isn’t really. You work up to a double axel bit by bit, assembling the mental representations as you go. 
It’s like a staircase that you climb as you build it. Each step of your ascent puts you in a position to build the next step. Then you build that step, and you’re in a position to build the next one. And so on. Your existing mental representations guide your performance and allow you to both monitor and judge that performance. As you push yourself to do something new—to develop a new skill or sharpen an old one—you are also expanding and sharpening your mental representations, which will in turn make it possible for you to do more than you could before.

Finally, while it only touches on this topic occasionally, Josh Waitzkin had some nice "from the inside" descriptions of this gradual construction of an increasing level of understanding in his book The Art of Learning:

I practiced the Tai Chi meditative form diligently, many hours a day. At times I repeated segments of the form over and over, honing certain techniques while refining my body mechanics and deepening my sense of relaxation. I focused on small movements, sometimes spending hours moving my hand out a few inches, then releasing it back, energizing outwards, connecting my feet to my fingertips with less and less obstruction. Practicing in this manner, I was able to sharpen my feeling for Tai Chi. When through painstaking refinement of a small movement I had the improved feeling, I could translate it onto other parts of the form, and suddenly everything would start flowing at a higher level. The key was to recognize that the principles making one simple technique tick were the same fundamentals that fueled the whole expansive system of Tai Chi Chuan.
This method is similar to my early study of chess, where I explored endgame positions of reduced complexity—for example king and pawn against king, only three pieces on the board—in order to touch high-level principles such as the power of empty space, zugzwang (where any move of the opponent will destroy his position), tempo, or structural planning. Once I experienced these principles, I could apply them to complex positions because they were in my mental framework. However, if you study complicated chess openings and middlegames right off the bat, it is difficult to think in an abstract axiomatic language because all your energies are preoccupied with not blundering. It would be absurd to try to teach a new figure skater the principle of relaxation on the ice by launching straight into triple axels. She should begin with the fundamentals of gliding along the ice, turning, and skating backwards with deepening relaxation. Then, step by step, more and more complicated maneuvers can be absorbed, while she maintains the sense of ease that was initially experienced within the simplest skill set.

Replies from: aleksi-liimatainen
comment by Aleksi Liimatainen (aleksi-liimatainen) · 2019-07-15T05:44:45.713Z · LW(p) · GW(p)

Thanks, this is exactly what I was looking for. Not a new idea then, though there's something to be said for semi-independent reinvention.

The obvious munchkin move would be to develop a reliable means of bootstrapping a basic mental model of constructivist learning and grounding it in the learner's own direct experience of learning. Turning the learning process on itself should lead to some amount of recursive improvement, right? Has that been tried?

Replies from: MakoYass
comment by mako yass (MakoYass) · 2019-07-17T05:33:11.619Z · LW(p) · GW(p)
though there's something to be said for semi-independent reinvention.

:D

(I am delighted because constructivism is what is to be said for semi-independent reinvention, which aleksi just semi-independently constructed, thereby doing a constructivism on constructivism)

comment by ryan_b · 2019-07-10T19:14:55.557Z · LW(p) · GW(p)

Has anyone been to DynamicLand in Berkeley? If so, what did you think of it?

Replies from: habryka4, gwillen
comment by habryka (habryka4) · 2019-07-10T21:39:45.469Z · LW(p) · GW(p)

I was there before it was fully done. As a person with a strong interest in UX I found it quite exciting.

It definitely tries to be a modern Xerox PARC or something like that, and it really does feel like it's doing a lot of interesting things in the UI space. I have a really hard time telling whether any of the UI ideas they are experimenting with will actually turn out to be useful and widely adopted, but it definitely helped me think about UX in a better way.

Replies from: ryan_b
comment by ryan_b · 2019-07-11T15:41:01.909Z · LW(p) · GW(p)

It does seem to me like the kind of thing that would allow capitalizing strongly on something like a shared technical understanding. But that would be very difficult to pull off, because the overlap of people with shared technical understanding and advanced UI understanding is small.

If I were to say something like "DynamicLand can add UX to any layer of abstraction," how would that sound?

comment by gwillen · 2019-07-30T00:54:48.531Z · LW(p) · GW(p)

I have been! I thought it was an interesting experiment, and I really hope they have another community day so I can visit again. I think there's probably a lot to learn from it, and I think there are things that you can only learn effectively by trying out weird experiments in real life to see "how this feels". But I don't really expect anything to directly come of it -- the project is pretty janky, and while it's a fantastic platform for tiny cute demos, I don't think any concrete part of it (other than the general sense of "this is an inspiration to try to go recreate this neat type of interaction in a more robust way") is really useful.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2019-07-30T09:56:09.019Z · LW(p) · GW(p)

I haven't been there, but I was reading about Dynamic Land just yesterday (via Dominic Cummings' blog), and I've read some of Bret Victor's writings. I approve of the ideas tremendously, but it's not clear to me that in practice the work has provided any more of an advance in "visual programming" than other efforts in this area. Beyond the decades-old WIMP (ETA: and spreadsheets) interface, none of these, it seems to me, ever make more than toy demos. I have never seen them scale up to real power tools that someone would use to accomplish something. Ideas like these have been around long enough that toys and dreams will no longer do.

There are lathes that can make all of their own parts. Could Dynamic Land create Dynamic Land? What would such a system look like if it could?

Replies from: gwillen
comment by gwillen · 2019-07-30T21:04:51.796Z · LW(p) · GW(p)

I agree that it only makes toy demos, but it definitely goes beyond WIMP. It's a simulation of the sort of interface one might expect in a future where every surface is a screen -- it's a janky, extremely-low-fidelity simulation, which holds together barely well enough to serve the purpose, but it does serve, and it's an interesting way to try out this interaction style.

A surprisingly large part of Dynamic Land is actually (in some sense) self-hosting. There is a small core/kernel that is extrinsic, including e.g. the parser for the language. But a lot of low-level functionality is indeed implemented inside the interpreter (as I recall, they use a sort of pseudo-Lua, crossed with Smalltalk-like agent stuff -- my vague recollection is that there's something like a Lua interpreter with a custom preprocessor underlying it.)

comment by Richard_Kennaway · 2019-07-08T13:20:41.969Z · LW(p) · GW(p)

Recently out: "The Transhumanism Handbook" ed. Newton Lee (Springer, 2019). Costs money, of course, but you can see the table of contents, the abstracts, and the references for each paper for free. It contains:

  • 5 chapters on yay, transhumanism!
  • 10 on AI
  • 12 on longevity
  • 5 on biohacking
  • 3 on cryptocurrency
  • 5 on art
  • 16 on society and ethics
  • 10 on philosophy and religion

comment by Sherrinford · 2019-07-23T06:08:54.106Z · LW(p) · GW(p)

Is there exactly one RSS feed for lesswrong.com, i.e. https://www.lesswrong.com/feed.xml [? · GW]? I know too little about the technical side - is it easily possible for you to add different RSS feeds?

Replies from: habryka4
comment by habryka (habryka4) · 2019-07-23T16:09:40.702Z · LW(p) · GW(p)

There are quite a few RSS feeds. You can construct them by clicking on the "Subscribe via RSS" button on the frontpage:

[Image: Frontpage RSS options]

Replies from: Sherrinford
comment by Sherrinford · 2019-07-29T20:16:03.399Z · LW(p) · GW(p)

Thanks!

comment by ryan_b · 2019-07-10T14:44:37.673Z · LW(p) · GW(p)

Reading about Julia libraries for geometric algebra, I found Grassmann.jl. It is going to require more knowledge of advanced algebra than I have to use effectively, but while reading about it I noticed the author describing how the library can handle very high dimension counts. They claim ~4.6e18 dimensions.

That's a lotta dimensions!
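(An aside on where that number probably comes from, since it looks less arbitrary once unpacked: a geometric algebra on n generators has 2^n basis blades, and ~4.6e18 is exactly 2^62, which is what you would expect if blades are indexed by the bits of a 64-bit machine word with a couple of bits held back. That last part is my guess, not something stated in the comment above.)

$$2^{62} = 4{,}611{,}686{,}018{,}427{,}387{,}904 \approx 4.6 \times 10^{18}$$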

Replies from: Max D Porter
comment by Max D Porter · 2019-07-28T20:20:33.255Z · LW(p) · GW(p)

Only barely related, but Grassmann numbers are hilariously weird. Among other properties, their square is always zero (though they’re generally non-zero).
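(For readers who haven't run into them: the zero square falls straight out of anticommutation. This is standard exterior-algebra material, not anything specific to the library discussed above.)

$$\theta_i \theta_j = -\theta_j \theta_i \quad\Longrightarrow\quad \theta_i^2 = -\theta_i^2 \quad\Longrightarrow\quad \theta_i^2 = 0$$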

comment by ryan_b · 2019-07-09T18:20:41.833Z · LW(p) · GW(p)

The American National Institute of Standards and Technology has a draft plan for AI standards up. There is an announcement on their website; an announcement post [? · GW] on LessWrong; the plan itself on NIST's website; an outline of said plan [? · GW] on LessWrong.

Edit: changed style of links in response to the Please Give Your Links Speaking Names [LW · GW] post.

comment by Jason Geringer (jason-geringer) · 2019-07-26T22:18:44.278Z · LW(p) · GW(p)

Hey, what's up everybody, I'm Jason. I found LessWrong while researching questions for a quiz app I'm working on, called the Wisdom of the Crowd app. I got started doing that because I started a Facebook group called Wisdom of the Crowd, and I noticed right away that you are vibrating at my same frequency, so to speak. The rationality philosophy is what motivated me to start that FB group. It's only days old, but I got the impression that the idea I had is pretty similar to what you guys are doing here. I love your library, lol. I'm going to try to set up a group to test the "wisdom of the crowd" theory; it should be able to run a lot more experiments too, with the same parameters. So if any of you are interested in playing around with that, you are super welcome. Other than that, I'm a bit of a musician; I've been playing around recording covers and trying to make videos for them. Making videos is pretty fun. I guess I'm busy like everyone else, but open for collaboration. If you have any suggestions that would help me with what you've got going on here (so I don't miss something), let me know, lol.

Replies from: ryan_b
comment by ryan_b · 2019-07-27T15:05:54.214Z · LW(p) · GW(p)

The front page features are very useful for getting up to speed. Recently Curated shows newer posts the mods thought were important, From the Archives shows old posts that were well received, and Continue Reading helps keep track of the Sequences (the core content of the site) so you can consume them over time.

Welcome!

comment by Mary Chernyshenko (mary-chernyshenko) · 2019-07-20T21:03:36.074Z · LW(p) · GW(p)

Kind of a stupid question, actually. I Googled clothes for one-armed children (tried knitting, it didn't go as planned, thought I'd donate it), and there were far fewer search results than I'd expected. Is it because one-armed people just have their clothes re-sewn from ordinary stuff, or what? Or are there different keywords for it?

Replies from: eigen
comment by eigen · 2019-07-21T00:50:31.343Z · LW(p) · GW(p)

I would think they would just buy regular clothes, the same way that you cannot buy only one shoe of a pair.

Replies from: mary-chernyshenko
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-07-21T08:11:12.356Z · LW(p) · GW(p)

Still seems kind of inefficient, though :(

Replies from: ryan_b
comment by ryan_b · 2019-07-23T18:59:55.112Z · LW(p) · GW(p)

I wonder if there would be enough interest to support a kind of matching app that would let people put their Amazon wishlist up and then match them with someone who had the same item but opposite needs (ie left vs right shoe), and then split the cost.
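(A toy sketch of the matching logic, with made-up field names, just to illustrate how simple the core of such an app could be; nothing here is an actual product design.)

```python
from collections import defaultdict

# Hypothetical wishlist entries: (user, item, variant_needed).
# "variant" is whichever half of the pair the user needs, e.g. "left"/"right".
wishes = [
    ("alice", "trail-shoe-size-9", "left"),
    ("bob", "trail-shoe-size-9", "right"),
    ("carol", "glove-medium", "left"),
]

def match_opposites(wishes):
    """Pair up users who want the same item but opposite variants."""
    waiting = defaultdict(list)  # (item, variant) -> users still unmatched
    matches = []
    for user, item, variant in wishes:
        opposite = "right" if variant == "left" else "left"
        if waiting[(item, opposite)]:
            partner = waiting[(item, opposite)].pop()
            matches.append((partner, user, item))  # these two split the cost
        else:
            waiting[(item, variant)].append(user)
    return matches

print(match_opposites(wishes))  # [('alice', 'bob', 'trail-shoe-size-9')]
```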

Replies from: mary-chernyshenko
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-07-24T06:43:55.831Z · LW(p) · GW(p)

Could be. But it is still only shoes... and sending them to two different customers might drown any difference in cost.

comment by Liam Donovan (liam-donovan) · 2019-07-18T22:06:45.042Z · LW(p) · GW(p)

It seems like there are some intrinsic connections between the clusters of concepts known as "EA", "LW-style rationality", and "HRAD research"; is this a worrying sign?

Specifically, it seems like the core premise of EA relies largely on a good understanding of the world, in a systemic and explicit manner (because existing heuristics aren't selected for "maximizing altruism"[1]), linking closely to LW, which tries to answer the same question. At the same time, my understanding of HRAD research is that it aims to elucidate a framework for how consequentialist agents "ought to reason" in theory [LW · GW], so that the consequentialist reasoning of the first highly capable AI systems is legible to humans. Understanding how an idealized agent "ought to reason" or "ought to make decisions" seems highly relevant to the project of improving human rationality (which is then relevant to the EA project).

Now, imagine a world where HRAD is not a great use of resources (e.g. because AI risk is not a legitimate concern, because underlying philosophical assumptions are wrong, because the marginal tractability of alternate safety approaches is much higher, etc.). Would the basic connections between the ideas in the last paragraph still hold? I'm worried that they would, leading any community with goals similar to EA's to be biased towards HRAD research for reasons unrelated to the underlying state of the world.

Is this a legitimate concern? What else has been written on this issue?

[1] To expand on this a bit: LW-style rationality often underperforms accumulated heuristics, experience, and domain knowledge in established fields, and probably does best in new fields where quantification is valuable, with high uncertainty, low societal incentives to get a correct answer, dissimilarity to ancestral environments, and a high propensity to cognitive biases/emotional responses. I think almost all of these descriptors are true for the EA movement.

Replies from: ryan_b
comment by ryan_b · 2019-07-19T16:00:49.513Z · LW(p) · GW(p)

The intrinsic connection is primarily that they arose out of the same broad community, and there is heavy overlap between personnel as a consequence.

I say this is not a worrying sign, because the comparison isn't between their shared methods and some better memeplex, it is between their shared methods and the status quo. That is to say, there's no reason to believe something else would be happening in place of this; more likely everyone would have scattered throughout what was already happening.

It's very important to distinguish those etceteras you listed, because those are three different worlds. In the world where AI risk is in fact low, HRAD can still be very successful in mitigating it further and also fruitful in thinking about similar risks. In the world where the underlying philosophical assumptions are wrong, demonstrating that wrongness is valuable in-and-of itself to the greater safety project. In the world where alternate safety approaches have higher tractability, how would we even tell without comparison to the challenges encountered in HRAD?

HRAD is also the product of specific investigations into tractability and philosophical soundness. I expect they will iterate on these very questions again in the future. If it winds up a dead end I expect the associated communities to notice and then to shift focus elsewhere.

To sum up, we have noticed the skulls. Hail, humanity! We who are about to die salute you.


Replies from: liam-donovan
comment by Liam Donovan (liam-donovan) · 2019-07-20T04:35:50.283Z · LW(p) · GW(p)
The intrinsic connection is primarily that they arose out of the same broad community, and there is heavy overlap between personnel as a consequence.

I disagree with this though! I think anyone that wants to think along EA lines is inevitably going to want to investigate how to improve epistemic rationality, which naturally leads to thinking about decision making for idealized agents. Having community overlap is one thing, but the ideas seem so closely related that EA can't develop in any possible world without being biased towards HRAD research.

It's very important to distinguish those etceteras you listed

I mean surely there would be some worlds in which HRAD research was not the most valuable use of (some portion of*) EA money; it doesn't really matter whether the specific examples I gave work, just that EA would be unable to distinguish worlds where HRAD is an optimal use of resources from the world where it is not.

I expect the associated communities to notice and then to shift focus elsewhere.

But why? Is it not at all concerning that aliens with no knowledge of Earth or humanity could plausibly guess that a movement dedicated to a maximizing, impartial, welfarist conception of the good [EA · GW] would also be intrinsically attracted to learning about idealized reasoning procedures? The link between them is completely unconnected to the object-level question "is HRAD research the best use of [some] EA money?", or even to the specifics of how the LW/EA communities formed around specific personalities in this world.


Replies from: ryan_b
comment by ryan_b · 2019-07-24T15:09:30.790Z · LW(p) · GW(p)

I don't understand the source of your concern.

Is it not at all concerning that aliens with no knowledge of Earth or humanity could plausibly guess that a movement dedicated to a maximizing, impartial, welfarist conception of the good [EA · GW] would also be intrinsically attracted to learning about idealized reasoning procedures?

This is not at all concerning. If we are concerned about this then we should also be concerned that aliens could plausibly guess a movement dedicated to space exploration would be intrinsically attracted to learning about idealized dynamical procedures. It seems to me this is just a prior that groups with a goal investigate instrumentally useful things.

My model of your model so far is this: because the EA community is interested in LessWrong, and because LessWrong facilitated the group that works on HRAD research, the EA community will move its practices closer to the implications of this research even in the case where it is wrong. Is that accurate?

My expectation is that EAs will give low weight to the details of HRAD research, even in the case where it is a successful program. The biggest factor is the timelines: HRAD research is in service of the long term goal of reasoning correctly about AGI; EA is about doing as much good as possible, as soon as possible. The iconic feature of the EA movement is the giving pledge, which is largely predicated on the idea that money given now is more impactful than money given later. There is a lot of discussion about alternatives and different practices, for example the donor's dilemma [EA · GW] and mission hedging [EA · GW], but these are operational concerns rather than theoretical/idealized ones.

Even if I assume HRAD is a productive line of research, I strongly expect that the path to changing EA practice starts from some surprising result, evaluated all the way up to the level of employment and investment decisions. This means the result would need to be surprising, then it would need to withstand scrutiny, and then it would need to lead to conclusions big enough to shift activity like donations, employment, and investments, costs of change included. I would be deeply shocked if this happened, and then further shocked if it had a broad enough impact to change the course of EA as a group.