We run the Center for Applied Rationality, AMA
post by AnnaSalamon · 2019-12-19T16:34:15.705Z · LW · GW · 324 comments
CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
- Why we think there should be a CFAR;
- Whether we should change our name to be less general;
- How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientists" type workshops. Why we both do a lot of recruiting/education for AI alignment research and wouldn't be happy doing only that.
- How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.
Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!
(You might also want to check out our 2019 Progress Report and Future Plans [LW · GW]. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.)
[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]
324 comments
comment by Ben Pace (Benito) · 2019-12-20T21:56:55.464Z · LW(p) · GW(p)
I feel like one of the most valuable things we have on LessWrong is a broad, shared epistemic framework: ideas with which we can take steps through concept-space together and reach important conclusions more efficiently than other intellectual spheres, e.g. ideas about decision theory, ideas about overcoming coordination problems, etc. I believe all of the founding staff of CFAR had read the Sequences and were versed in things like what it means to ask where you got your bits of evidence from, that correctly updating on the evidence has a formal meaning, and had absorbed a model of Eliezer's law-based approach [LW · GW] to reasoning about your mind and the world.
In recent years, when I've been at CFAR events, I generally feel like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, and don't have an understanding of that law-based approach, and that they don't have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on. I also have felt this way increasingly about CFAR staff over the years (e.g. it's not clear to me whether all current CFAR staff have read The Sequences). And to be clear, I think if you don't have a shared epistemic framework, you often just can't talk to each other very well about things that aren't highly empirical, certainly not at the scale of more than like 10-20 people.
So I've been pretty confused by why Anna and other staff haven't seemed to think this is very important when designing the intellectual environment at CFAR events. I'm interested to know how you think about this?
I certainly think a lot of valuable introspection and modelling work still happens at CFAR events, I know I personally find it useful, and I think that e.g. CFAR has done a good job in stealing useful things from the circling people (I wrote about my positive experiences circling here [LW(p) · GW(p)]). But my sense for a number of the attendees is that even if they keep introspecting and finding out valuable things about themselves, 5 years from now they will not have anything to add to our collective knowledge-base (e.g. by writing a LW sequence that LWers can understand and get value from), even to a LW audience who considers all bayesian evidence admissible even if it's weird or unusual, because they were never trying to think in a way that could be communicated in that fashion. The Gwerns and the Wei Dais and the Scott Alexanders of the world won't have learned anything from CFAR's exploration.
As an example of this, Val (who was a cofounder but doesn't work at CFAR any more) seemed genuinely confused [LW(p) · GW(p)] when Oli asked for third-party verifiable evidence for the success of Val's ideas about introspection. Oli explained that there was a lemons problem (i.e. information asymmetry) when Val claimed that a mental technique has changed his life radically, when all of the evidence he offers is of the kind "I feel so much better" and "my relationships have massively improved" and so on. (See Scott's Review of All Therapy Books for more of what I mean here, though I think this is a pretty standard idea.) He seemed genuinely confused why Oli was asking for third-party verifiable evidence, and seemed genuinely surprised that claims like "This last September, I experienced enlightenment. I mean to share this as a simple fact to set context" would be met with a straight "I don't believe you." This was really worrying to me, and it's always been surprising to me that this part of him fit naturally into CFAR's environment and that CFAR's natural antibodies weren't kicking against it hard.
To be clear, I think several of Val's posts in that sequence were pretty great (e.g. The Intelligent Social Web is up for the 2018 LW review, and you can see Jacob Falkovich's review on how the post changed his life [LW(p) · GW(p)]), and I've personally had some very valuable experiences with Val at CFAR events, but I expect, had he continued in this vein at CFAR, that over time Val would just stop being able to communicate with LWers, and drift into his own closed epistemic bubble, and to a substantial degree pull CFAR with him. I feel similarly about many attendees at CFAR events, although fewer since Val left. I never talked to Pete Michaud very much, and while I think he seemed quite emotionally mature (I mean that sincerely) he seemed primarily interested in things to do with authentic relating and circling, and again I didn't get many signs that he understood why building explicit models or a communal record of insights and ideas was important, and because of this it was really weird to me that he was executive director for a few years.
To put it another way, I feel like CFAR has in some ways given up on the goals of science, and moved toward the goals of a private business, whereby you do some really valuable things yourself when you're around, and create a lot of value, but all the knowledge you gain about building a company, about your market, about markets in general, and more, isn't very communicable, and isn't passed on in the public record for other people to build on (e.g. see the difference between how all scientists are in a race to be first to add their ideas to the public domain, whereas Apple primarily makes everyone sign NDAs and not let out any information other than releasing their actual products, and I expect Apple will take most of their insights to the grave).
Replies from: AnnaSalamon, AnnaSalamon, Duncan_Sabien, AnnaSalamon, AnnaSalamon, orthonormal, ricraz
↑ comment by AnnaSalamon · 2019-12-21T11:14:44.867Z · LW(p) · GW(p)
This is my favorite question of the AMA so far (I said something similar aloud when I first read it, before it got upvoted quite this highly, and a couple of other staff members said the same). The things I personally appreciate about your question are: (1) it points near a core direction that CFAR has already been intending to try moving toward this year (and probably across near-subsequent years; one year will not be sufficient); and (2) I think you asking it publicly in this way (and giving us an opportunity to make this intention memorable and clear to ourselves, and to parts of the community that may help us remember) will help at least some with our moving there.
Relatedly, I like the way you lay out the concepts.
Your essay (I mean, “question”) is rather long, and has a lot of things in it; and my desired response sure also has a lot of things in it. So I’m going to let myself reply via many separate discrete small comments because that’s easier.
(So: many discrete small comments upcoming.)
↑ comment by AnnaSalamon · 2019-12-21T14:16:54.136Z · LW(p) · GW(p)
Ben Pace writes:
In recent years, when I've been at CFAR events, I generally feel like at least 25% of attendees probably haven't read The Sequences, aren't part of this shared epistemic framework, and don't have any understanding of that law-based approach, and that they don't have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on.
The “many alumni haven't read the Sequences” part has actually been here since very near the beginning (not the initial 2012 minicamps, but the very first paid workshops of 2013 and later). (CFAR began in Jan 2012.) You can see it in our old end-of-2013 fundraiser post [LW · GW], where we wrote “Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality -- such as a local politician, a police officer, a Spanish teacher, and others -- are by and large quite happy with the workshop and feel it is valuable.” We didn't name this explicitly in that post, but part of the hope was to get the workshops to work for a slightly larger/broader/more cognitively diverse set than the set for whom the original Sequences in their written form tended to spontaneously "click".
As to the “aren’t part of this shared epistemic framework” -- when I go to e.g. the alumni reunion, I do feel there are basic pieces of this framework at least that I can rely on. For example, even on contentious issues, 95%+ of alumni reunion participants seem to me to be pretty good at remembering that arguments should not be like soldiers, that beliefs are for true things, etc. -- there is to my eyes a very noticeable positive difference between the folks at the alumni reunion and unselected-for-rationality smart STEM graduate students, say (though STEM graduate students are also notably more skilled than the general population at this, and though both groups fall short of perfection).
Still, I agree that it would be worthwhile to build more common knowledge and [whatever the “values” analog of common knowledge is called] supporting “a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on” and that are piecewise-checkable (rather than opaque masses of skills that are useful as a mass but hard to build across people and time). This particular piece of culture is harder to teach to folks who are seeking individual utility, because the most obvious payoffs are at the level of the group and of the long-term process rather than at the level of the individual (where the payoffs to e.g. goal-factoring and murphyjitsu are located). It also pays off more in later-stage fields and less in the earliest stages of science within preparadigm fields such as AI safety, where it’s often about shower thoughts and slowly following inarticulate hunches. But still.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T09:45:07.272Z · LW(p) · GW(p)
that CFAR's natural antibodies weren't kicking against it hard.
Some of them were. This was a point of contention in internal culture discussions for quite a while.
(I am not currently a CFAR staff member, and cannot speak to any of the org's goals or development since roughly October 2018, but I can speak with authority about things that took place from October 2015 up until my departure at that time.)
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2019-12-21T13:43:44.260Z · LW(p) · GW(p)
Agreed
Replies from: adam_scholl
↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T04:26:03.622Z · LW(p) · GW(p)
Yeah, I predict that if one showed Val or Pete the line about fitting naturally into CFAR’s environment without triggering antibodies, they would laugh hard and despairingly. There was definitely friction.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T05:59:28.309Z · LW(p) · GW(p)
What did this friction lead to (what changes in CFAR’s output, etc.)?
↑ comment by AnnaSalamon · 2019-12-21T13:42:41.379Z · LW(p) · GW(p)
Ben Pace writes:
“... The Gwerns and the Wei Dais and the Scott Alexanders of the world won't have learned anything from CFAR's exploration.”
I’d like to distinguish two things:
1. Whether the official work activities CFAR staff are paid for will directly produce explicit knowledge in the manner valued by Gwern etc.
2. Whether that CFAR work will help educate people who later produce explicit knowledge themselves in the manner valued by Gwern etc., and who wouldn't have produced that knowledge otherwise.
#1 would be useful but isn’t our primary goal (though I think we’ve done more than none of it). #2 seems like a component of our primary goal to me (“scientists” or “producers of folks who can make knowledge in this sense” isn’t all we’re trying to produce, but it’s part of it), and is part of what I would like to see us strengthen over the coming years.
To briefly list our situation with respect to whether we are accomplishing #2 (according to me):
- There are in fact a good number of AI safety scientists in particular who seem to me to produce knowledge of this type, and who give CFAR some degree of credit for their present tendency to do this.
- On a milder level, while CFAR workshops do not themselves teach most of the Sequences’ skills (which would exceed four days in length, among other difficulties), we do try to nudge participants into reading the Sequences (by referencing them with respect at the workshop, by giving all mainline participants and AIRCS participants paper copies of “How to Actually Change Your Mind” and HPMOR, and by explicitly claiming they are helpful for various things).
- At the same time, I do think we should make Sequences-style thinking a more explicit element of the culture spread by CFAR workshops, and of the culture that folks can take for granted at e.g. alumni reunions (although it is there nevertheless to quite an extent).
(I edited this slightly to make it easier to read after Kaj had already quoted from it.)
Replies from: adam_scholl, Kaj_Sotala
↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T11:26:23.819Z · LW(p) · GW(p)
I think a crisp summary here is: CFAR is in the business of helping create scientists, more than the business of doing science. Some of the things it makes sense to do to help create scientists look vaguely science-ish, but others don't. And this sometimes causes people to worry (understandably, I think) that CFAR isn't enthused about science, or doesn't understand its value.
But if you're looking to improve a given culture, one natural move is to explore that culture's blindspots. And I think exploring those blindspots is often not going to look like an activity typical of that culture.
An example: there's a particular bug I encounter extremely often at AIRCS workshops, but rarely at other workshops. I don't yet feel like I have a great model of it, but it has something to do with not fully understanding how words have referents at different levels of abstraction. It's the sort of confusion that I think reading A Human's Guide to Words [? · GW] often resolves in people, and which results in people asking questions like:
- "Should I replace [my core goal x] with [this list of "ethical" goals I recently heard about]?"
- "Why is the fact that I have a goal a good reason to optimize for it?"
- "Are propositions like 'x is good' or 'y is beautiful' even meaningful claims?"
When I encounter this bug I often point to a nearby tree, and start describing it at different levels of abstraction. The word "tree" refers to a bunch of different related things: to a member of an evolutionarily-related category of organisms, to the general sort of object humans tend to emit the phonemes "tree" to describe, to this particular mid-sized physical object here in front of us, to the particular arrangement of particles that composes the object, etc. And it's sensible to use the term "tree" anyway, as long as you're careful to track which level of abstraction you're referring to with a given proposition—i.e., as long as you're careful to be precise about exactly which map/territory correspondence you're asserting.
This is obvious to most science-minded people. But it's often less obvious that the same procedure, with the same carefulness, is needed to sensibly discuss concepts like "goal" and "good." Just as it doesn't make sense to discuss whether a given tree is "strong" without distinguishing between "in terms of its likelihood of falling over" or "in terms of its molecular bonds," it doesn't make sense to discuss whether a goal is "good" without distinguishing between e.g. "relative to societal consensus" or "relative to your current preferences" or "relative to the preferences you might come to have given more time to think."
This conversation often seems to help resolve the confusion. At some point, I may design a class about this, so that more such confusions can be resolved. But I expect that if I do, some of the engineers in the audience will get nervous, since it will look an awful lot like a philosophy class! (I already get this objection regularly one-on-one.) That is, I expect some may wonder whether the AIRCS staff, who claim to be running workshops for engineers, are actually more enthusiastic about philosophy than engineering.
We're not. Academic philosophy, at least, strikes me as an unusually unproductive field with generally poor epistemics. I don't want to turn the engineers into philosophers—I just want to use a particular helpful insight from philosophy to patch a bug which, for whatever reason, seems to commonly afflict AIRCS participants.
CFAR faces this dilemma a lot. For example, we spent a bunch of time circling for a while, and this made many rationalists nervous—was CFAR as an institution, which claimed to be running workshops for science-minded, sequences-reading, law-based-reasoning-enthused rationalists, actually more enthusiastic about woo-laden authentic relating games?
We weren't. But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds. This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally. So we looked to specialists in extrospection for a patch.
Replies from: RobbBB, adam_scholl, Benito, Wei_Dai, SaidAchmiz, howie-lempel
↑ comment by Rob Bensinger (RobbBB) · 2019-12-22T15:14:00.209Z · LW(p) · GW(p)
I felt a "click" in my brain reading this comment, like an old "something feels off, but I'm not sure what" feeling about rationality techniques finally resolving itself.
If this comment were a post, and I were in the curating-posts business, I'd curate it. The demystified concrete examples of the mental motion "use a tool from an unsciencey field to help debug scientists" are super helpful.
Replies from: mr-hire, RobbBB
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T15:38:44.454Z · LW(p) · GW(p)
Just want to second that I think this comment is particularly important. There's a particular bug where I can get inoculated to a whole class of useful rationality interventions that don't match my smell for "rationality intervention", but the whole reason they're a blindspot in the first place is because of that smell... or something.
↑ comment by Rob Bensinger (RobbBB) · 2019-12-22T15:17:29.545Z · LW(p) · GW(p)
I feel like this comment should perhaps be an AIRCS class -- not on meta-ethics, but on 'how to think about what debugging your brain is, if your usual ontology is "some activities are object-level engineering, some activities are object-level science, and everything else is bullshit or recreation"'. (With meta-ethics addressed in passing as a concrete example.)
↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T11:26:41.995Z · LW(p) · GW(p)
(To be clear the above is an account of why I personally feel excited about CFAR having investigated circling. I think this also reasonably describes the motivations of many staff, and of CFAR's behavior as an institution. But CFAR struggles with communicating research intuitions, too; I think in this case these intuitions did not propagate fully among our staff, and as a result that we did employ a few people for a while whose primary interest in circling seemed to me to be more like "for its own sake," and who sometimes discussed it in ways which felt epistemically unhealthy to me. I think people correctly picked up on this as worrying, and I don't want to suggest that didn't happen; just that there is, I think, a sensible reason why CFAR as an institution tends to investigate local blindspots by searching for non-locals with a patch, thereby alarming locals about our epistemic allegiance).
↑ comment by Ben Pace (Benito) · 2019-12-24T23:48:54.791Z · LW(p) · GW(p)
Thanks, that was really helpful. I continue to have a sense of disagreement that this is the right way to do things, so I’ll try to point to some of that. Unfortunately my comment here is not super focused, though I am just trying to say a single thing.
I recently wrote down a bunch of my thoughts about evaluating MIRI, and I realised that I think MIRI has gone through alternating phases of internal concentration and external explanation, in a way that feels quite healthy to me.
Here is what I said [LW(p) · GW(p)]:
In the last 2-5 years I endorsed donating to MIRI (and still do), and my reasoning back then was always of the type "I don't understand their technical research, but I have read a substantial amount of the philosophy and worldview that was used to successfully pluck that problem out of the space of things to work on [? · GW], and think it is deeply coherent and sensible and it's been surprisingly successful in figuring out AI is an x-risk, and I expect to find it is doing very sensible things in places I understand less well." Then, about a year ago, MIRI published the Embedded Agency sequence [? · GW], and for the first time I thought "Oh, now I feel like I have an understanding of what the technical research is guided by, and what it's about, and indeed, this makes a lot of sense." My feelings have rarely been changed by reading ongoing research papers at MIRI, which were mostly just very confusing to me. They all seemed individually interesting, but I didn't see the broader picture before Embedded Agency.
I continued:
So, my current epistemic state is something like this: Eliezer and Benya and Patrick and others spent something like 4 MIRI-years hacking away at research, and I didn't get it. Finally Scott and Abram made some further progress on it, and crystalised it into an explanation I actually felt I sorta get. And most of the time I spent trying to understand their work in the meantime was wasted effort on my part, and quite plausibly wasted effort on their part. I remember that time they wrote up a ton of formal-looking papers for the Puerto Rico conference, to be ready in case a field suddenly sprang around them... but then nobody really read them or built on them. So I don't mind if, in the intervening 3-4 years, they again don't really try to explain what they're thinking about to me, until a bit of progress and good explanation comes along. They'll continue to write things about the background worldview, like Security Mindset, and Rocket Alignment, and Challenges to Christiano's Capability Amplification Proposal, and all of the beautiful posts that Scott and Abram write, but overall focus on getting a better understanding of the problem by themselves.
I think the output of this pattern is perhaps the primary way I evaluate whether I think MIRI is making progress.
I'll just think aloud a little bit more on this topic:
- Writing by CFAR staff, for example a lot of comments on this post by Adam, Anna, Luke, Brienne, and others, is one of the primary ways I update my model of how these people are thinking, and how interesting and promising it feels to me. I’m not talking about “major conclusions” or “rigorously evidenced results”, but just stuff like the introspective data Luke uses when evaluating whether he’s making progress (I really liked that comment), or Dan saying what posts are top of his mind that he would write to LW. Adam’s comments about looking for blind spots and Anna’s comment about tacit/explicit are even more helpful, but the difference between their comments and Luke/Dan’s comments isn’t as large as between Luke/Dan’s comments and nothing.
- It’s not that I don’t update on in-person tacit skills and communication; of course that’s a major factor I use when thinking about who to trust and in what way. But especially as I’m thinking of groups of people, over many years, doing research, I’m increasingly interested in whether they’re able to make records of their thoughts that communicate well with each other - how much writing they do. This kind of more legible tracking of ideas and thought is pretty key to me. This is in part because I think I personally would have a very difficult time doing research without good, long-term, external working memory, and also from some of my models of the difficulty of coordinating groups.
- Adam’s answer above is both intrinsically interesting and very helpful for discussing this topic. But when I asked myself if it felt sufficient to me, I said no. It matters to me a lot whether I expect CFAR to try hard to do the sort of translational work into explicit knowledge at some point down the line, like MIRI has successfully done multiple times and explicitly intends to do in future. CFAR and CFAR alumni explore a lot of things that typically signal having lost contact with scientific materialism and standards for communal, public evidence like “enlightenment”, “chakras”, “enneagram”, and “circling”. I think walking out into the hinterlands and then returning with the gold that was out there is great, and I’ve found one of those five things quite personally valuable - on a list that was selected for being the least promising on surface appearances - and I have a pretty strong “Rule Thinkers In, Not Out” vibe around that. But if CFAR never comes back and creates the explicit models, then it’s increasingly looking like 5, 10, 20 years of doing stuff that looks (from the outside) similar to most others who have tried to understand their own minds, and who have largely lost a hold of reality. The important thing is that I can’t tell from the outside whether this stuff turned out to be true. This doesn’t mean that you can’t either (from the inside), but I do think there’s often a surprising pairing where accountability and checkability end up actually being one of the primary ways you find out for yourself whether what you’ve learned is actually true and real.
- Paul Graham has a line, where he says startups should “take on as much technical debt as they can, and no more”. He’s saying that “avoiding technical debt” is not a virtue you should aspire to, and that you should let that slide in service of quickly making a product that people love, while of course admitting that there is some boundary line that if you cross, is just going to stop your system from working. If I apply that idea to research and use Chris Olah’s term “research debt”, the line would be “you should take on as much research debt as possible, and no more”. I don’t think all of CFAR’s ideas should be explicit, or have rigorous data tracking, or have randomised controlled trials, or be cached out in the psychological literature (which is a mess), and it’s fine to spend many years going down paths that you can’t feasibly justify to others. But, if you’re doing research, trying to take difficult steps in reasoning, I think you need to come back at some point and make a thing that others can build on. I don’t know how many years you can keep going without coming back, but I'd guess like 5 is probably as long as you want to start with.
- I guess I should’ve said this earlier, but there are also few things as exciting for me as when CFAR staff write their ideas about rationality into posts. I love reading Brienne’s stuff and Anna’s stuff, and I liked a number of Duncan’s things (Double Crux, Buckets and Bayes, etc). (It’s plausible this is more of my motivation than I let on, although I stand by everything I said above as true.) I think many other LessWrongers are also very excited about such writing. I assign some probability that this itself is a big enough reason that should cause CFAR to want to write more (growth and excitement have many healthy benefits for communities).
- As part of CFAR’s work to research and develop an art of rationality (as I understand it, CFAR staff think of part of the work as being research, e.g. Brienne’s comment below), if it was an explicit goal to translate many key insights into explicit knowledge, then I would feel far more safe and confident around many of the parts that seem, on first and second checking, like they are clearly wrong. If it isn’t, then I feel much more ‘all at sea’.
- I’m aware that there are more ways of providing your thinking with good feedback loops than “explaining your ideas to the people who read LW”. You can find other audiences. You can have smaller groups of people you trust and to whom you explain the ideas. You can have testable outcomes. You can just be a good enough philosopher to set your own course through reality and not look to others for whether it makes sense to them. But from my perspective, without blogposts like those Anna and Brienne have written in the past, I personally am having a hard time telling whether a number of CFAR’s focuses are on the right track.
I think I’ll say something similar to what Eliezer said at the bottom of his public critique of Paul's work, and mention that from my epistemic vantage point, even given my disagreements, I think CFAR has had and continues to have surprisingly massive positive effects on the direction and agency of me and others trying to reduce x-risk, and I think that they should definitely be funded to do all the stuff they do.
Replies from: adam_scholl
↑ comment by Adam Scholl (adam_scholl) · 2019-12-27T07:44:14.153Z · LW(p) · GW(p)
Ben, to check before I respond—would a fair summary of your position be, "CFAR should write more in public, e.g. on LessWrong, so that A) it can have better feedback loops, and B) more people can benefit from its ideas?"
↑ comment by Wei Dai (Wei_Dai) · 2019-12-22T22:57:25.391Z · LW(p) · GW(p)
Philosophy strikes me as, on the whole, an unusually unproductive field full of people with highly questionable epistemics.
This is kind of tangential, but I wrote Some Thoughts on Metaphilosophy [LW · GW] in part to explain why we shouldn't expect philosophy to be as productive as other fields. I do think it can probably be made more productive, by improving people's epistemics, their incentives for working on the most important problems, etc., but the same can be said for lots of other fields.
I certainly don’t want to turn the engineers into philosophers
Not sure if you're saying that you personally don't have an interest in doing this, or that it's a bad idea in general, but if the latter, see Counterintuitive Comparative Advantage [LW · GW].
Replies from: adam_scholl
↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T00:55:30.511Z · LW(p) · GW(p)
I have an interest in making certain parts of philosophy more productive, and in helping some alignment engineers gain some specific philosophical skills. I just meant I'm not in general excited about making the average AIRCS participant's epistemics more like that of the average professional philosopher.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T14:20:09.913Z · LW(p) · GW(p)
I was preparing to write a reply to the effect of “this is the most useful comment about what CFAR is doing and why that’s been posted on this thread yet” (it might still be, even)—but then I got to the part where your explanation takes a very odd sort of leap.
But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds.
It’s entirely unclear to me what this means, or why it is necessary / desirable. (Also, it seems like you’re using the term ‘extrospection’ in a quite unusual way; a quick search turns up no hits for anything like the definition you just gave. What’s up with that?)
This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally.
There… seems to be quite a substantial line of reasoning hidden here, but I can’t guess what it is. Could you elaborate?
So we looked to specialists in extrospection for a patch.
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)
In short: I’m an engineer (my background is in computer science), and I’ve also studied philosophy. It’s clear enough to me why certain things that look like ‘philosophy’ can be, and are, useful in practice (despite agreeing with you that philosophy as a whole is, indeed, “an unusually unproductive field”). And we do, after all, have the Sequences, which is certainly philosophy if it’s anything (whatever else it may also be).
But I don’t at all see what the case for the usefulness of ‘circling’ and similar woo might be. Your comment makes me more worried, not less, on the whole; no doubt I am not alone. Perhaps elaborating a bit on my questions above might help shed some light on these matters.
Replies from: elityre, adam_scholl, ChristianKl, mr-hire
↑ comment by Eli Tyre (elityre) · 2019-12-24T23:05:34.332Z · LW(p) · GW(p)
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)
I'm speaking for myself here, not any institutional view at CFAR.
When I'm looking at maybe-experts, woo-y or otherwise, one of the main things that I'm looking at is the nature and quality of their feedback loops.
When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason "well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct." This doesn't seem that far off from what Circling is. (For instance, "I have a story that you're feeling defensive" -> "I don't feel defensive, so much as righteous. And...There's a flowering of heat in my belly.")
Circling does not seem like a perfect training regime, to my naive sensors, but if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that they would get increasingly skilled along a particular axis.
This makes it seem worthwhile training with masters in that domain, to see what skills they bring to bear. And I might find out that some parts of the practice which seemed off the mark from my naive projection of how I would design a training environment, are actually features, not bugs.
This is in contrast to say, "energy healing". Most forms of energy healing do not have the kind of feedback loop that would lead to a person acquiring skill along a particular axis, and so I would expect them to be "pure woo."
For that matter, I think a lot of "Authentic Relating" seems like a much worse training regime than Circling, for a number of reasons, including that AR (ironically) seems to more often incentivize people to share warm and nice-sounding, but less-than-true, sentiments than Circling does.
Replies from: SaidAchmiz, elityre
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-25T01:00:19.152Z · LW(p) · GW(p)
When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason “well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct.” This doesn’t seem that far off from what Circling is. (For instance, “I have a story that you’re feeling defensive” → “I don’t feel defensive, so much as righteous. And...There’s a flowering of heat in my belly.”)
Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.
(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)
Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address. Relatedly, you said:
This makes it seem worthwhile training with masters in that domain, to see what skills they bring to bear.
But are they masters?
Note the structure of your argument (which structure I have seen repeated quite a few times, in discussions of this and related topics, including in other sub-threads on this post). It goes like this:
1. There is a process P which purports to output X.
2. On the basis of various considerations, I expect that process P does indeed output X, and indeed that process P is very good at outputting X.
…
4. I now conclude that process P does output X, and does so quite well.
5. Having thus concluded, I will now adopt process P (since I want X).
But there’s a step missing, you see. Step 3 should be:
3. Let me actually check to see whether process P in fact does output X (and how well it does so).
So, in this case, you have marshaled certain considerations—
When I think about how, in principle, one would train good intuitions … I reason … if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that …
—and on the basis of this thinking, reasoning, imagining, and seeming, have concluded, apparently, that people who’ve done a lot of Circling are “masters” in the domain of having “good intuitions about what other people are feeling at any given moment”.
But… are they? Have you checked?
Where is the evidence?
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2019-12-25T03:12:20.807Z · LW(p) · GW(p)
I'm going to make a general point first, and then respond to some of your specific objections.
General point:
One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge), that are relevant to problems that we care about.
But there's a lot of that in the world, and most of it is not very reliable. My response only points at a heuristic that I use in assessing those bodies of knowledge, and weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.
And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)
Specific points:
Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.
On the face of it, I wouldn't assume that it is reliable, but I don't have that strong a reason to assume that it isn't a priori.
A posteriori, my experience being in Circles is that there is sometimes an incentive to obscure what's happening for you in a circle, but that, at least with skilled facilitation, there is usually enough trust in the process that that doesn't happen. This is helped by the fact that there are many degrees of freedom in terms of one's response: I might say, "I don't want to share what's happening for me" or "I notice that I don't want to engage with that."
I could be typical minding, but I don't expect most people to lie outright in this context.
(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)
That seems like a reasonable hypothesis.
Not sure if it's a crux, insofar as if something works well in circling, you can intentionally import the circling context. That is, if you find that you can in fact transfer intuitions, process fears, track what's motivating a person, etc., effectively in the circling context, an obvious next step might be to try and do this on topics that you care about, in the circling context, e.g. Circles on X-risk.
In practice it seems to be a little bit of both: I've observed people build skills in circling, that they apply in other contexts, and also their other contexts do become more circling-y.
Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address.
Sorry, I wasn't really trying to give a full response to your question, just dropping in with a little "here's how I do things."
You're referring to this question?
What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)
I expect there's some talking past each other going on, because this question seems surprising to me.
Um. I don't think there are examples of their output with regard to research or research intuitions. The Circlers aren't trying to do that, even a little. They're a funny subculture that engages a lot with an interpersonal practice, with the goals of fuller understanding of self and deeper connections with others (roughly; I'm not sure that they would agree that those are the goals).
But they do pass some of my heuristic checks for "something interesting might be happening here." So I might go investigate and see what skill there is over in there, and how I might be able to re-purpose that skill for other goals that I care about.
Sort of like (I don't know) if I was a biologist in an alternative world, and I had an inkling that I could do population simulations on a computer, but I don't know anything about computers. So I go look around and I see who does seem to know about computers. And I find a bunch of hobbyists who are playing with circuits and making very simple video games, and have never had a thought about biology in their lives. I might hang out with these hobbyists and learn about circuits and making simple computer games, so that I can learn skills for making population simulations.
This analogy doesn't quite hold up, because it's easier to verify that the hobbyists are actually successfully making computer games, and to verify that their understanding of circuits reflects standard physics. The case of the Circlers is less clean cut, because it is less obvious that they are doing anything real, and because their own models of what they are doing and how are a lot less grounded.
But I think the basic relationship holds up, noting that figuring out which groups of hobbyists are doing real things is much trickier.
Maybe to say it clearly: I don't think it is obvious, or a slam dunk, or definitely the case (and if you don't think so then you must be stupid or misinformed) that "Circling is doing something real." But also, I have heuristics that suggest that Circling is more interesting than a lot of woo.
In terms of evidence that make me think Circling is interesting (which again, I don't expect to be compelling to everyone):
- Having decent feedback loops.
- Social evidence: A lot of people around me, including Anna, think it is really good.
- Something like "universality". (This is hand-wavy.) Circling is about "what's true", and has enough reach to express or to absorb any way of being or any way the world might be. This is in contrast to many forms of woo, which have an ideology baked into them that rejects ways the world could be a priori, for instance that "everything happens for a reason". (This is not to say that Circling doesn't have an ideology, or a metaphysics, but it is capable of holding more than just that ideology.)
- Circling is concerned with truth, and getting to the truth. It doesn't reject what's actually happening in favor of a nicer story.
- I can point to places where some people seem much more socially skilled, in ways that relate to circling skill.
- Pete is supposedly good at detecting lying.
- The thing I said about picking out people who "seemed to be doing something", and who turned out to be circlers.
- Somehow people do seem to cut past their own bullshit in circles, in a way that seems relevant to human rationality.
- I've personally had some (few) meaningful realizations in Circles
I think all of the above are much weaker evidence than...
- "I did x procedure, and got y, large, externally verifiable result",
or even,
- "I did v procedure, and got u, specific, good (but hard to verify externally) result."
These days, I generally tend to stick to doing things that are concretely and fairly obviously (if only to me) having good immediate effects. If there aren't pretty immediate, obvious effects, then I won't bother much with it. And I don't think circling passes that bar (for me at least). But I do think there are plenty of reasons to be interested in circling, for someone who isn't following that heuristic strongly.
I also want to say, while I'm giving a sort-of-defense of being interested in circling, that I'm, personally, only a little interested.
I've done some ~1000 hours of Circling retreats, for personal reasons rather than research reasons (though admittedly the two are often entangled). I think I learned a few skills, which I could have learned faster, if I knew what I was aiming for. My ability to connect / be present with (some) others, improved a lot. I think I also damaged something psychologically, which took 6 months to repair.
Overall, I concluded it was fine, but I would have done better to train more specific and goal-directed skills like NVC. Personally, I'm more interested in other topics, and other sources of knowledge.
Replies from: elityre, howie-lempel
↑ comment by Eli Tyre (elityre) · 2019-12-25T03:26:36.702Z · LW(p) · GW(p)
Some sampling of things that I'm currently investigating / interested in (mostly not for CFAR), and sources that I'm using:
- Power and propaganda
  - reading the Dictator's Handbook and some of the authors' other work.
  - reading Kissinger's books
  - rereading Samo's draft
  - some "evil literature" (an example of which is "things Brent wrote")
  - thinking and writing
- Disagreement resolution and conversational mediation
  - I'm currently looking into some NVC materials
  - lots and lots of experimentation and iteration
- Focusing, articulation, and aversion processing
  - Mostly iteration with lots of notes.
  - Things like PJ EBY's excellent ebook.
  - Reading other materials from the Focusing Institute, etc.
- Ego and what to do about it
  - Byron Katie's The Work (I'm familiar with this from years ago, it has an epistemic core (one key question is "Is this true?"), and PJ EBY mentioned using this process with clients.)
  - I might check out Eckhart Tolle's work again (which I read as a teenager)
- Learning
  - Mostly iteration as I learn things on the object level, right now, but I've read a lot on deliberate practice, and study methodology, as well as learned general learning methods from mentors, in the past.
  - Talking with Brienne.
  - Part of this project will probably include a lit review on spacing effects and consolidation.
- General rationality and stuff:
  - reading Artificial Intelligence: A Modern Approach
  - reading David Deutsch's The Beginning of Infinity
  - rereading IQ and Human Intelligence
  - The Act of Creation
  - Old Michael Vassar talks on YouTube
  - Thinking about the different kinds of knowledge creation, and how rigorous argument (mathematical proof, engineering schematics) works.
I mostly read a lot of stuff, without a strong expectation that it will be right.
Replies from: howie-lempel
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T16:16:38.196Z · LW(p) · GW(p)
Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.
Things like PJ EBY's excellent ebook.
FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to?
The best links I could find quickly were:
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2019-12-26T00:56:49.548Z · LW(p) · GW(p)
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T16:06:19.471Z · LW(p) · GW(p)
I think I also damaged something psychologically, which took 6 months to repair.
I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.
I expect, though, that this is too sensitive/personal so please feel free to ignore.
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2019-12-26T00:58:24.359Z · LW(p) · GW(p)
It's not sensitive so much as context-heavy, and I don't think I can easily go into it in brief. I do think it would be good if we had a way to propagate different people's experiences of things like Circling better.
↑ comment by Eli Tyre (elityre) · 2019-12-24T23:12:52.590Z · LW(p) · GW(p)
Oh and as a side note, I have twice in my life had a short introductory conversation with a person, noticed that something unusual or interesting was happening (though I had no idea what), and then found out subsequently that the person I was talking with had done a lot of circling.
The first person was Pete, who I had a conversation with shortly after EAG 2015, before he came to work for CFAR. The other was an HR person at a tech company that I was cajoled into interviewing at, despite not really having any relevant skills.
I would be hard pressed to say exactly what was interesting about those conversations: something like "the way they were asking questions was...something. Probing? Intentional? Alive?" Those words really don't capture it, but whatever was happening, I had a detector that pinged "something about this situation is unusual."
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2020-12-12T07:44:27.455Z · LW(p) · GW(p)
Coming back to this, I think I would describe it as "they seemed like they were actually paying attention", which was so unusual as to be noteworthy.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T00:29:50.263Z · LW(p) · GW(p)
Said, I appreciate your point that I used the term "extrospection" in a non-standard way—I think you're right. The way I've heard it used, which is probably idiosyncratic local jargon, is to reference the theory of mind analog of introspection: "feeling, yourself, something of what the person you're talking with is feeling." You obviously can't do this perfectly, but I think many people find that e.g. it's easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about "mirror neurons," to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.
Similarly, I think it's often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one's own brain. Personally, I've found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people's vague, sub-verbal curiosities and intuitions about e.g. "which questions are most worth asking."
Circlers don't generally use this skill for research. But it is the primary skill, I think, that circling is designed to train, and my impression is that many circlers have become relatively excellent at it as a result.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-23T01:05:10.058Z · LW(p) · GW(p)
… something like the theory of mind analog of introspection: something like “feeling, yourself, something of what the person you’re talking with is feeling.” You obviously can’t do this perfectly, but I think many people find that e.g. it’s easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about “mirror neurons,” to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.
Hmm. I see, thanks.
Now, you say “You obviously can’t do this perfectly”, but it seems to me a dubious proposition even to suggest that anyone (to a first approximation) can do this at all. Even introspection is famously unreliable; the impression I have is that many people think that they can do the thing that you call ‘extrospection’[1], but in fact they can do no such thing, and are deluding themselves. Perhaps there are exceptions—but however uncommon you might intuitively think such exceptions are, they are (it seems to me) probably a couple of orders of magnitude less common than that.
Similarly, I think it’s often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one’s own brain. Personally, I’ve found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people’s vague, sub-verbal curiosities and intuitions about e.g. “which questions are most worth asking.”
Do you have any data (other than personal impressions, etc.) that would show or even suggest that this has any practical effect? (Perhaps, examples / case studies?)
By the way, it seems to me like coming up with a new term for this would be useful, on account of the aforementioned namespace collision. ↩︎
↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T04:07:53.358Z · LW(p) · GW(p)
Thanks for spelling this out. My guess is that there are some semi-deep cruxes here, and that they would take more time to resolve than I have available to allocate at the moment. If Eli someday writes that post about the Nisbett and Wilson paper [LW(p) · GW(p)], that might be a good time to dive in further.
↑ comment by ChristianKl · 2019-12-26T00:27:57.254Z · LW(p) · GW(p)
To do good UX you need to understand the mental models that your users have of your software. You can do that by doing a bunch of explicit A/B tests or you can do that by doing skilled user interviews.
A person who doesn't do skilled user interviews will project a lot of their own mental models of how the software is supposed to work on the users that might have other mental models.
There are a lot of things about how humans relate to the world around them that they normally don't share with other people. People with a decent amount of self-awareness know how they reason, but they don't know, at the same level, how other people reason.
Circling is about creating an environment where things can be shared that normally aren't. While people could in theory lie, it feels good to share one's intimate experience in a safe environment and be understood.
At one LWCW where I led two circles, there was a person who was in both and who afterwards said "in two cases I thought I was the only person who does X, and I now know that other people also do X".
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-26T01:39:19.975Z · LW(p) · GW(p)
Do you claim that people who have experience with Circling, are better at UX design? I would like some evidence for this claim, if so.
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-26T10:56:48.276Z · LW(p) · GW(p)
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling.
As far as the claim goes of getting better at UX design: UX of things were mental habits matter a lot. It's not as relevant to where you place your buttons but it's very relevant to designing mental intervention in the style that CFAR does.
Evidence is great, but we have few controlled studies of Circling.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-26T14:58:28.376Z · LW(p) · GW(p)
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling.
This is not an interesting claim. Ok, it’s ‘very similar’. And what of it? What follows from this similarity? What can we expect to be the case, given this? Does skill at Circling transfer to skill at conducting user interviews? How, precisely? What specific things do you expect we will observe?
Evidence is great, but we have few controlled studies of Circling.
So… we don’t have any evidence for any of these claims, in other words?
As far as the claim goes of getting better at UX design: UX of things were mental habits matter a lot. It’s not as relevant to where you place your buttons but it’s very relevant to designing mental intervention in the style that CFAR does.
I don’t think I quite understand what you’re saying, here (perhaps due to a typo or two). What does the term ‘UX’ even mean, as you are using it? What does “designing mental intervention” have to do with UX?
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T15:50:29.573Z · LW(p) · GW(p)
Not a CFAR staff member, but particularly interested in this comment.
It’s entirely unclear to me what this means, or why it is necessary / desirable.
One way to frame this would be getting really good at learning tacit knowledge.
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here?
One way would be to interact with them, notice "hey, this person is really good at this" and then inquire as to how they got so good. This is my experience with seasoned authentic relaters.
Another way would be to realize there's a hole in understanding related to intuitions, and then start searching around for "people who are claiming to be really good at understanding others' intuitions"; this might lead you to running into someone as described above and then seeing if they are indeed good at the thing.
But I don’t at all see what the case for the usefulness of ‘circling’ and similar woo might be.
Let's say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:
1. A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
2. A newbie designer who hasn't done that.
To me, that's the obvious use case for circling. I think there are also a bunch of obvious benefits on a group level to being able to relate to people better.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T16:28:58.145Z · LW(p) · GW(p)
One way to frame this would be getting really good at learning tacit knowledge.
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”? In fact, is there any reason to believe that being “really good at learning tacit knowledge” is a thing?
One way would be to interact with them, notice “hey, this person is really good at this” and then inquire as to how they got so good. This is my experience with seasoned authentic relaters.
Hmm, so in your experience, “seasoned authentic relaters” are really good at “simulating the felt senses of their conversational partners in their own minds”—is that right? If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?
Another way would be to realize there’s a hole in understanding related to intuitions
Can you say more about how you came to realize this?
Let’s say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:
A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
A newbie designer who hasn’t done that.
Well, my first step would be to stop wanting that, because it is not a sensible (or, perhaps, even coherent) thing to want.
However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T16:51:46.478Z · LW(p) · GW(p)
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”?
This requires some model of how intuitions work. One model I like to use is to think of "intuition" as something like a felt sense or aesthetic that relates to hundreds of little associations you're picking up from a particular situation.
If I'm quickly able, in my mind, to get a sense for what it feels like for you (i.e. get that same felt sense or aesthetic feel when looking at what you're looking at), and use circling-like tools to tease out which parts of the environment most contribute to that aesthetic feel, I can quickly create similar associations in my own mind and thus develop similar intuitions.
If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?
Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.
Can you say more about how you came to realize this?
I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I'm not sure how CFAR recognized it.
However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
I think this is a coherent stance if you think the general "learning intuitions" skill is impossible. But imagine if it weren't, would you agree that training it would be useful?
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T21:10:47.489Z · LW(p) · GW(p)
This requires some model of how intuitions work. One model I like to use is […]
Hmm. It’s possible that I don’t understand what you mean by “felt sense”. Do you have a link to any discussion of this term / concept?
That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions?
In other words, “my model of intuitions predicts X” is not a sufficient reason to believe X, unless those predictions have been borne out somehow, or the model validated empirically, or both. As always, some examples would be useful.
Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.
It is not clear to me whether this would be evidence (in the strict Bayesian sense); is it more likely that the people from whom I have heard such things would make these claims if they were true than otherwise? I am genuinely unsure, but even if the answer is yes, the odds ratio is low; if evidence, it’s a very weak form thereof.
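To put rough, illustrative numbers on that "weak evidence" point (H here stands for the claim in question and E for the reported testimony; the figures are purely for the sake of example, not from anything in this discussion): by Bayes' rule in odds form,
\[
\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)},
\]
so a likelihood ratio near 1 (say 1.2) moves prior odds of 1:4 (a 20% credence) only to 1.2:4, i.e. roughly a 23% credence. Evidence of that strength barely shifts one's beliefs.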
Conversely, if this sort of thing is the only form of evidence put forth, then that itself is evidence against, as it were!
I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I’m not sure how CFAR recognized it.
Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized” would be “let’s try to understand intuition, via circling etc.” rather than “let’s develop intuitions, via deliberate practice, whereupon the results will speak for themselves, and this will also lead to improved understanding”. (Corollary question: have the efforts made toward understanding intuitions yielded an improved emphasis on deliberate practice, and have the results thereof been positive and obvious?)
I think this is a coherent stance if you think the general “learning intuitions” skill is impossible. But imagine if it weren’t, would you agree that training it would be useful?
Indeed, I would, but notice that what you’re asking is different than what you asked before.
In your earlier comment, you asked whether I would find it useful (in the hypothetical “newbie designer” situation) to be dealing with someone who had undertaken a lot of “deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind”.
Now, you are asking whether I judge “training … the general ‘learning intuitions’ skill” to be useful.
Your questions imply that these are the same thing. But (even in the hypothetical case where there is such a thing as the latter) they are not!
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T23:32:06.065Z · LW(p) · GW(p)
The Wikipedia article for Gendlin's Focusing has a section trying to describe "felt sense". Leaving out the specific part about "the body", the first part says:
"Gendlin gave the name "felt sense" to the unclear, pre-verbal sense of "something"—the inner knowledge or awareness that has not been consciously thought or verbalized,"
which is fairly close to my use of it here.
That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions?
One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and learning side of things, and that by teaching them people get better at picking up intuitions.
Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized”
I do believe CFAR at one point was teaching deliberate practice and calling it "turbocharged training". However, if one is really interested in intuition and thinks it's useful, the next obvious step is to ask "ok, I have this blunt instrument for teaching intuition called deliberate practice, can we use an understanding of how intuitions work to improve upon it?"
Your questions imply that these are the same thing. But (even in the hypothetical case where there is such a thing as the latter) they are not!
Good catch; this assumes that my simplified model of how intuitions work is at least partly correct. If the felt sense you get from a particular situation doesn't relate to intuition, or if it's impossible for one human being to get better at feeling what another is feeling, then these are not equivalent. I happen to think both are true.
Replies from: SaidAchmiz, Benito↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-23T01:14:49.608Z · LW(p) · GW(p)
[Gendlin’s definition]
I see, thanks.
One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and learning side of things, and that by teaching them people get better at picking up intuitions.
Well, my question stands. That is a prediction, sure (if a vague one), but now how do we test it? What concrete observations would we expect, and which are excluded, etc.? What has actually been observed? I’m talking specifics, now; data or case studies—but in any case very concrete evidence, not generalities!
I do believe CFAR at one point was teaching deliberate practice and calling it “turbocharged training”. However, if one is really interested in intuition and thinks it’s useful, the next obvious step is to ask “ok, I have this blunt instrument for teaching intuition called deliberate practice, can we use an understanding of how intuitions work to improve upon it?”
Yes… perhaps this is true. Yet in this case, we would expect to continue to use the available instruments (however blunt they may be) until such time as sharper tools are (a) available, and (b) have been firmly established as being more effective than the blunt ones. But it seems to me like neither (a) (if I’m reading your “at one point” comment correctly), nor (b), is the case here?
Really, what I don’t think I’ve seen, in this discussion, is any of what I, in a previous comment, referred to as “the cake” [LW(p) · GW(p)]. This continues to trouble me!
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-23T03:08:50.225Z · LW(p) · GW(p)
I suspect the CFARians have more delicious cake for you, as I haven't put that much time into circling, and the related connection skills I worked on more than a decade ago have atrophied since.
Things I remember:
- much quicker connection with people
- there were a few things, like exercise, that I wasn't passionate about but wanted to be. After talking with people who were passionate, I was able to become passionate about those things myself
- I was able to more quickly learn social cognitive strategies by interacting with others who had them.
↑ comment by philh · 2019-12-23T10:19:29.472Z · LW(p) · GW(p)
To suggest something more concrete... would you predict that if an X-ist wanted to pass a Y-ist's ITT, they would have more success if the two of them sat down to circle beforehand? Relative to doing nothing, and/or relative to other possible interventions like discussing X vs Y? For values of X and Y like Democrat/Republican, yay-SJ/boo-SJ, cat person/dog person, MIRI's approach to AI/Paul Christiano's approach?
It seems to me that (roughly speaking) if circling was more successful than other interventions, or successful on a wider range of topics, that would validate its utility. Said, do you agree?
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-23T14:25:58.091Z · LW(p) · GW(p)
Yes, although I expect the utility of circling over other methods to be dependent on the degree to which the ITT is based on intuitions.
↑ comment by Ben Pace (Benito) · 2019-12-22T23:57:39.792Z · LW(p) · GW(p)
I always think of 'felt sense' as, not just pre-verbal intuitions, but intuitions associated with physical sensations, be they in my head, shoulders, stomach, etc.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-23T00:11:41.648Z · LW(p) · GW(p)
I think that Gendlin thinks all pre-verbal intuitions are represented with physical sensations.
I don't agree with him but still use the felt-sense language in these parts because rationalists seem to know what I'm talking about.
Replies from: adam_scholl, mr-hire↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T01:02:12.390Z · LW(p) · GW(p)
Yeah, same; I think this term has experienced some semantic drift, which is confusing. I meant to refer to pre-verbal intuitions in general, not just ones accompanied by physical sensation.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-23T02:54:44.734Z · LW(p) · GW(p)
Also, in particular: "felt sense" refers to the qualia related to intuitions, rather than to the intuitions themselves.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T03:59:31.263Z · LW(p) · GW(p)
(Unsure, but I'm suspicious that the distinction between these two things might not be clear).
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-23T14:43:08.751Z · LW(p) · GW(p)
Yes, I think there's a distinction between the semantic content of "My intuition is that Design A is better than Design B" (that is, how the intuition "caches out" in terms of decisions) and the felt sense, which always seems to refer to what the intuition is like "from the inside": for example, a sense of unease when looking at Design A and a sense of rightness when looking at Design B.
I feel like the word "intuition" can refer to both the former and the latter, whereas when I say "felt sense" it always refers to the latter.
↑ comment by Howie Lempel (howie-lempel) · 2019-12-22T15:08:02.915Z · LW(p) · GW(p)
"For example, we spent a bunch of time circling for a while"
Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T00:42:06.911Z · LW(p) · GW(p)
CFAR does spend substantially less time circling now than it did a couple years ago, yeah. I think this is partly because Pete (who spent time learning about circling when he was younger, and hence found it especially easy to notice the lack of circling-type skill among rationalists, much as I spent time learning about philosophy when I was younger and hence found it especially easy to notice the lack of philosophy-type skill among AIRCS participants) left, and partly I think because many staff felt like their marginal skill returns from circling practice were decreasing, so they started focusing more on other things.
↑ comment by Kaj_Sotala · 2019-12-21T14:12:45.835Z · LW(p) · GW(p)
Whether CFAR staff (qua CFAR staff, as above) will help educate people who later themselves produce explicit knowledge in the manner valued by Gwern, Wei Dai, or Scott Alexander, and who wouldn’t have produced (as much of) that knowledge otherwise.
This seems like a good moment to publicly note that I probably would not have started writing my multi-agent sequence [? · GW] without having a) participated in CFAR's mentorship training b) had conversations with/about Val and his posts.
↑ comment by AnnaSalamon · 2019-12-22T05:14:09.435Z · LW(p) · GW(p)
With regard to whether our staff has read the sequences: five have, and have been deeply shaped by them; two have read about a third, and two have read little. I do think it’s important that our staff read them, and we decided to run this experiment with sabbatical months next year in part to ensure our staff had time to do this over the coming year.
↑ comment by orthonormal · 2019-12-20T23:59:38.114Z · LW(p) · GW(p)
I honestly think, in retrospect, that the linchpin of early CFAR's standard of good shared epistemics was probably Critch.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2019-12-21T18:10:28.904Z · LW(p) · GW(p)
I, too, believe that Critch played a large and helpful role here.
↑ comment by Richard_Ngo (ricraz) · 2019-12-22T18:54:00.520Z · LW(p) · GW(p)
Note that Val's confusion seems to have been because he misunderstood Oli's point.
https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh?commentId=SPouGqiWNiJgMB3KW#SPouGqiWNiJgMB3KW [LW(p) · GW(p)]
comment by Raemon · 2019-12-20T22:35:16.467Z · LW(p) · GW(p)
(apologies for this only sort-of being a question, and for perhaps being too impressed with the cleverness of my metaphor at the expense of clarity)
I have a vague model that's something like (in programming terms):
- the original LessWrong sequences were the master branch of a codebase (in terms of being a coherent framework for evaluating the world and making decisions)
- CFAR forked that codebase into (at least one) private repo and did a bunch of development on it, kinda going off in a few divergent directions. My impression is that "the CFAR dev branch" is more introspection-focused and "internal alignment"-focused.
- Many "serious rationalist" I know (including myself) have incorporated some of elements from "the CFAR dev branch" into their epistemogy (and overall worldview).
- (Although, one person said they got more from Leverage than CFAR.)
- In the past couple of years, there's been a bit of confusion on LessWrong (and adjacent spaces) about what exactly the standards are, with (some) longterm members offhandedly referring to concepts that haven't been written up in longform, and with unclear epistemic tagging.
- Naively attempting to merge the latest dev branch back into "Sequences Era LessWrong" results in merge conflicts, and it's unclear when this is because:
- "oh, we just haven't written up the right explanations to make sure this was backwards compatible", vs
- "oh, these were just some ideas we were experimenting with that didn't pan out" vs
- "oh, this integration-test-failure is actually an indicator that something was wrong with the idea."
- "oh, actually, it's original LessWrong sequences that are wrong here, not CFAR, and the intergration tests need to be rewritten"
So... I dunno, I guess the questions are:
1) Does that seem like a reasonable metaphor for what's going on?
2) How much (and which things?) that CFAR has developed seem "basically ready to integrate into the public discourse"?
3) Are there ideas you're still experimenting with?
4) What elements have the most inferential distance that need crossing?
Replies from: AnnaSalamon, jan-kulveit↑ comment by AnnaSalamon · 2019-12-22T05:16:41.546Z · LW(p) · GW(p)
Re: 1—“Forked codebases that have a lot in common but are somewhat tricky to merge” seems like a pretty good metaphor to me.
The question I'd like to answer that is near your questions is: "What is the minimal patch/bridge that will let us use all of both codebases without running into merge conflicts?"
We do have a candidate answer to this question, which we’ve been trying out at AIRCS to reasonable effect. Our candidate answer is something like: an explicit distinction between “tacit knowledge” (inarticulate hunches, early-stage research intuitions, the stuff people access and see in one another while circling, etc.) and the “explicit” (“knowledge” worthy of the name, as in the LW codebase—the thing I believe Ben Pace is mainly gesturing at in his comment [LW(p) · GW(p)] above).
Here’s how we explain it at AIRCS:
- By “explicit” knowledge, we mean visible-to-conscious-consideration denotative claims that are piecewise-checkable and can be passed explicitly between humans using language.
- Example: the claim “Amy knows how to ride a bicycle” is explicit.
- By “tacit” knowledge, we mean stuff that allows you to usefully navigate the world (and so contains implicit information about the world, and can be observationally evaluated for how well people seem to navigate the relevant parts of the world when they have this knowledge) but is not made of explicit denotations that can be fully passed verbally between humans.
- Example: however the heck Amy actually manages to ride the bicycle (the opaque signals she sends to her muscles, etc.) is in her own tacit knowledge. We can know explicitly “Amy has sufficient tacit knowledge to balance on a bicycle,” but we cannot explicitly track how she balances, and Amy cannot hand her bicycle-balancing ability to Bob via speech (although speech may help). Relatedly, Amy can’t check the individual pieces of her (opaque) motor patterns to figure out which ones are the principles by which she successfully stays up and which are counterproductive superstition.
- I’ll give a few more examples to anchor the concepts:
- In mathematics:
- Explicit: which things have been proven; which proofs are valid.
- Tacit: which heuristics may be useful for finding proofs; which theorems are interesting/important. (Some such heuristics can be stated explicitly, but I wouldn't call those statements “knowledge.” I can't verify that they're right in the way I can verify “Amy can ride a bike” or “2+3=5.”)
- In science:
- Explicit: specific findings of science, such as “if you take a given amount of hydrogen and decrease its volume by half, you double its pressure” (see the worked rendering just after this list). The “experiment” and “conclusion” steps of the scientific method.
- Tacit: which hypotheses are worth testing.
- In Paul Graham-style startups:
- Explicit: what metrics one is hitting, once one achieves an MVP.
- Tacit: the way Graham’s small teams of cofounders are supposed to locate their MVP. (In calling this “tacit,” I don’t mean you can’t communicate any of this verbally. Of course they use words. But the way they use words is made of ad hoc spaghetti-code bits of attempt to get gut intuitions back and forth between a small set of people who know each other well. It is quite different from the scalable processes of explicit science/knowledge that can compile across large sets of people and long periods of time. This is why Graham claims that co-founder teams should have 2-4 people, and that if you hire e.g. 10 people to a pre-MVP startup, it won’t scale well.)
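To make the “explicit” character of that gas example concrete, here is a minimal worked rendering, assuming ideal-gas behavior at fixed temperature and a fixed amount of gas (Boyle’s law):
\[
P_1 V_1 = P_2 V_2, \qquad V_2 = \tfrac{1}{2} V_1 \;\Rightarrow\; P_2 = \frac{P_1 V_1}{V_2} = 2 P_1 .
\]
Each step is piecewise-checkable and can be passed between people in language, which is exactly what makes it “explicit” in the sense above.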
In the context of the AIRCS workshop, we share “The Tacit and the Explicit” in order to avoid two different kinds of errors:
- People taking “I know it in my gut” as zero-value, and attempting to live via the explicit only. My sense is that some LessWrong users like Said_Achmiz tend to err in this direction. (This error can be fatal to early-stage research, and to one’s ability to discuss ordinary life/relationship/productivity “bugs” and solutions, and many other mundanely useful topics.)
- People taking “I know it in my gut” as vetted knowledge, and attempting to build on gut feelings in the manner of knowledge. (This error can be fatal to global epistemology: “but I just feel that religion is true / the future can’t be that weird / whatever”).
We find ourselves needing to fix both those errors in order to allow people to attempt grounded original thinking about AI safety. They need to be able to have intuitions, and take those intuitions seriously enough to develop them / test them / let them breathe, without mistaking those intuitions for knowledge.
So, at the AIRCS workshop, we introduce the explicit (which is a big part of what I take Ben Pace to be gesturing at above actually) at the same time that we introduce the tacit (which is the thing that Ben Pace describes benefiting from at CFAR IMO). And we introduce a framework to try to keep them separate so that learning cognitive processes that help with the tacit will not accidentally mess with folks’ explicit, nor vice versa. (We’ve been introducing this framework at AIRCS for about a year, and I do think it’s been helpful. I think it’s getting to the point where we could try writing it up for LW—i.e., putting the framework more fully into the explicit.)
Replies from: SaidAchmiz, SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T06:01:34.797Z · LW(p) · GW(p)
People taking “I know it in my gut” as zero-value, and attempting to live via the explicit only. My sense is that some LessWrong users like Said_Achmiz tend to err in this direction.
This is not an accurate portrayal of my views.
Replies from: Raemon↑ comment by Raemon · 2019-12-22T06:12:47.137Z · LW(p) · GW(p)
I’d be particularly interested, in this context, if you are up for clarifying what your views are here.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T07:25:22.498Z · LW(p) · GW(p)
I’d be happy to, except that I’m not sure quite what I need to clarify.
I mean, it’s just not true that I consider “tacit” knowledge (which may, or may not be, the same thing as procedural knowledge [LW(p) · GW(p)]—but either way…) to be “zero-value”. That isn’t a thing that I believe, nor is it adjacent to some similar thing that I believe, nor is it a recognizable distortion of some different thing that I believe.
For instance, I’m a designer, and I am quite familiar with looking at a design, or design element, and declaring that it is just wrong, or that it looks right this way and not that way; or making something look a certain way because that’s what looks good and right; etc., etc. Could I explicitly explain the precise and specific reason for every detail of every design decision I make? Of course not; it’s absurd even to suggest it. There is such a thing as “good taste”, “design sense”, etc. You know quite well, I’m sure, what I am talking about.
So when someone says that I attempt to live via the explicit only, and take other sorts of knowledge as having zero value—what am I to say to that? It isn’t true, and obviously so. Perhaps Anna could say a bit about what led her to this conclusion about my views. I am happy to comment further; but as it stands, I am at a loss.
Replies from: DanielFilan, mr-hire↑ comment by DanielFilan · 2019-12-22T07:33:08.784Z · LW(p) · GW(p)
For what it's worth, I think that saying "Person X tends to err in Y direction" does not mean "Person X endorses or believes Y".
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T07:39:16.308Z · LW(p) · GW(p)
If what Anna meant was “Said undervalues ‘gut’ knowledge, relative to explicit knowledge”… well, that is, of course, not an obviously false or absurd claim; but what she wrote is an odd way of saying it. I have reread the relevant section of Anna’s comment several times, and it is difficult to read it as simply a note that certain people (such as, ostensibly, myself) are merely on somewhat the wrong point along a continuum of placing relative value on this vs. that form of knowledge; it is too banal and straightforward a point, to need to be phrased in such a way as Anna phrased it.
But then, this is getting too speculative to be useful. Perhaps Anna can clarify what she meant.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T15:28:33.729Z · LW(p) · GW(p)
If it helps for your own calibration of how you come across, there was a thread a while back where I expressed indignation at the phrase "Overcoming intuitions" and you emphatically agreed.
I remember being surprised that you agreed, and having to update my model of your beliefs.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T15:35:59.358Z · LW(p) · GW(p)
Can you think of an example of something I said that led you to that previous, pre-update model?
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T16:38:50.601Z · LW(p) · GW(p)
I can't, but here's an example from this same thread:
https://www.lesswrong.com/posts/96N8BT9tJvybLbn5z/we-run-the-center-for-applied-rationality-ama#HgQCE8aHctKjYEWHP [LW(p) · GW(p)]
In this comment, you explicitly understood and agreed with the material that was teaching explicit knowledge (philosophy), but objected to the material designed to teach intuitions (circling).
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T16:53:50.957Z · LW(p) · GW(p)
Surely you can see how this does not at all imply that I object to intuition, yes? Logically, after all, there are at least three other possibilities:
- That I don’t believe that intuitions can be taught; or…
- That I don’t believe that this particular approach (circling) is good for teaching intuitions; or…
- That I object to circling for reasons unrelated to the (purported) fact that it teaches intuitions.
(There are other, subtler, possibilities; but these three are the obvious ones.)
The conclusion that I have something against intuitions, drawn from the observation that I am skeptical of circling in particular (or any similar thing), seems to me to be really quite unwarranted.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T17:26:06.701Z · LW(p) · GW(p)
Yes. If you're wondering, I basically updated more towards #1.
I wouldn't call the conclusion unwarranted, by the way; it's a perfectly valid interpretation of seeing this sort of stance from you. It was simply uninformed.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T06:09:32.063Z · LW(p) · GW(p)
How does your “tacit vs. explicit” dichotomy relate to the “procedural vs. declarative” dichotomy? Are they identical? (If so, why the novel terminology?) Are they totally orthogonal? Some other relationship?
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T14:26:23.797Z · LW(p) · GW(p)
If so, why the novel terminology?
Explicit vs. tacit knowledge isn't a CFAR concept, and is pretty well established in the literature. Here's an example:
https://www.basicknowledge101.com/pdf/km/KM_roles.pdf
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T15:04:13.946Z · LW(p) · GW(p)
Some notes, for my own edification and that of anyone else curious about all this terminology and the concepts behind it.
Some searching turns up an article by one Fred Nickols, titled “The Knowledge in Knowledge Management” [PDF]. (As far as I can tell, “knowledge management” seems to be a field or topic of study that originates in the world of business consulting; and Fred Nickols is a former executive at a consulting firm of some sort.)
Nickols offers the following definitions:
Explicit knowledge, as the first word in the term implies, is knowledge that has been articulated and, more often than not, captured in the form of text, tables, diagrams, product specifications and so on. … An example of explicit knowledge with which we are all familiar is the formula for finding the area of a rectangle (i.e., length times width). Other examples of explicit knowledge include documented best practices, the formalized standards by which an insurance claim is adjudicated and the official expectations for performance set forth in written work objectives.
Tacit knowledge is knowledge that cannot be articulated. As Michael Polanyi (1997), the chemist-turned-philosopher who coined the term put it, "We know more than we can tell." Polanyi used the example of being able to recognize a person’s face but being only vaguely able to describe how that is done.
Knowledge that can be articulated but hasn’t is implicit knowledge. … This is the kind of knowledge that can often be teased out of a competent performer by a task analyst, knowledge engineer or other person skilled in identifying the kind of knowledge that can be articulated but hasn’t.
The explicit, implicit, tacit categories of knowledge are not the only ones in use. Cognitive psychologists sort knowledge into two categories: declarative and procedural. Some add strategic as a third category.
Declarative knowledge has much in common with explicit knowledge in that declarative knowledge consists of descriptions of facts and things or of methods and procedures. … For most practical purposes, declarative knowledge and explicit knowledge may be treated as synonyms. This is because all declarative knowledge is explicit knowledge, that is, it is knowledge that can be and has been articulated.
[Procedural knowledge] is an area where important differences of opinion exist.
One view of procedural knowledge is that it is knowledge that manifests itself in the doing of something. As such it is reflected in motor or manual skills and in cognitive or mental skills. We think, we reason, we decide, we dance, we play the piano, we ride bicycles, we read customers’ faces and moods (and our bosses’ as well), yet we cannot reduce to mere words that which we obviously know or know how to do. Attempts to do so are often recognized as little more than after-the-fact rationalizations. …
Another view of procedural knowledge is that it is knowledge about how to do something. This view of procedural knowledge accepts a description of the steps of a task or procedure as procedural knowledge. The obvious shortcoming of this view is that it is no different from declarative knowledge except that tasks or methods are being described instead of facts or things.
Pending the resolution of this disparity, we are left to resolve this for ourselves. On my part, I have chosen to acknowledge that some people refer to descriptions of tasks, methods and procedures as declarative knowledge and others refer to them as procedural knowledge. For my own purposes, however, I choose to classify all descriptions of knowledge as declarative and reserve procedural for application to situations in which the knowing may be said to be in the doing. Indeed, as the diagram in Figure 2 shows, declarative knowledge ties to "describing" and procedural knowledge ties to "doing." Thus, for my purposes, I am able to comfortably view all procedural knowledge as tacit just as all declarative knowledge is explicit.
Some reading this will immediately say, "Whoa there. If all procedural knowledge is tacit, that means we can’t articulate it. In turn, that means we can’t make it explicit, that is, we can’t articulate and capture it in the form of books, tables, diagrams and so on." That is exactly what I mean. When we describe a task, step by step, or when we draw a flowchart representing a process, these are representations. Describing what we do or how we do it yields declarative knowledge. A description of an act is not the act just as the map is not the territory.
Replies from: Raemon↑ comment by Raemon · 2019-12-22T20:36:48.731Z · LW(p) · GW(p)
Thanks! (I'm assuming you made the diagrams?)
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-22T21:11:51.369Z · LW(p) · GW(p)
Oh, no. The diagrams are taken from the paper; they’re in the PDF I linked.
EDIT: Which paper is, by the way, quite worth reading; it’s written in an exceptionally clear and straightforward way, and gets right to the heart of all relevant matters. I was very impressed, truth be told. I could’ve usefully quoted much more, but then I’d just be pasting the whole paper (which, in addition to its other virtues, is mercifully short).
Replies from: Raemon↑ comment by Jan Kulveit (jan-kulveit) · 2019-12-22T08:20:56.719Z · LW(p) · GW(p)
I like the metaphor!
Just wanted to note: in my view the original LW Sequences are not functional as a stand-alone upgrade for almost any human mind, and you can empirically observe it: You can think about any LW meet-up group around the world as an experiment, and I think to a first approximation it's fair to say that aspiring Rationalists running just on the Sequences do not win, and that the good stuff coming out of the rationalist community was critically dependent on the presence of minds like Eliezer's and others'. (This is not to say the Sequences are not useful in many ways.)
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T14:37:56.932Z · LW(p) · GW(p)
You can think about any LW meet-up group around the world as an experiment
I agree with your conclusion here, but think that this is an exceptionally harsh experiment. I conjecture that basically any meetup group, no matter what source they're using, won't empirically lead to most people who attend it "winning". Either it would drive most people away because it's too intense, or it would not be focused and intense enough to actually make a difference.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-24T22:42:12.326Z · LW(p) · GW(p)
Also, the meetup groups are selected against for agency and initiative because, for better or for worse, the most initiative-taking people often pick up and move to the hubs in the Bay or in Oxford.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2019-12-25T08:47:51.029Z · LW(p) · GW(p)
Or are just otherwise too busy with their life to have the time for meetups.
comment by johnswentworth · 2019-12-19T21:11:29.336Z · LW(p) · GW(p)
What is CFAR's goal/purpose/vision/raison d'etre? Adam's post [LW · GW] basically said "we're bad at explaining it", and an AMA sounds like a good place to at least attempt an explanation.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2019-12-24T17:47:54.530Z · LW(p) · GW(p)
My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.” (This is obviously not our real name. But if I had to grab a handle that gestures at our raison d’etre, at the moment I’d pick this one. We’ve been internally joking about renaming ourselves this for some months now.)
To elaborate: thinking about singularity scenarios is profoundly disorienting (IMO, typically worse than losing a deeply held childhood religion or similar). Folks over and over again encounter similar failure modes as they attempt this. It can be useful to have an institution for assisting with this -- collecting concepts and tools that were useful for previous waves who’ve attempted thought/work about singularity scenarios, and attempting to pass them on to those who are currently beginning to think about such scenarios.
Relatedly, the pattern of thinking required for considering AI risk and related concepts at all is pretty different from the patterns of thinking that suffice in most other contexts, and it can be useful to have a group that attempts to collect these and pass them forward.
Further, it can be useful to figure out how the heck to do teams and culture in a manner that can withstand the disruptions that can come from taking singularity scenarios seriously.
So, my best current angle on CFAR is that we should try to be a place that can help people through these standard failure modes -- a place that can try to answer the question “how can we be sane and reasonable and sensible and appropriately taking-things-seriously in the face of singularity scenarios,” and can try to pass on our answer, and can notice and adjust when our answer turns out to be invalid.
To link this up with our concrete activities:
AIRCS workshops / MSFP:
- Over the last year, about half our staff workshop-days [LW · GW] went into attempting to educate potential AI alignment researchers. These programs were co-run with MIRI. Workshops included a bunch about technical AI content; a bunch of practice thinking through “is there AI risk” and “how the heck would I align a superintelligence” and related things; and a bunch of discussion of e.g. how to not have “but the stakes are really really big” accidentally overwhelm one’s basic sanity skills (and other basic pieces of how to not get too disoriented).
- Many program alumni attended multiple workshops, spaced across time, as part of a slow acculturation process: stare at AI risk; go back to one’s ordinary job/school context for some months while digesting in a back-burner way; repeat.
- These programs aim at equipping people to contribute to AI alignment technical work at MIRI and elsewhere; in the last two years they’ve helped educate a sizable number of MIRI hires and a smaller but still important number of others (brief details in our 2019 progress report [LW · GW]; more details coming eventually). People sometimes try to gloss the impact of AIRCS as “outreach” or “causing career changes,” but, while I think it does in fact fulfill CEA-style metrics, that doesn’t seem to me like a good way to see its main purpose -- helping folks feel their way toward being more oriented and capable around these topics in general, in a context where other researchers have done or are doing likewise.
- They seem like a core activity for a “Center for bridging between common sense and singularity scenarios” -- both in that they tell us more about what happens when folks encounter AI risk, and in that they let us try to use what we think we know for good. (We hope it’s “good.”)
Mainline workshops, alumni reunions, alumni workshops unrelated to AI risk, etc.:
- We run mainline workshops (which many people just call “CFAR workshops”), alumni reunions, and some topic-specific workshops for alumni that have nothing to do with AI risk (e.g., a double crux workshop). Together, this stuff constituted about 30% [LW · GW] of our staff workshop-days over the last two years.
- The EA crowd often asks me why we run these. (“Why not just run AI safety workshops, since that is the part of your work that has more shot at helping large numbers of people?”) The answer is that when I imagine removing the mainline workshops, CFAR begins to feel like a table without enough legs -- unstable, liable to work for awhile but then fall over, lacking enough contact with the ground.
- More concretely: we’re developing and spreading a nonstandard mental toolkit (inner sim, double crux, Gendlin’s Focusing, etc.). That’s a tricky and scary thing to do. It’s really helpful to get to try it on a variety of people -- especially smart, thoughtful, reflective, articulate people who will let us know what seems like a terrible idea, or what brings help, or disruption, into their lives. The mainline workshops (plus follow-up sessions, alumni workshops, alumni reunions, etc.) let us develop this alleged “bridge” between common sense and singularity scenarios in a way that avoids overfitting it all to just “AI alignment work.” Which is basically to say that they let us develop and test our models of “applied rationality”.
“Sandboxes” toward trying to understand how to have a healthy culture in contact with AI safety:
- I often treat the AIRCS workshops as “sandboxes”, and try within them to create small temporary “cultures” in which we try to get research to be able to flourish, or try to get people to be able to both be normal humans and slowly figure out how to approach AI alignment, or whatever. I find them a pretty productive vehicle for trying to figure out the “social context” thing, and not just the “individual thinking habits” thing. I care about this experimentation-with-feedback because I want MIRI and other longer-term teams to eventually have the right cultural base.
Our instructor training program, and our attempt to maintain a staff who is skilled at seeing what cognitive processes are actually running in people:
- There’s a lot of trainable, transferable skill to seeing what people are thinking. CFAR staff have a bunch of this IMO, and we seem to me to be transferring a bunch of it to the instructor candidates too. We call it “seeking PCK”.
- The “seeking PCK” skillset is obviously helpful for learning to “bridge between common sense and singularity scenarios” -- it helps us see what the useful patterns folks have are, and what the not-so-useful patterns folks have are, and what exactly is happening as we attempt to intervene (so that we can adjust our interventions).
- Thus, improving and maintaining the “seeking PCK” skillset probably makes us faster at developing any other curriculum.
- More mundanely, of course, instructor training also gives us guest instructors who can help us run workshops -- many of whom are also out and about doing other interesting things, and porting wisdom/culture/data back and forth between those endeavors and our workshops.
To explain what “bridging between common sense and singularity scenarios” has to do with “applied rationality” and the LW Sequences and so on:
- The farther off you need to extrapolate, the more you need reasoning (vs being able to lean on either received wisdom, or known data plus empirical feedback loops). And singularity scenarios sure are far from the everyday life our heuristics are developed for, so singularity scenarios benefit more than most from trying to be the lens that sees its flaws [LW · GW], and from Sequences-style thinking more broadly.
↑ comment by AnnaSalamon · 2019-12-24T18:22:09.551Z · LW(p) · GW(p)
Examples of some common ways that people sometimes find Singularity scenarios disorienting:
When a person loses their childhood religion, there’s often quite a bit of bucket error [LW · GW]. A person updates on the true fact “Jehovah is not a good explanation of the fossil record” and accidentally confuses that true fact with any number of other things, such as “and so I’m not allowed to take my friends’ lives and choices as real and meaningful.”
I claimed above that “coming to take singularity scenarios seriously” seems in my experience to often cause even more disruption / bucket errors / confusions / false beliefs than does “losing a deeply held childhood religion.” I’d like to elaborate on that here by listing some examples of the kinds of confusions/errors I often encounter.
None of these are present in everyone who encounters Singularity scenarios, or even in most people who encounter it. Still, each confusion below is one where I’ve seen it or near-variants of it multiple times.
(Also note that all of these things are “confusions”, IMO. People semi-frequently have them at the beginning and then get over them. These are not the POV I would recommend or consider correct -- more like the opposite -- and I personally think each stems from some sort of fixable thinking error.)
- The imagined stakes in a singularity are huge. Common confusions related to this:
- Confusion about whether it is okay to sometimes spend money/time/etc. on oneself, vs. having to give it all to attempting to impact the future.
- Confusion about whether one wants to take in singularity scenarios, given that then maybe one will “have to” (move across the country / switch jobs / work all the time / etc.)
- Confusion about whether it is still correct to follow common sense moral heuristics, given the stakes.
- Confusion about how to enter “hanging out” mode, given the stakes and one’s panic. (“Okay, here I am at the beach with my friends, like my todo list told me to do to avoid burnout. But how is it that I used to enjoy their company? They seem to be making meaningless mouth-noises that have nothing to do with the thing that matters…”)
- Confusion about how to take an actual normal interest in one’s friends’ lives, or one’s partner’s lives, or one’s Lyft drivers’ lives, or whatever, given that within the person’s new frame, the problems they are caught up in seem “small” or “irrelevant” or to have “nothing to do with what matters”.
- The degrees of freedom in “what should a singularity maybe do with the future?” are huge. And people are often morally disoriented by that part.
- Should we tile the universe with a single repeated mouse orgasm, or what?
- Are we allowed to want humans and ourselves and our friends to stay alive? Is there anything we actually want? Or is suffering bad without anything being better-than-nothing?
- If I can’t concretely picture what I’d do with a whole light-cone (maybe because it is vastly larger than any time/money/resources I’ve ever personally obtained feedback from playing with) -- should I feel that the whole future is maybe meaningless and no good?
- The world a person finds themselves in once they start taking Singularity scenarios seriously is often quite different from what the neighbors think, which itself can make things hard
- Can I have a “real” conversation with my friends? Should I feel crazy? Should I avoid taking all this in on a visceral level so that I’ll stay mentally in the same world as my friends?
- How do I keep regarding other people’s actions as good and reasonable? The imagined scales are very large, with the result that one can less readily assume that “things are locally this way” is an adequate model.
- Given this, should I get lost in “what about simulations / anthropics” to the point of becoming confused about normal day-to-day events?
- In order to imagine this stuff, folks need to take seriously reasoning that is neither formal mathematics, nor vetted by the neighbors or academia, nor strongly based in empirical feedback loops.
- Given this, shall I go ahead and take random piles of woo seriously also?
There are lots more where these came from, but I’m hoping this gives some flavor, and makes it somewhat plausible why I’m claiming that “coming to take singularity scenarios seriously can be pretty disruptive to common sense," and why it might be nice to try having a "bridge" that can help people lose less of the true parts of common sense as their world changes (much as it might be nice for someone who has just lost their childhood religion to have a bridge to "okay, here are some other atheists, and they don't think that God is why they should get up in the morning and care about others, but they do still seem to think they should get up in the morning and care about others").
Replies from: Wei_Dai, howie-lempel, howie-lempel, artyom-kazak↑ comment by Wei Dai (Wei_Dai) · 2019-12-25T05:43:51.671Z · LW(p) · GW(p)
and makes it somewhat plausible why I’m claiming that “coming to take singularity scenarios seriously can be pretty disruptive to common sense,” and why it might be nice to try having a “bridge” that can help people lose less of the true parts of common sense as their world changes
Can you say a bit more about how CFAR helps people do this? Some of the "confusions" you mentioned are still confusing to me. Are they no longer confusing to you? If so, can you explain how that happened and what you ended up thinking on each of those topics? For example lately I'm puzzling over something related to this [LW(p) · GW(p)]:
Given this, should I get lost in “what about simulations / anthropics” to the point of becoming confused about normal day-to-day events?
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T17:48:57.723Z · LW(p) · GW(p)
[Possibly digging a bit too far into the specifics so no worries if you'd rather bow out.]
Do you think these confusions[1] are fairly evenly dispersed throughout the community (besides what you already mentioned: "People semi-frequently have them at the beginning and then get over them.")?
Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisations than in other people who "take singularity scenarios seriously."[2] (B) I'm very uncertain but they also seem less prevalent to me in the EA community than the rationalist community (to the extent the communities can be separated).[3] [4]
Do A and B sound right to you? If so, do you have a take on why that is?
If A or B *are* true, do you think this is in any part caused by the relative groups taking the singularity [/x-risk/the future/the stakes] less seriously? If so, are there important costs from this?
[1] Using your word while withholding my own judgment as to whether every one of these is actually a confusion.
[2] If you're right that a lot of people have them at the beginning and then get over them, a simple potential explanation would be that by the time you're working at one of these orgs, that's already happened.
Other hypotheses: (a) selection effects; (b) working FT in the community gives you additional social supports and makes it more likely others will notice if you start spiraling; (c) the cognitive dissonance with the rest of society is a lot of what's doing the damage. It's easier to handle this stuff psychologically if the coworkers you see every day also take the singularity seriously.[i]
[3] For example perhaps less common at Open Phil, GPI, 80k, and CEA than CFAR and MIRI but I also think this holds outside of professional organisations.
[4] One potential reason for this is that a lot of EA ideas are more "in the air" than rationalist/singularity ones. So a lot of EAs may have had their 'crisis of faith' before arriving in the community. (For example, I know plenty of EAs (myself included) who did some damage to themselves in their teens or early twenties by "taking Peter Singer really seriously.")
[i] I've seen this kind of dissonance offered as a (partial) explanation of why PTSD has become so common among veterans & why it's so hard for them to reintegrate after serving a combat tour. No clue if the source is reliable/widely held/true. It's been years but I think I got it from Odysseus in America or perhaps its predecessor, Achilles in Vietnam.
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T17:15:02.891Z · LW(p) · GW(p)
This seemed really useful. I suspect you're planning to write up something like this at some point down the line but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.)
↑ comment by Artyom Kazak (artyom-kazak) · 2020-01-09T01:24:38.275Z · LW(p) · GW(p)
The state of confusion you're describing sounds a lot like Kegan's 4.5 nihilism (pretty much everything at meaningness.com is relevant). A person's values have been demolished by a persuasive argument, but they haven't yet internalized that people are "allowed" to create their own systems and values. Alright.
1. I assume that LW-adjacent people should actually be better at guiding people out of this stage, because a lot of people in the community have gone through the same process and there is an extensive body of work on the topic (Eliezer's sequences on human values, David Chapman's work, Scott Alexander's posts on effective altruism / axiology-vs-morality / etc).
2. I also assume that in general we want people to go through this process – it is a necessary stage of adult development.
Given this, I'm leaning towards "guiding people towards nihilism is good as long as you don't leave them in the philosophical dark re: how to get out of it". So, taking a random smart person, persuading them they should care about the Singularity, and leaving – this isn't great. But introducing people to AI risk in the context of LW seems much more benign to me.
↑ comment by Eli Tyre (elityre) · 2019-12-24T23:47:25.510Z · LW(p) · GW(p)
We’ve been internally joking about renaming ourselves this for some months now.
I'm not really joking about it. I wish the name better expressed what the organization does.
Though I admit that CfBCSSS leaves a lot to be desired in terms of acronyms.
Replies from: johnswentworth, AnnaSalamon↑ comment by johnswentworth · 2019-12-25T00:39:29.173Z · LW(p) · GW(p)
I nominate "Society of Effective Epistemics For AI Risk" or SEE-FAR for short.
Replies from: AnnaSalamon, elityre↑ comment by AnnaSalamon · 2019-12-25T02:02:05.276Z · LW(p) · GW(p)
:) There's something good about "common sense" that isn't in "effective epistemics", though -- something about wanting not to lose the robustness of the ordinary vetted-by-experience functioning patterns. (Even though this is really hard, plausibly impossible, when we need to reach toward contexts far from those in which our experiences were based.)
↑ comment by Eli Tyre (elityre) · 2019-12-25T00:41:56.533Z · LW(p) · GW(p)
This is the best idea I've heard yet.
It would be pretty confusing to people, and yet...
↑ comment by AnnaSalamon · 2019-12-25T02:01:17.346Z · LW(p) · GW(p)
To clarify: we're not joking about the need to get "what we do" and "what people think we do" more in alignment, via both communicating better and changing our organizational name if necessary. We put that on our "goals for 2020" list (both internally, and in our writeup). We are joking that CfBCSSS is an acceptable name (due to its length making it not-really-that).
(Eli works with us a lot but has been taking a leave of absence for the last few months and so didn't know that bit, but lots of us are not-joking about getting our name and mission clear.)
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T17:16:58.935Z · LW(p) · GW(p)
My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.”
[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.]
Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision for (alternatively, how long have you been playing with this vision for)? Is it possible to describe what the most recent previous vision was?
↑ comment by habryka (habryka4) · 2019-12-24T18:31:15.108Z · LW(p) · GW(p)
These programs aim at equipping people to contribute to AI alignment technical work at MIRI and elsewhere; in the last two years, N hires have come out of them. I’m sure some (but not all) of those hires would’ve happened even without these programs; I suspect though that
Typo: My guess is that the N should be replaced with a number, and the sentence wasn't intended to trail off like that.
comment by gilch · 2019-12-20T05:50:51.329Z · LW(p) · GW(p)
Thus spake Eliezer: "Every Cause Wants to be a Cult [LW · GW]".
An organization promising life-changing workshops/retreats seems especially high-risk for cultishness, or at least pattern matches on it pretty well. We know the price of retaining sanity is vigilance. What specific, concrete steps are you at CFAR taking to resist the cult attractor?
comment by habryka (habryka4) · 2019-12-21T04:21:20.121Z · LW(p) · GW(p)
What are the LessWrong posts that you wish you had the time to write?
Replies from: AnnaSalamon, elityre, BrienneYudkowsky, adam_scholl, Unnamed↑ comment by AnnaSalamon · 2019-12-24T20:25:55.022Z · LW(p) · GW(p)
Here’s a very partial list of blog post ideas from my drafts/brainstorms folder. Outside view, though, if I took the time to try to turn these into blog posts, I’d end up changing my mind about more than half of the content in the process of writing it up (and then would eventually end up with blog posts with somewhat different theses).
I’m including brief descriptions with the awareness that my descriptions may not parse at this level of brevity, in the hopes that they’re at least interesting teasers.
Contra-Hodgel
- (The Litany of Hodgell says “That which can be destroyed by the truth should be”. Its contrapositive therefore says: “That which can destroy [that which should not be destroyed] must not be the full truth.” It is interesting and sometimes-useful to attempt to use Contra-Hodgel as a practical heuristic: “if adopting belief X will meaningfully impair my ability to achieve good things, there must be some extra false belief or assumption somewhere in the system, since true beliefs and accurate maps should just help” (e.g., if “there is no Judeo-Christian God” in practice impairs my ability to have good and compassionate friendships, perhaps there is some false belief somewhere in the system that is messing with that). See the short formalization sketched after this list.)
The 50/50 rule
- The 50/50 rule is a proposed heuristic claiming that about half of all progress on difficult projects will come from already-known-to-be-project-relevant subtasks -- for example, if Archimedes wishes to determine whether the king’s crown is unmixed gold, he will get about half his progress from diligently thinking about this question (plus subtopics that seem obviously and explicitly relevant to this question). The other half of progress on difficult projects (according to this heuristic) will come from taking an interest in the rest of the world, including parts not yet known to be related to the problem at hand -- in the Archimedes example, from Archimedes taking an interest in what happens to his bathwater.
- Relatedly, the 50/50 rule estimates that if you would like to move difficult projects forward over long periods of time, it is often useful to spend about half of your high-energy hours on “diligently working on subtasks known-to-be-related to your project”, and the other half taking an interest in the world.
Make New Models, but Keep The Old
- “... one is silver and the other’s gold.”
- A retelling of: it all adds up to normality.
On Courage and Believing In.
- Beliefs are for predicting what’s true [LW · GW]. “Believing in”, OTOH, is for creating a local normal that others can accurately predict. For example: “In America, we believe in driving on the right hand side of the road” -- thus, when you go outside and look to predict which way people will be driving, you can simply predict (believe) that they’ll be driving on the right hand side.
- Analogously, if I decide I “believe in” [honesty, or standing up for my friends, or other such things], I create an internal context in which various models within me can predict that my future actions will involve [honesty, or standing up for my friends, or similar].
- It’s important and good to do this sometimes, rather than having one’s life be an accidental mess with nobody home choosing. It’s also closely related to courage.
Ethics for code colonies
- If you want to keep caring about people, it makes a lot of sense to e.g. take the time to put your shopping cart back where it goes, or at minimum not to make up excuses about how your future impact on the world makes you too important to do that.
- In general, when you take an action, you summon up black box code-modification that takes that action (and changes unknown numbers of other things). Life as a “code colony” is tricky that way.
- Ethics is the branch of practical engineering devoted to how to accomplish things with large sets of people over long periods of time -- or even with one person over a long period of time in a confusing or unknown environment. It’s the art of interpersonal and intrapersonal coordination. (I mean, sometimes people say “ethics” means “following this set of rules here”. But people also say “math” means “following this algorithm whenever you have to divide fractions” or whatever. And the underneath-thing with ethics is (among other things, maybe) interpersonal and intra-personal coordination, kinda like how there’s an underneath-thing with math that is where those rules come from.)
- The need to coordinate in this way holds just as much for consequentialists or anyone else.
- It's kinda terrifying to be trying to do this without a culture. Or to be not trying to do this (still without a culture).
The explicit and the tacit (elaborated a bit in a comment [LW(p) · GW(p)] in this AMA; but there’s room for more).
Cloaks, Questing, and Cover Stories
- It’s way easier to do novel hypothesis-generation if you can do it within a “cloak”, without making any sort of claim yet about what other people ought to believe. (Teaching this has been quite useful on a practical level for many at AIRCS, MSFP, and instructor trainings -- seems worth seeing if it can be useful via text, though that’s harder.)
Me-liefs, We-liefs, and Units of Exchange
- Related to “cloaks and cover stories” -- we have different pools of resources that are subject to different implicit contracts and commitments. Not all Bayesian evidence is judicial or scientific evidence, etc.. A lot of social coordination works by agreeing to only use certain pools of resources in agreement with certain standards of evidence / procedure / deference (e.g., when a person does shopping for their workplace they follow their workplace’s “which items to buy” procedures; when a physicist speaks to laypeople in their official capacity as a physicist, they follow certain procedures so as to avoid misrepresenting the community of physicists).
- People often manage this coordination by changing their beliefs (“yes, I agree that drunk driving is dangerous -- therefore you can trust me not to drink and drive”). However, personally I like the rule “beliefs are for true things -- social transactions can make requests of my behaviors but not of my beliefs.” And I’ve got a bunch of gimmicks for navigating the “be robustly and accurately seen as prosocial” without modifying one’s beliefs (“In my driving, I value cooperating with the laws and customs so as to be predictable and trusted and trustworthy in that way; and drunk driving is very strongly against our customs -- so you can trust me not to drink and drive.”)
How the Tao unravels
- A book review of part of C.S. Lewis’s book “The Abolition of Man.” Elaborates C.S. Lewis’s argument that in postmodern times, people grab hold of part of humane values and assert it in contradiction with other parts of humane values, which then assert back the thing that they’re holding and the other party is missing, and then things fragment further and further. Compares Lewis’s proposed mechanism with how cultural divides have actually been going in the rationality and EA communities over the last ten years.
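As a brief aside on the Contra-Hodgel item above, here is a minimal formalization of the contrapositive move it describes; the predicate names D and S are my own shorthand, not anything from the original post.

```latex
% D(x): "x can be destroyed by the truth"    S(x): "x should be destroyed"
% Litany of Hodgell: for all x, D(x) implies S(x). Its contrapositive is logically equivalent:
\[
  \forall x\,\bigl(D(x)\rightarrow S(x)\bigr)
  \;\equiv\;
  \forall x\,\bigl(\lnot S(x)\rightarrow \lnot D(x)\bigr)
\]
% Reading of the right-hand side: whatever should not be destroyed cannot be
% destroyed by the (full) truth -- so if adopting belief X wrecks something
% worth keeping, some false belief or assumption is hiding in the system.
```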
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T16:32:22.905Z · LW(p) · GW(p)
The need to coordinate in this way holds just as much for consequentialists or anyone else.
I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.
↑ comment by Eli Tyre (elityre) · 2019-12-22T06:03:52.017Z · LW(p) · GW(p)
Some off the top of my head.
- A bunch of Double Crux posts that I keep promising but am very bad at actually finishing.
- The Last Term Problem (or why saving the world is so much harder than it seems) - An abstract decision-theoretic problem that has confused me about taking actions at all for the past year.
- A post on how the commonly cited paper on how "Introspection is Impossible" (Nisbett and Wilson) is misleading.
- Two takes on confabulation - About how the Elephant in the Brain thesis doesn't imply that we can't tell what our motivations actually are, just that we aren't usually motivated to.
- A lit review on mental energy and fatigue.
- A lit review on how attention works.
Most of my writing is either private strategy documents, or spur of the moment thoughts / development-nuggets that I post here.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2019-12-22T06:19:31.595Z · LW(p) · GW(p)
Can you too-tersely summarize your Nisbett and Wilson argument?
Or, like... write a teaser / movie trailer for it, if you're worried your summary would be incomplete or inoculating?
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-24T22:39:20.507Z · LW(p) · GW(p)
This doesn't capture everything, but one key piece is "People often confuse a lack of motivation to introspect with a lack of ability to introspect. The fact of confabulation does not demonstrate that people are unable to articulate what's actually happening in principle." Very related to the other post on confabulation I note above.
Also, if I remember correctly, some of the papers in that meta-analysis just have silly setups: testing whether people can introspect into information that they couldn't have access to. (Possible that I misunderstood or am misremembering.)
To give a short positive account:
- All introspection depends on comparison between mental states at different points in time. You can't introspect on some causal factor that doesn't vary.
- Also, the information has to be available at the time of introspection, ie still in short term memory.
- But that gives a lot more degrees of freedom than people seem to predict, and in practice I am able to notice many subtle intentions (such as when my behavior is motivated by signalling) that others want to throw out as unknowable.
↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-26T19:48:34.587Z · LW(p) · GW(p)
This isn’t a direct answer to, “What are the LessWrong posts that you wish you had the time to write?” It is a response to a near-by question, though, which is probably something along the lines of, “What problems are you particularly interested in right now?” which is the question that always drives my blogging. Here’s a sampling, in no particular order.
[edit: cross-posted to Ray's Open Problems post [LW · GW].]
There are things you’re subject to, and things you can take as object. For example, I used to do things like cry when an ambulance went by with its siren on, or say “ouch!” when I put a plate away and it went “clink”, yet I wasn’t aware that I was sensitive to sounds. If asked, “Are you sensitive to sounds?” I’d have said “No.” I did avoid certain sounds in local hill-climby ways, like making music playlists with lots of low strings but no trumpets, or not hanging out with people who speak loudly. But I didn’t “know” I was doing these things; I was *subject* to my sound sensitivity. I could not take it as *object*, so I couldn’t deliberately design my daily life to account for it. Now that I can take my sound sensitivity (and many related things) as object, I’m in a much more powerful position. And it *terrifies* me that I went a quarter of a century without recognizing these basic facts of my experience. It terrifies me even more when I imagine an AI researcher being subject to some similarly crucial thing about how agents work. I would very much like to know what other basic facts of my experience I remain unaware of. I would like to know how to find out what I am currently unable to take as object.
On a related note, you know how an awful lot of people in our community are autistic? It seems to me that our community is subject to this fact. (It also seems to me that many individual people in our community remain subject to most of their autistic patterns, and that this is more like the rule than the exception.) I would like to know what’s going on here, and whether some other state of affairs would be preferable, and how to instantiate that state of affairs.
Why do so many people seem to wait around for other people to teach them things, even when they seem to be trying very hard to learn? Do they think they need permission? Do they think they need authority? What are they protecting? Am I inadvertently destroying it when I try to figure things out for myself? What stops people from interrogating the world on their own terms?
I get an awful lot of use out of asking myself questions. I think I’m unusually good at doing this, and that I know a few other people with this property. I suspect that the really useful thing isn’t so much the questions, as whatever I’m doing with my mind most of the time that allows me to ask good questions. I’d like to know what other people are doing with their minds that prevents this, and whether there’s a different thing to do that’s better.
What is “quality”?
Suppose religion is symbiotic, and not just parasitic. What exactly is it doing for people? How is it doing those things? Are there specific problems it’s solving? What are the problems? How can we solve those problems without tolerating the damage religion causes?
[Some spoilers for bits of the premise of A Fire Upon The Deep and other stories in that sequence.] There’s this alien race in Vernor Vinge’s books called the Tines. A “person” of the Tines species looks at first like a pack of several animals. The singleton members that make up a pack use high-frequency sound, rather than chemical neurotransmitters, to think as one mind. The singleton members of a pack age, so when one of your singletons dies, you adopt a new singleton. Since singletons are all slightly different and sort of have their own personalities, part of personal health and hygiene for Tines involves managing these transitions wisely. If you do a good job — never letting several members die in quick succession, never adopting a singleton that can’t harmonize with the rest of you, taking on new singletons before the oldest ones lose the ability to communicate — then you’re effectively immortal. You just keep amassing new skills and perspectives and thought styles, without drifting too far from your original intentions. If you manage the transitions poorly, though — choosing recklessly, not understanding the patterns an old member has been contributing, participating in a war where several of your singletons may die at once — then your mind could easily become suddenly very different, or disorganized and chaotic, or outright insane, in a way you’ve lost the ability to recover from. I think about the Tines a lot when I experiment with new ways of thinking and feeling. I think much of rationality poses a similar danger to the one faced by the Tines. So I’d like to know what practices constitute personal health and hygiene for cognitive growth and development in humans.
What is original seeing? How does it work? When is it most important? When is it the wrong move? How can I become better at it? How can people who are worse at it than I am become better at it?
In another thread, Adam made a comment that I thought was fantastic. I typed to him, “That comment is fantastic!” As I did so, I noticed that I had an option about how to relate to the comment, and to Adam, when I felt a bid from somewhere in my mind to re-phrase as, “I really like that comment,” or, “I enjoyed reading your comment,” or “I’m excited and impressed by your comment.” That bid came from a place that shares a lot of values with Lesswrong-style rationalists, and 20th century science, and really with liberalism in general. It values objectivity, respect, independence, autonomy, and consent, among other things. It holds map-territory distinctions and keeps its distance from the world, in an attempt to see all things clearly. But I decided to stand behind my claim that “the comment is fantastic”. I did not “own my experience”, in this case, or highlight that my values are part of me rather than part of the world. I have a feeling that something really important is lost in the careful distance we keep all the time from the world and from each other. Something about the power to act, to affect each other in ways that create small-to-mid-sized superorganisms like teams and communities, something about tending our relationship to the world so that we don’t float off in bubbles of abstraction. Whatever that important thing is, I want to understand it. And I want to protect it, and to incorporate it into my patterns of thought, without losing all I gain from cold clarity and distance.
I would like to think more clearly, especially when it seems important to do so. There are a lot of things that might affect how clearly you think, some of which are discussed in the Sequences. For example, one common pattern of muddy thought is rationalization, so one way to increase your cognitive clarity is to stop completely ignoring the existence of rationalization. I’ve lately been interested in a category of clarity-increasing thingies that might be sensibly described as “the relationship between a cognitive process and its environment”. By “environment”, I meant to include several things:
- The internal mental environment: the cognitive and emotional situation in which a thought pattern finds itself. Example: When part of my mind is trying to tally up how much money I spent in the past month, and local mental processes desperately want the answer to be “very little” for some reason, my clarity of thought while tallying might not be so great. I expect that well maintained internal mental environments — ones that promote clear thinking — tend to have properties like abundance, spaciousness, and groundedness.
- The internal physical environment: the physiological state of a body. For example, hydration seems to play a shockingly important role in how well I maintain my internal mental environment while I think. If I’m trying to solve a math problem and have had nothing to drink for two hours, it’s likely I’m trying to work in a state of frustration and impatience. Similar things are true of sleep and exercise.
- The external physical environment: the sensory info coming in from the outside world, and the feedback patterns created by external objects and perceptual processes. When I’ve been having a conversation in one room, and then I move to another room, it often feels as though I’ve left half my thoughts behind. I think this is because I’m making extensive use of the walls and couches and such in my computations. I claim that one’s relationship to the external environment can make more or less use of the environment’s supportive potential, and that environments can be arranged in ways that promote clarity of thought (see Adam’s notes on the design of the CFAR venue, for instance).
- The social environment: people, especially frequently encountered ones. The social environment is basically just part of the external physical environment, but it’s such an unusual part that I think it ought to be singled out. First of all, it has powerful effects on the internal mental environment. The phrase “politics is the mind killer” means something like “if you want to design the social environment to maximize muddiness of thought, have I got a deal for you”. Secondly, other minds have the remarkable property of containing complex cognitive processes, which are themselves situated in every level of environment. If you’ve ever confided in a close, reasonable friend who had some distance from your own internal turmoil, you know what I’m getting at here. I’ve thought a lot lately about how to build a “healthy community” in which to situate my thoughts. A good way to think about what I’m trying to do is that I want to cultivate the properties of interpersonal interaction that lead to the highest quality, best maintained internal mental environments for all involved.
I built a loft bed recently. Not from scratch, just Ikea-style. When I was about halfway through the process, I realized that I’d put one of the panels on backward. I’d made the mistake toward the beginning, so there were already many pieces screwed into that panel, and no way to flip it around without taking the whole bed apart again. At that point, I had a few thoughts in quick succession:
- I really don’t want to take the whole bed apart and put it back together again.
- Maybe I could unscrew the pieces connected to that panel, then carefully balance all of them while I flip the panel around? (Something would probably break if I did that.)
- You know what, maybe I don’t want a dumb loft bed anyway.
It so happens that in this particular case, I sighed, took the bed apart, carefully noted where each bit was supposed to go, flipped the panel around, and put it all back together again perfectly. But I’ve certainly been in similar situations where, for some reason, I let one mistake lead to more mistakes. I rushed, broke things, lost pieces, hurt other people, or gave up. I’d like to know what circumstances obtain when I get this right, and what circumstances obtain when I don’t. Where can I get patience, groundedness, clarity, gumption, and care?
What is "groundedness"?
I’ve developed a taste for reading books that I hate. I like to try on the perspective of one author after another, authors with whom I think I have really fundamental disagreements about how the world works, how one ought to think, and whether yellow is really such a bad color after all. There’s a generalized version of “reading books you hate” that I might call “perceptual dexterity”, or I might call “the ground of creativity”, which is something like having a thousand prehensile eye-stalks in your mind, and I think prehensile eye-stalks are pretty cool. But I also think it’s generally a good idea to avoid reading books you hate, because your hatred of them is often trying to protect you from “your self and worldview falling apart”, or something. I’d like to know whether my self and worldview are falling apart, or whatever. And if not, I’d like to know whether I’m doing something to prevent it that other people could learn to do, and whether they’d thereby gain access to a whole lot more perspective from which they could triangulate reality.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T01:23:01.135Z · LW(p) · GW(p)
I have a Google Doc full of ideas. Probably I'll never write most of these, and if I do probably much of the content will change. But here are some titles, as they currently appear in my personal notes:
- Mesa-Optimization in Humans
- Primitivist Priors v. Pinker Priors
- Local Deontology, Global Consequentialism
- Fault-Tolerant Note-Scanning
- Goal Convergence as Metaethical Crucial Consideration
- Embodied Error Tracking
- Abnormally Pleasurable Insights
- Burnout Recovery
- Against Goal "Legitimacy"
- Computational Properties of Slime Mold
- Steelmanning the Verificationist Criterion of Meaning
- Manual Tribe Switching
- Manual TAP Installation
- Keep Your Hobbies
↑ comment by Unnamed · 2019-12-22T08:02:58.758Z · LW(p) · GW(p)
I don’t think that time is my main constraint, but here are some of my blog post shaped ideas:
- Taste propagates through a medium
- Morality: do-gooding and coordination
- What to make of ego depletion research
- Taboo "status"
- What it means to become calibrated
- The NFL Combine as a case study in optimizing for a proxy
- The ability to paraphrase
- 5 approaches to epistemics
comment by ChristianKl · 2019-12-19T20:48:37.453Z · LW(p) · GW(p)
Why did so many of the initial CFAR employees decide to leave the organization?
Replies from: AnnaSalamon, Unnamed, adam_scholl↑ comment by AnnaSalamon · 2019-12-25T05:21:17.939Z · LW(p) · GW(p)
My guesses, in no particular order:
- Being a first employee is pretty different from being in a middle-stage organization. In particular, the opportunity to shape what will come has an appeal that can, I think, rightly bring in folks who you can’t always get later. (Folks present base rates for various reference classes below; I don’t know if anyone has one for “founding” vs “later” in small organizations?)
- Relatedly, my initial guess back in ~2013 (a year in) was that many CFAR staff members would “level up” while they were here and then leave, partly because of that level-up (on my model, they’d acquire agency and then ask if being here as one more staff member was or wasn’t their maximum-goal-hitting thing). I was excited about what we were teaching and hoped it could be of long-term impact to those who worked here a year or two and left, as well as to longer-term people.
- I and we intentionally hired for diversity of outlook. We asked ourselves: “does this person bring some component of sanity, culture, or psychological understanding -- but especially sanity -- that is not otherwise represented here yet?” And this… did make early CFAR fertile, and also made it an unusually difficult place to work, I think. (If you consider the four founding members of me, Julia Galef, Val, and Critch, I think you’ll see what I mean.)
- I don’t think I was very easy to work with. I don’t think I knew how to make CFAR a very easy place to work either. I was trying to go with inside views even where I couldn’t articulate them and… really didn’t know how to create a good interface between that and a group of people. Pete and Duncan helped us do otherwise more recently, I think, and Tim and Adam and Elizabeth and Jack and Dan building on it more since, with the result that CFAR is much more of a place now (less of a continuous “each person having an existential crisis all the time” than it was for some in the early days; more of a plod of mundane work in a positive sense). (The next challenge here, which we hope to accomplish this year, is to create a place that still has place-ness, and also has more visibility into strategy.)
- My current view is that being at workshops for too much of a year is actually really hard on a person, and maybe not-good. It mucks up a person’s codebase without enough chance for ordinary check-sums to sort things back to normal again afterward. Relatedly, my guess is also that while stints at CFAR do level a person up in certain ways (~roughly as I anticipated back in 2013), they unfortunately also risk harming a person in certain ways that are related to “it’s not good to live in workshops or workshop-like contexts for too many weeks/months in a row, even though a 4-day workshop is often helpful” (which I did not anticipate in 2013). (Basically: you want a bunch of normal day-to-day work on which to check whether your new changes actually work well, and to settle back into your deeper or more long-term self. The 2-3 week “MIRI Summer Fellows Program” (MSFP) has had… some great impacts in terms of research staff coming out of the program, but also most of our least stable people additionally came out of that. I believe that this year we’ll be experimentally replacing it with repeated shorter workshops; we’ll also be trying a different rest days pattern for staff, and sabbatical months, as well as seeking stability/robustness/continuity in more cultural and less formal ways.)
↑ comment by ChristianKl · 2019-12-25T17:01:05.430Z · LW(p) · GW(p)
“it’s not good to live in workshops or workshop-like contexts for too many weeks/months in a row, even though a 4-day workshop is often helpful”
I'm curious about that. It seems like a new point for me. What concrete negative effects have you seen there?
↑ comment by Unnamed · 2019-12-22T09:34:23.973Z · LW(p) · GW(p)
(This is Dan, from CFAR since 2012)
Working at CFAR (especially in the early years) was a pretty intense experience, which involved a workflow that regularly threw you into these immersive workshops, and also regularly digging deeply into your thinking and how your mind works and what you could do better, and also trying to make this fledgling organization survive & function. I think the basic thing that happened is that, even for people who were initially really excited about taking this on, things looked different for them a few years later. Part of that is personal, with things like burnout, or feeling like they’d gotten their fill and had learned a large chunk of what they could from this experience, or wanting a life full of experiences which were hard to fit in to this (probably these 3 things overlap). And part of it was professional, where they got excited about other projects for doing good in the world while CFAR wanted to stay pretty narrowly focused on rationality workshops.
I’m tempted to try to go into more detail, but it feels like that would require starting to talk about particular individuals rather than the set of people who were involved in early CFAR, and I feel weird about that.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-20T19:07:03.461Z · LW(p) · GW(p)
So I’m imagining there might be both a question (for what types of reasons have CFAR staff left?) and a claim (CFAR’s rate of turnover is unusual) here. Anna should be able to better address the question, but with regard to the claim: I think it’s true, at least relative to average U.S. turnover. The median length Americans spend in jobs is 4.2 years, while the median length CFAR employees have stayed in their jobs is 2.2 years; 32% of our employees (7 people) left within their first year.
Replies from: habryka4, denkenberger↑ comment by habryka (habryka4) · 2019-12-20T19:16:34.406Z · LW(p) · GW(p)
I am not fully sure about the correct reference class here, but employee turnover in Silicon Valley is generally very high, so that might also explain some part of the variance: https://www.inc.com/business-insider/tech-companies-employee-turnover-average-tenure-silicon-valley.html
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-20T20:16:02.614Z · LW(p) · GW(p)
I would guess that many startups in Silicon Valley make big promises about changing the world, and then when their employees spend a year at the company, they see that there's little meaning in the work they are doing.
If people at CFAR don't keep working at CFAR that seems like a value judgment on CFAR not being that important.
Replies from: johnswentworth, habryka4↑ comment by johnswentworth · 2019-12-20T23:02:09.648Z · LW(p) · GW(p)
I would guess that many startups in Silicon Valley make big promises about changing the world, and then when their employees spend a year at the company, they see that there's little meaning in the work they are doing.
I've worked for 4-6 silicon valley startups now (depending on how we count it), and this has generally not been my experience. For me and most of the people I've worked with, staying in one job for a long time just seems weird. Moving around frequently is how you grow fastest and keep things interesting; people in startups see frequent job-hopping as normal, and it's the rest of the world that's strange.
That said, I have heard occasional stories about scammy startups who promise lots of equity and then suck. My impression is that they generally lure in people who haven't been in silicon valley before; people with skills, who've done this for a little while, generally won't even consider those kinds of offers.
↑ comment by habryka (habryka4) · 2019-12-20T20:35:58.883Z · LW(p) · GW(p)
This explanation seems unlikely to me. More likely explanations seem to me to be the highly competitive labor market (with a lot of organizations trying to outbid each other), a lot of long work hours, and a lot of people making enough money that leaving their job for a while is not a super big deal. It's not an implausible explanation, but I don't think it explains the variance very well.
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-21T12:41:29.296Z · LW(p) · GW(p)
If you primarily work to earn a paycheck, then you can easily switch around to another organization that pays more money.
If you strongly believe in a certain organization having a mission that's very important it's harder to change.
The personal development workshops I usually attend are taught by people who have more than two decades of teaching experience and likely more than 20,000 hours of time refining their skills behind them.
From what I hear from CFAR about the research they are doing, a lot of it is hard to transfer from one head to another.
It seems that if you have a median tenure of 2 years, most of the research gets lost and nobody will develop 10,000 hours in the domain.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T08:42:26.129Z · LW(p) · GW(p)
Well, I think it can both be the case that a given staff member thinks the organization's mission is important, and also that due to their particular distribution of comparative advantages, current amount of burnout, etc., that it would be on net better for them to work elsewhere. And I think most of our turnover has resulted from considerations like this, rather than from e.g. people deciding CFAR's mission was doomed.
I think the concern about short median tenure leading to research loss makes sense, and has in fact occurred some. But I'm not that worried about it, personally, for a few reasons:
- This cost is reduced because we're in the teaching business. That is, relative to an organization that does pure research, we're somewhat better positioned to transfer institutional knowledge to new staff, since much of the relevant knowledge has already been heavily optimized for easy transferability.
- There's significant benefit to turnover, too. I think the skills staff develop while working at CFAR are likely to be useful for work at a variety of orgs; I feel excited about the roles a number of former staff are playing elsewhere, and expect I'll be excited about future roles our current staff play elsewhere too.
- Many of our staff already have substantial "work-related experience," in some sense, before they're hired. For example, I spent a bunch of time in college reading LessWrong, trying to figure out metaethics, etc., which I think helped me become a better CFAR instructor than I might have been otherwise. I expect many lesswrongers, for example, have already developed substantial skill relevant to working effectively at CFAR.
↑ comment by denkenberger · 2019-12-26T00:26:10.107Z · LW(p) · GW(p)
Note that that statistic is how long people have been in their current job, not how long they will stay in their current job total. If everyone stayed in their jobs for 40 years, and you did a survey of how long people have been in their job, the median would come out to 20 years. I have not found hard data for the number we actually want, but this indicates that the median time that people stay in their jobs is about eight years, though it would be slightly shorter for younger people.
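A minimal sketch of the sampling point above, under the toy assumption from the comment (everyone stays exactly 40 years, and a survey catches each worker at a uniformly random point in that stint); the numbers and variable names are illustrative only:

```python
import random
import statistics

TOTAL_TENURE = 40      # years every worker stays, by assumption (toy number from the comment)
N_SURVEYED = 100_000   # simulated survey respondents

# Each surveyed worker is observed at a uniformly random point in their 40-year stint,
# so the survey records elapsed tenure, not eventual total tenure.
elapsed = [random.uniform(0, TOTAL_TENURE) for _ in range(N_SURVEYED)]

print(f"median observed 'time in current job': {statistics.median(elapsed):.1f} years")
# prints roughly 20.0 -- half of the true 40-year total tenure
```

Under that toy assumption, the surveyed median lands at roughly half the eventual tenure, which is why a "median time in current job" figure like the 4.2 years cited above should not be read as typical total job length.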
comment by riceissa · 2019-12-20T08:35:05.747Z · LW(p) · GW(p)
What are your thoughts on Duncan Sabien's Facebook post which predicts significant differences in CFAR's direction now that he is no longer working for CFAR?
Replies from: AnnaSalamon, Unnamed, erratim, Duncan_Sabien↑ comment by AnnaSalamon · 2019-12-22T02:51:49.904Z · LW(p) · GW(p)
My rough guess is “we survived; most of the differences I could imagine someone fearing didn’t come to pass”. My correction on that rough guess is: “Okay, but insofar as Duncan was the main holder of certain values, skills, and virtues, it seems pretty plausible that there are gaps now today that he would be able to see and that we haven’t seen”.
To be a bit more specific: some of the poles I noticed Duncan doing a lot to hold down while he was here were:
- Institutional accountability and legibility;
- Clear communication with staff; somebody caring about whether promises made were kept; somebody caring whether policies were fair and predictable, and whether the institution was creating a predictable context where staff, workshop participants, and others wouldn’t suddenly experience having the rug pulled out from under them;
- Having the workshop classes start and end on time; (I’m a bit hesitant to name something this “small-seeming” here, but it is a concrete policy that supported the value above, and it is easier to track)
- Revising the handbook into a polished state;
- Having the workshop classes make sense to people, have clear diagrams and a clear point, etc.; having polish and visible narrative and clear expectations in the workshop;
AFAICT, these things are doing… alright in the absence of Duncan (due partly to the gradual accumulation of institutional knowledge), though I can see his departure in the organization. AFAICT also, Duncan gave me a good chunk of model of this stuff sometime after his Facebook post, actually -- and worked pretty hard on a lot of this before his departure too. But I would not fully trust my own judgment on this one, because the outside view is that people (in this case, me) often fail to see what they cannot see.
When I get more concrete:
- Institutional accountability and legibility is I think better than it was;
- Clear communication with staff, keeping promises, creating clear expectations, etc. -- better on some axes and worse on others -- my non-confident guess is better overall (via some loss plus lots of work);
- Classes starting and ending on time -- at mainlines: slightly less precise class-timing but not obviously worse thereby; at AIRCS, notable decreases, with some cost;
- Handbook revisions -- have done very little since he left;
- Polish and narrative cohesion in the workshop classes -- it’s less emphasized but not obviously worse thereby IMO, due partly to the infusion of the counterbalancing “original seeing” content from Brienne that was perhaps easier to pull off via toning polish down slightly. Cohesion and polish still seem acceptable, and far far better than before Duncan arrived.
Also: I don’t know how to phrase this tactfully in a large public conversation. But I appreciate Duncan’s efforts on behalf of CFAR; and also he left pretty burnt out; and also I want to respect what I view as his own attempt to disclaim responsibility for CFAR going forward (via that Facebook post) so that he won’t have to track whether he may have left misleading impressions of CFAR’s virtues in people. I don’t want our answers here to mess that up. If you come to CFAR and turn out not to like it, it is indeed not Duncan’s fault (even though it is still justly some credit to Duncan if you do, since we are still standing on the shoulders of his and many others’ past work).
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-22T04:12:03.743Z · LW(p) · GW(p)
On reading Anna's above answer (which seems true to me, and also satisfies a lot of the curiosity I was experiencing, in a good way), I noted a feeling of something like "reading this, the median LWer will conclude that my contribution was primarily just ops-y and logistical, and the main thing that was at threat when I left was that the machine surrounding the intellectual work would get rusty."
It seems worth noting that my model of CFAR (subject to disagreement from actual CFAR) is viewing that stuff as a domain of study, in and of itself—how groups cooperate and function, what makes up things like legibility and integrity, what sorts of worldview clashes are behind e.g. people who think it's valuable to be on time and people who think punctuality is no big deal, etc.
But this is not necessarily something super salient in the median LWer's model of CFAR, and so I imagine the median LWer thinking that Anna's comment means my contributions weren't intellectual or philosophical or relevant to ongoing rationality development, even though I think Anna-and-CFAR did indeed view me as contributing there, too (and thus the above is also saying something like "it turned out Duncan's disappearance didn't scuttle those threads of investigation").
Replies from: AnnaSalamon, ChristianKl↑ comment by AnnaSalamon · 2019-12-22T04:25:33.617Z · LW(p) · GW(p)
I agree very much with what Duncan says here. I forgot I need to point that kind of thing out explicitly. But a good bit of my soul-effort over the last year has gone into trying to inhabit the philosophical understanding of the world that can see as possibilities (and accomplish!) such things as integrity, legibility, accountability, and creating structures that work across time and across multiple people. IMO, Duncan had a lot to teach me and CFAR here; he is one of the core models I go to when I try to understand this, and my best guess is that it is in significant part his ability to understand and articulate this philosophical pole (as well as to do it himself) that enabled CFAR to move from the early-stage pile of un-transferrable "spaghetti code" that we were when he arrived, to an institution with organizational structure capable of e.g. hosting instructor trainings and taking in and making use of new staff.
↑ comment by ChristianKl · 2019-12-22T21:22:10.806Z · LW(p) · GW(p)
Reading this I'm curious about what the actual CFAR position on punctuality was before and now. Was it something like the Landmark package under your tenure?
↑ comment by Unnamed · 2019-12-22T08:28:32.024Z · LW(p) · GW(p)
(This is Dan, from CFAR since June 2012)
These are more like “thoughts sparked by Duncan’s post” rather than “thoughts on Duncan’s post”. Thinking about the question of how well you can predict what a workshop experience will be like if you’ve been at a workshop under different circumstances, and looking back over the years...
In terms of what it’s like to be at a mainline CFAR workshop, as a first approximation I’d say that it has been broadly similar since 2013. Obviously there have been a bunch of changes since January 2013 in terms of our curriculum, our level of experience, our staff, and so on, but if you’ve been to a mainline workshop since 2013 (and to some extent even before then), and you’ve also had a lifetime full of other experiences, your experience at that mainline workshop seems like a pretty good guide to what a workshop is like these days. And if you haven’t been to a workshop and are wondering what it’s like, then talking to people who have been to workshops since 2013 seems like a good way to learn about it.
More recent workshops are more similar to the current workshop than older ones. The most prominent cutoff that comes to mind for more vs. less similar workshops is the one I already mentioned (Jan 2013) which is the first time that we basically understood how to run a workshop. The next cutoff that comes to mind is January 2015, which is when the current workshop arc & structure clicked into place. The next is July 2019, which is the second workshop which was run by something like the current team and the first one where we hit our stride (it was also the first one after we started this year's instructor training, which I think helped with hitting our stride). And after that is sometime in 2016 I think when the main classes reached something resembling their current form.
Besides recency, it’s also definitely true that the people at the workshop bring a different feel to it. European workshops have a different feel than US workshops because so many of the people there are from somewhat different cultures. Each staff member brings a different flavor - we try to have staff who approach things in different ways, partly in order to span more of the space of possible ways that it can look like to be engaging with this rationality stuff. The workshop MC (which was generally Duncan’s role while he was involved) does impart more of their flavor on the workshop than most people, although for a single participant their experience is probably shaped more by whichever people they wind up connecting with the most and that can vary a lot even between participants at the same workshop.
↑ comment by Timothy Telleen-Lawton (erratim) · 2019-12-22T05:18:33.397Z · LW(p) · GW(p)
What I get from Duncan’s FB post is (1) an attempt to disentangle his reputation from CFAR’s after he leaves, (2) a prediction that things will change due to his departure, and (3) an expression of frustration that more of his knowledge than necessary will be lost.
- It's a totally reasonable choice.
- At the time I first saw Duncan’s post I was more worried about big changes to our workshops from losing Duncan than I have observed since then. A year later I think the change is actually less than one would expect from reading Duncan’s post alone. That doesn’t speak to the cost of not having Duncan—since filling in for his absence means we have less attention to spend on other things, and I believe some things Duncan brought have not been replaced.
- I am also sad about this, and believe that I was the person best positioned to have caused a better outcome (smaller loss of Duncan’s knowledge and values). In other words I think Duncan’s frustration is not only understandable, but also pointing at a true thing.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-22T05:31:28.080Z · LW(p) · GW(p)
(I expect the answer to 2 will still be the same from your perspective, after reading this comment, but I just wanted to point out that not all influences of a CFAR staff member cash out in things-visible-in-the-workshop; the part of my FB post that you describe as 2 was about strategy and research and internal culture as much as workshop content and execution. I'm sort of sad that multiple answers have had a slant that implies "Duncan only mattered at workshops/Duncan leaving only threatened to negatively impact workshops.")
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T09:50:51.277Z · LW(p) · GW(p)
I'd be curious for an answer to this one too, actually.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T04:09:18.391Z · LW(p) · GW(p)
To be honest I haven't noticed much change, except obviously for the literal absence of Duncan (which is a very noticeable absence; among other things Duncan is an amazing teacher, imo better than anyone currently on staff).
comment by Ben Pace (Benito) · 2019-12-20T03:00:28.417Z · LW(p) · GW(p)
At this point, you guys must have sat down with 100s of people for 1000s of hours of asking them how their mind works, prodding them with things, and seeing how they turn out like a year later. What are some things about how a person thinks that you tend to look out for as especially positive (or negative!) signs, in terms of how likely they are in the future to become more agentic? (I'd be interested in concrete things rather than attempts to give comprehensive-yet-vague answers.)
comment by mingyuan · 2019-12-19T22:09:15.647Z · LW(p) · GW(p)
I've heard a lot of people say things along the lines that CFAR "no longer does original research into human rationality." Does that seem like an accurate characterization? If so, why is it the case that you've moved away from rationality research?
Replies from: BrienneYudkowsky, AnnaSalamon, adam_scholl↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-20T22:16:29.654Z · LW(p) · GW(p)
Hello, I am a CFAR contractor who considers nearly all of their job to be “original research into human rationality”. I don’t do the kind of research many people imagine when they hear the word “research” (RCT-style verifiable social science, and such). But I certainly do systematic inquiry and investigation into a subject in order to discover or revise beliefs, theories, applications, etc. Which is, you know, literally the dictionary.com definition of research.
I’m not very good at telling stories about myself, but I’ll attempt to describe what I do during my ordinary working hours anyway.
All of the time, I keep an eye out for things that seem to be missing or off in what I take to be the current art of rationality. Often I look to what I see in the people close to me, who are disproportionately members of rationality-and-EA-related organizations, watching how they solve problems and think through tricky stuff and live their lives. I also look to my colleagues at CFAR, who spend many many hours in dialogue with people who are studying rationality themselves, for the first time or on a continuing basis. But since my eyes are in my own head, I look most for what is absent in my own personal art of rationality.
For example, when I first read the Sequences in 2012 or 2013, I gained a lot, but I also felt a gaping hole in the shape of something like “recognizing those key moments in real-life experience when the rationality stuff you’ve thought so much about comes whizzing by your head at top speed, looking nothing at all like the abstractions you’ve so far considered”. That’s when I started doing stuff like snapping my fingers every time I saw a stop sign, so I could get a handle on what “noticing” even is, and begin to fill in the hole. I came up with a method of hooking intellectual awareness up to immediate experience, then I spent a whole year throwing the method at a whole bunch of real life situations, keeping track of what I observed, revising the method, talking with people about it as they worked with the same problem themselves, and generally trying to figure out the shape of the world around phenomenology and trigger-action planning.
I was an occasional guest instructor with CFAR at the time, and I think that over the course of my investigations, CFAR went from spending very little time on the phenomenological details of key experiences to working that sort of thing into nearly every class. I think it’s now the case that rationality as it currently exists contains an “art of noticing”.
My way of investigating always pushes into what I can’t yet see or grasp or articulate. Thus, it has the unfortunate property of being quite difficult to communicate about directly until the research program is mostly complete. So I can say a lot about my earlier work on noticing, but talking coherently about what exactly CFAR’s been paying me for lately is much harder. It’s all been the same style of research, though, and if I had to give names to my recent research foci, I’d say I’ve been looking into original seeing, some things related to creativity and unconstrained thought, something about learning and what it means to own your education, and experiences related to community and cooperation.
It’s my impression that CFAR has always had several people doing this kind of thing, and that several current CFAR staff members consider it a crucial part of their jobs as well. When I was hired, Tim described research as “the beating heart” of our organization. Nevertheless, I personally would like more of it in future CFAR, and I’d like it to be done with a bit more deliberate institutional support.
That’s why it was my primary focus when working with Eli to design our 2019 instructor training program. The program consisted partially of several weekend workshops, but in my opinion the most important part happened while everyone was at home.
My main goal, especially for the first weekend, was to help the trainees choose a particular area of study. It was to be something in their own rationality that really mattered to them and that they had not yet mastered. When they left the workshop, they were to set off on their own personal quest to figure out that part of the world and advance the art.
This attitude, which we’ve been calling “questing” of late, is the one with which I hope CFAR instructors will approach any class they intend to teach, whether it’s something like “goal factoring” that many people have taught in the past, or something completely new that nobody’s even tried to name yet. When you really get the hang of the questing mentality, you never stop doing original rationality research. So to whatever degree I achieved my goal with instructor training (which everyone seems to think is a surprisingly large degree), CFAR is moving in the direction of more original rationality research, not less.
Replies from: Grue_Slinky↑ comment by Grue_Slinky · 2019-12-21T12:51:14.390Z · LW(p) · GW(p)
How do CFAR's research interests/priorities compare with LW's Open Problems in Human Rationality [LW · GW]? Based on Brienne and Anna's replies here, I suspect the answer is "they're pretty different", but I'd like to hear what accounts for this divergence.
Replies from: AnnaSalamon, Benito↑ comment by AnnaSalamon · 2019-12-24T21:03:58.650Z · LW(p) · GW(p)
I quite like the open questions that Wei Dai wrote there, and I expect I'd find progress on those problems to be helpful for what I'm trying to do with CFAR. If I had to outline the problem we're solving from scratch, though, I might say:
- Figure out how to:
- use reason (and stay focused on the important problems, and remember “virtue of the void” and “lens that sees its own flaws”, and be quick where you can) without
- going nutso, or losing humane values, and while:
- being able to coordinate well in teams.
Wei Dai’s open problems feel pretty relevant to this!
I think in practice this goal leaves me with subproblems such as:
- How do we un-bottleneck “original seeing [LW · GW]” / hypothesis-generation;
- What is the “it all adds up to normality” skill based in; how do we teach it;
- Where does “mental energy” come from in practice, and how can people have good relationships to this;
- What’s up with people sometimes seeming self-conscious/self-absorbed (in an unfortunate, slightly untethered way) and sometimes seeming connected to “something to protect” outside themselves?
- It seems to me that “something to protect” makes people more robustly mentally healthy. Is that true? If so why? Also how do we teach it?
- Why is it useful to follow “spinning plates” (objects that catch your interest for their own sake) as well as “hamming questions”? What’s the relationship between those two? (I sort of feel like they’re two halves of the same coin somehow? But I don’t have a model.)
- As well as more immediately practical questions such as: How can a person do “rest days” well. What ‘check sums’ are useful for noticing when something breaks as you’re mucking with your head. Etc.
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T17:51:20.547Z · LW(p) · GW(p)
I'm not sure I understand what you mean by "something to protect." Can you give an example?
[Answered by habryka]
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-12-25T18:04:44.037Z · LW(p) · GW(p)
Presumable it's a reference to: https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect [LW · GW]
Replies from: howie-lempel↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T18:16:58.981Z · LW(p) · GW(p)
Thanks! forgot about that post.
↑ comment by Ben Pace (Benito) · 2019-12-21T22:24:24.245Z · LW(p) · GW(p)
If Brienne wanted to give their own answer to that post, even if it was incomplete, I'd be very excited about that.
Replies from: BrienneYudkowsky↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-26T20:26:33.684Z · LW(p) · GW(p)
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-26T20:43:55.377Z · LW(p) · GW(p)
Hurrah! :D
↑ comment by AnnaSalamon · 2019-12-20T22:12:01.219Z · LW(p) · GW(p)
My model is that CFAR is doing the same activity it was always doing, which one may or may not want to call “research”.
I’ll describe that activity here. I think it is via this core activity (plus accidental drift, or accidental hill-climbing in response to local feedbacks) that we have generated both our explicit curriculum, and a lot of the culture around here.
Components of this core activity (in no particular order):
1. We try to teach specific skills to specific people, when we think those skills can help them. (E.g. goal-factoring; murphyjitsu; calibration training on occasion; etc.)
2. We keep our eyes open while we do #1. We try to notice whether the skill does/doesn’t match the student’s needs. (E.g., is this so-called “skill” actually making them worse at something that we or they can see? Is there a feeling of non-fit suggesting something like that? What’s actually happening as the “skill” gets “learned”?)
    - We call this noticing activity “seeking PCK” and spend a bunch of time developing it in our mentors and instructors.
3. We try to stay in touch with some of our alumni after the workshop, and to notice what the long-term impacts seem to be. (Are they actually practicing our so-called “skills”? Does it help when they do? More broadly, what changes do we just-happen-by-coincidence to see in multiple alumni again and again, and are these positive or negative changes, and what might be causing them?)
    - In part, we do this via the four follow-up calls that participants receive after they attend the mainline workshop; in part we do it through the alumni reunions, the kind of contact that comes naturally from being in the same communities, etc.
    - We often describe some of what we think we’re seeing, and speculate about where to go given that, in CFAR’s internal colloquium.
    - We pay particular attention to alumni who are grappling with existential risk or EA, partly because it seems to pose distinct difficulties that it would be nice if someone found solutions to.
4. Spend a bunch of time with people who are succeeding at technical AI safety work, trying to understand what skills go into that. Spend a bunch of time with people who are training to do technical AI safety work (often at the same time that people who can actually do such work are there), working to help transfer useful mindset (while trying also to pay attention to what’s happening).
    - Right now we do this mostly at the AIRCS and MSFP workshops.
5. Spend a bunch of time engaging smart new people to see what skills/mindsets they would add to the curriculum, so we don’t get too stuck in a local optimum.
    - What this looks like recently:
        - The instructor training workshops are helping us with this. Many of us found those workshops pretty generative, and are excited about the technique-seeds and cultural content that the new instructor candidates have been bringing.
        - The AIRCS program has also been bringing in highly skilled computer scientists, often from outside the rationality and EA community. My own thinking has changed a good bit in contact with the AIRCS experience. (They are less explicitly articulate about curriculum than the instructor candidates; but they ask good questions, buy some pieces of our content, get wigged out by other pieces of our content in a non-random manner, and answer follow-up questions in ways that sometimes reveal implicit causal models of how to think that seem correct to me. And so they are a major force for AIRCS curriculum generation in that way.)
    - Gaps in 5:
        - I do wish we had better contact with more and varied highly productive thinkers/makers of different sorts, as a feed-in to our curriculum. We unfortunately have no specific plans to fix this gap in 2020 (and I don’t think it could fit without displacing some even-more-important planned shift -- we have limited total attention); but it would be good to do sometime over the next five years. I keep dreaming of a “writers’ workshop” and an “artists’ workshop” and so on, aimed at seeing how our rationality stuff mutates when it hits people with different kinds of visibly-non-made-up productive skill.
6. We sometimes realize that huge swaths of our curriculum are having unwanted effects and try to change them. We sometimes realize that our model of “the style of thinking we teach” is out of touch with our best guesses about what’s good, and try to change it.
7. We try to study any functional cultures that we see (e.g., particular functional computer science communities; particular communities found in history books), to figure out what magic was there. We discuss this informally, with friends, and with e.g. the instructor candidates.
8. We try to figure out how thinking ever correlates with the world, and when different techniques make this better or worse in different contexts. And we read the Sequences to remember that this is what we’re doing.
    - We could stand to do this one more; increasing this is a core planned shift for 2020. But we’ve always done it some, including over the last few years.
The “core activity” exemplified in the above list is, of course, not RCT-style verifiable track records-y social science (which is one common meaning of “research”). There is a lot of merit to that verifiable social science, but also a lot of slowness to it, and I cannot imagine using it to design the details of a curriculum, although I can imagine using it to see whether a curriculum has particular high-level effects.
We also still do some (but not as much as we wish we could do) actual data-tracking, and have plans to do modestly more of it over the coming year. I expect this planned modest increase will be useful for our broader orientation but not much of a direct feed-in into curriculum, although it might help us tweak certain knobs upward or downward a little.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-20T22:35:32.838Z · LW(p) · GW(p)
Also worth noting that there are a few different claims of the sort OP mentions that people make, I think. One thing people sometimes mean by this is “CFAR no longer does the sort of curriculum development which would be necessary to create an 'Elon Musk factory.'"
CFAR never had the goal of hugely amplifying the general effectiveness of large numbers of people (which I’m happy about, since I’m not sure achieving that goal would be good). One should not donate to CFAR in order to increase the chances of an Elon Musk factory.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2019-12-20T22:45:52.194Z · LW(p) · GW(p)
That is, we were always focused on high-intensity interventions for small numbers of people -- especially the people who are the very easiest to impact (have free time; smart and reflective; lucky in their educational background and starting position). We did not expect things to generalize to larger sets.
(Mostly. We did wonder about books and things for maybe impacting the epistemics (not effectiveness) of some larger number of people a small amount. And I do personally think that if there were ways to help with the general epistemics, wisdom, or sanity of larger sets of people, even if by a small amount, that would be worth meaningful tradeoffs to create. But we are not presently aiming for this (except in the broadest possible "keep our eyes open and see if we someday notice some avenue that is actually worth taking here" sense), and with the exception of helping to support Julia Galef's upcoming rationality book back when she was working here, we haven't ever attempted concrete actions aimed at figuring out how to impact larger sets of people.)
I agree, though, that one should not donate to CFAR in order to increase the chances of an Elon Musk factory.
Replies from: johnswentworth↑ comment by johnswentworth · 2019-12-20T23:08:57.290Z · LW(p) · GW(p)
Do you have any advice on who to donate to in order to increase the chances of an Elon Musk factory?
Replies from: mr-hire, adam_scholl↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T16:30:26.433Z · LW(p) · GW(p)
It seems like Paradigm Academy is trying to do something like create an Elon Musk factory.
But then again, so is Y Combinator, and every other incubator, as well as pretty much every leadership retreat (ok, maybe not the leadership retreats, because Elon Musk is a terrible leader, but they're trying to do something like create a factory for what people imagine Elon Musk to be like). It seems like a very competitive space to create an Elon Musk factory, because it's so economically valuable.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-24T23:17:19.779Z · LW(p) · GW(p)
because Elon Musk is a terrible leader
This is a drive-by, but I don't believe this statement, based on the fact that Elon has successfully accomplished several hard things via the use of people organized in hierarchies (companies). I'm sure he has foibles, and it might not be fun to work for him, but he does get shit done.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-25T00:14:00.238Z · LW(p) · GW(p)
I claim that Elon has done this despite his leadership abilities.
I think that it's possible to be a bad leader but an effective CEO.
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-25T21:44:13.889Z · LW(p) · GW(p)
It's unclear to me what exactly you mean by these terms. What do you mean by leadership, as compared to being a CEO?
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-26T23:42:58.041Z · LW(p) · GW(p)
Leadership (as for instance leadership retreats are trying to teach it) is the intersection between management and strategy.
Another way to put it: it's the discipline of getting people to do what's best for your organization.
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-27T10:04:14.831Z · LW(p) · GW(p)
Do you think that Elon doesn't get his employees to do what's best for his companies?
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-27T14:25:07.433Z · LW(p) · GW(p)
I think he's bad at this.
You can see this in some aspects of his companies.
High micromanagement. High turnover. Disgruntled former employees.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-20T23:42:21.921Z · LW(p) · GW(p)
I'm not aware of existing organizations that seem likely to me to create such a factory.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-20T23:49:22.246Z · LW(p) · GW(p)
I think that there are many rare and positive qualities of Musk that I try to emulate, and some rare qualities that are damaging and that I shouldn't emulate. Importantly, from many broad perspectives (like thinking that economic growth is a robust good) it's pretty weird to think that Elon Musk is bad. I presume you think Musk is pretty unilateralist and think that he probably did net damage with the building of OpenAI?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T02:37:34.790Z · LW(p) · GW(p)
I think Musk is impressive in many ways. I didn't really intend to express skepticism of him in particular, so much as of what might happen if one created loads more people as agenty as him. For example, I can easily imagine this accelerating capabilities progress relative to safety progress, which strikes me as bad.
comment by DanielFilan · 2019-12-19T21:28:02.471Z · LW(p) · GW(p)
What organisation, if it existed and ran independently of CFAR, would be the most useful to CFAR?
Replies from: AnnaSalamon, adam_scholl↑ comment by AnnaSalamon · 2019-12-22T03:36:55.917Z · LW(p) · GW(p)
I wish someone would create good bay area community health. It isn't our core mission; it doesn't relate all that directly to our core mission; but it relates to the background environment in which CFAR and quite a few other organizations may or may not end up effective.
One daydream for a small institution that might help some with this health is as follows:
- Somebody creates the “Society for Maintaining a Very Basic Standard of Behavior”;
- It has certain very basic rules (e.g. “no physical violence”; “no doing things that are really about as over the line as physical violence according to a majority of our anonymously polled members”; etc.)
- It has an explicit membership list of folks who agree to both: (a) follow these rules; and (b) ostracize from “community events” (e.g. parties to which >4 other society members are invited) folks who are in bad standing with the society (whether or not they personally think those members are guilty).
- It has a simple, legible, explicitly declared procedure for determining who has/hasn’t entered bad standing (e.g.: a majority vote of the anonymously polled membership of the society; or an anonymous vote of a smaller “jury” randomly chosen from the society). (A toy sketch of the jury variant appears just after the list of benefits below.)
Benefits I’m daydreaming might come from this institution:
- A. If the society had large membership, bad actors could be ostracized from larger sections of the community, and with more simplicity and less drama.
- B. Also, we could do that while imposing less restraint on individual speech, which would make the whole thing less creepy. Like, if many many people thought person B should be exiled, and person A wanted to defer but was not herself convinced, she could: (a) defer explicitly, while saying that’s what she was doing; and meanwhile (b) speak her mind without worrying that she would destabilize the community’s ability to ever coordinate.
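For concreteness, here is a minimal sketch of the jury variant of the procedure described above. It is purely illustrative: the member names, jury size, and simple-majority threshold are assumptions, not a proposed implementation.

```python
import random

# Toy sketch of the "jury" procedure: pick a random jury from the membership,
# collect anonymous yes/no votes, and rule by simple majority.
# Member names, jury size, and the threshold are illustrative assumptions.

def select_jury(members, jury_size, seed=None):
    rng = random.Random(seed)
    return rng.sample(members, jury_size)

def in_bad_standing(votes):
    """votes: list of booleans (True = 'this person broke the basic rules')."""
    return sum(votes) > len(votes) / 2  # simple majority

members = ["alice", "bob", "carol", "dave", "erin", "frank", "grace"]
jury = select_jury(members, jury_size=5, seed=0)
votes = [True, True, False, True, False]  # gathered anonymously in practice
print(jury, "->", "bad standing" if in_bad_standing(votes) else "good standing")
```

The only point of the sketch is that the procedure can be made this legible: who votes, how they are chosen, and what threshold counts as a ruling are all explicit and declared in advance.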
↑ comment by mako yass (MakoYass) · 2019-12-23T03:43:52.776Z · LW(p) · GW(p)
Why aren't there Knowers of Character who Investigate all Incidents Thoroughly Enough for The Rest of The Community to Defer To, already? Isn't that a natural role that many people would like to play?
Is it just that the community hasn't explicitly formed consensus that the people who're already very close to being in that role can be trusted, and forming that consensus takes a little bit of work?
Replies from: AnnaSalamon, deluks917, drethelin↑ comment by AnnaSalamon · 2019-12-25T02:04:01.100Z · LW(p) · GW(p)
No; this would somehow be near-impossible in our present context in the bay, IMO; although Berkeley's REACH center and REACH panel are helpful here and solve part of this, IMO.
↑ comment by sapphire (deluks917) · 2019-12-30T06:11:30.023Z · LW(p) · GW(p)
I would have a lot of trust in a vote. I seriously doubt we as a community would agree on a set of knowers I would trust. Also, some similar ideas have been tried and went horribly in at least some cases (e.g. the alumni dispute resolution council system). It is much harder for bad actors to subvert a vote than to subvert a small number of people.
↑ comment by drethelin · 2019-12-30T23:17:50.517Z · LW(p) · GW(p)
I believe the reason why is that knowing everyone in the community would literally be a full-time job and no one wants to pay for that.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2020-01-11T19:21:22.265Z · LW(p) · GW(p)
No; that isn't the trouble; I could imagine us getting the money together for such a thing, since one doesn't need anything like a consensus to fund a position. The trouble is more that at this point the members of the bay area {formerly known as "rationalist"} "community" are divided into multiple political factions, or perhaps more-chaos-than-factions, which do not trust one another's judgment (even about pretty basic things, like "yes, this person's actions are outside of reasonable behavioral norms"). It is very hard to imagine an individual or a small committee that people would trust in the right way. Perhaps even more so after that individual or committee tried ruling against someone who really wanted to stay, and that person attempted to create "fear, doubt, and uncertainty" or whatever about the institution that attempted to ostracize them.
I think something in this space is really important, and I'd be interested in investing significantly in any attempt that had a decent shot at helping. Though I don't yet have a strong enough read myself on what the goal ought to be.
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-01-11T21:41:23.351Z · LW(p) · GW(p)
For whatever it's worth, my sense is that it's actually reasonably doable to build an institution/process that does well here, and gets trust from a large fraction of the community, though it is by no means an easy task. I do think it would likely require more than one full-time person, and at least one person of pretty exceptional skill in designing processes and institutions (as well as general competence).
Replies from: Raemon↑ comment by Raemon · 2020-01-11T21:55:27.102Z · LW(p) · GW(p)
I think Anna roughly agrees (hence her first comment), she was just answering the question of "why hasn't this already been done?"
I do think adversarial pressure (i.e. if you rule against a person they will try to sow distrust against you and it's very stressful and time consuming) is a reason that "reasonably doable" isn't really a fair description. It's doable, but quite hard, and a big commitment that I think is qualitatively different from other hard jobs.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T12:40:38.570Z · LW(p) · GW(p)
CFAR relies heavily on selection effects for finding workshop participants. In general we do very little marketing or direct outreach, although AIRCS and MSFP do some of the latter; mostly people hear about us via word of mouth. This system actually works surprisingly (to me) well at causing promising people to apply.
But I think many of the people we would be most happy to have at a workshop probably never hear about us, or at least never apply. One could try fixing this with marketing/outreach strategies, but I worry this would disrupt the selection effects which I think have been a necessary ingredient for nearly all of our impact.
So I fantasize sometimes about a new organization being created which draws loads of people together, via selection effects similar to those which have attracted people to LessWrong, which would make it easier for us to find more promising people.
(I also—and this isn’t a wish for an organization, exactly, but it gestures at the kind of problem I speculate some organization could potentially help solve—sometimes fantasize about developing something like “scouts” at existing places with such selection effects. For example, a bunch of safety researchers competed in IMO/IOI when they were younger; I think it would be plausibly valuable for us to make friends with some team coaches, and for them to occasionally put us in touch with promising people).
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-21T21:06:28.442Z · LW(p) · GW(p)
What kind of people do you think never hear about CFAR but that you want to have at your workshops?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T23:53:47.959Z · LW(p) · GW(p)
I expect there are a bunch which never hear about us due to language barrier, and/or because they're geographically distant from most of our alumni. But I would be surprised if there weren't also lots of geographically-near, epistemically-promising people who've just never happened to encounter someone recommending a workshop.
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-22T20:13:03.902Z · LW(p) · GW(p)
It seems to me like being more explicit about what kind of people should be there would make it easier for other people to send them your way.
comment by DanielFilan · 2019-12-19T20:28:05.676Z · LW(p) · GW(p)
My impression is that CFAR has moved towards a kind of instruction where the goal is personal growth and increasing one's ability to think clearly about very personal/intuition-based matters, and puts significantly less emphasis on things like explicit probabilistic forecasting, which are probably less important but have objective benchmarks for success.
- Do you think that this is a fair characterisation?
- How do you think these styles of rationality should interact?
- How do you expect CFAR's relative emphasis on these styles of rationality to evolve over time?
↑ comment by Adam Scholl (adam_scholl) · 2019-12-20T18:24:36.091Z · LW(p) · GW(p)
I think it’s true that CFAR mostly moved away from teaching things like explicit probabilistic forecasting, and toward something else, although I would describe that something else differently—more like, skills relevant for hypothesis generation, noticing confusion, communicating subtle intuitions, updating on evidence about crucial considerations, and in general (for lack of a better way to describe this) “not going insane when thinking about x-risk.”
I favor this shift, on the whole, because my guess is that skills of the former type are less important bottlenecks for the problems CFAR is trying to help solve. That is, all else equal, if I could press a button to either make alignment researchers and the people who surround them much better calibrated, or much better at any of those latter skills, I’d currently press the latter button.
But I do think it’s plausible CFAR should move somewhat backward on this axis, at the margin. Some skills from the former category would be pretty easy to teach, I think, and in general I have some Kelly betting-ish inclination to diversify the goals of our curricular portfolio, in case our core assumptions are wrong.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-20T18:38:09.394Z · LW(p) · GW(p)
To be clear, this is not to say that those skills are bad, or even that they’re not an important part of rationality. More than half of the CFAR staff (at least 5 of the 7 current core staff, not counting myself, as a contractor) have personally trained their calibration, for instance.
In general, just because something isn’t in the CFAR workshop doesn’t mean that it isn’t an important part of rationality. The workshop is only 4 days, and not everything is well-taught in a workshop context (as opposed to [x] minutes of practice every day, for a year, or something like an undergraduate degree).
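For concreteness, here is a minimal sketch of what scoring oneself on explicit probabilistic forecasts can look like; this is the sense in which calibration has an "objective benchmark." The sketch is only an illustration of the general idea, not CFAR's actual calibration exercise, and the example forecasts, bucket widths, and use of the Brier score are assumptions made for the example.

```python
# Minimal sketch of calibration scoring for binary forecasts.
# The forecasts below are made up; only the Brier-score formula is standard.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (0 = perfect)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def calibration_table(forecasts, buckets=(0.5, 0.7, 0.9)):
    """Observed frequency of 'true' outcomes within each 0.2-wide confidence bucket."""
    table = {}
    for low in buckets:
        outcomes = [outcome for p, outcome in forecasts if low <= p < low + 0.2]
        if outcomes:
            table[low] = sum(outcomes) / len(outcomes)
    return table

# (probability assigned to "true", actual outcome as 1/0) -- illustrative data
forecasts = [(0.9, 1), (0.7, 1), (0.6, 0), (0.8, 1), (0.55, 0), (0.95, 1)]
print("Brier score:", round(brier_score(forecasts), 3))
print("Observed frequency by confidence bucket:", calibration_table(forecasts))
```

A well-calibrated forecaster's stated confidences roughly match the observed frequencies in each bucket, and lower Brier scores are better; unlike many of the subtler skills discussed in this thread, both numbers can be checked against reality.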
comment by habryka (habryka4) · 2019-12-20T01:21:16.125Z · LW(p) · GW(p)
Moderator note: I've deleted six comments on this thread by users Ziz and Gwen_, who appear to be the primary people responsible for barricading off last month's CFAR alumni reunion, and who were subsequently arrested on multiple charges, including false imprisonment.
I explicitly don't want to judge the content of their allegations against CFAR, but both Ziz and Gwen_ have a sufficient track record of being aggressive offline (and Ziz also online) that I don't really want them around on LessWrong or to provide a platform to them. So I've banned them for the next 3 months (until March 19th), during which I and the other moderators will come to a more long-term decision about what to do about all of this.
comment by Matt Goldenberg (mr-hire) · 2019-12-20T16:28:20.917Z · LW(p) · GW(p)
On the level of individual life outcomes, do you think CFAR outperforms other self help seminars like Tony Robbins, Landmark, Alethia, etc?
Replies from: adam_scholl, elityre↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T04:12:01.235Z · LW(p) · GW(p)
I think it would depend a lot on which sort of individual life outcomes you wanted to compare. I have basically no idea where these programs stand, relative to CFAR, on things like increasing participant happiness, productivity, relationship quality, or financial success, since CFAR mostly isn't optimizing for producing effects in these domains.
I would be surprised if CFAR didn't come out ahead in terms of things like increasing participants' ability to notice confusion, communicate subtle intuitions, and navigate pre-paradigmatic technical research fields. But I'm not sure, since in general I model these orgs as having sufficiently different goals than us that I haven't spent much time learning about them.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-22T14:23:33.123Z · LW(p) · GW(p)
I'm not sure, since in general I model these orgs as having sufficiently different goals than us that I haven't spent much time learning about them.
Note that as someone who has participated in many other workshops, and who is very well read in other self-help schools, I think this is a clear blind spot and misstep of CFAR.
I think you would have discovered many other powerful concepts for running effective workshops, and been significantly further along with rationality techniques, if you had taken these other organizations seriously as both competition and sources of knowledge, and had someone on staff who spent a significant amount of time simply stealing from existing schools of thought.
Replies from: elityre, adam_scholl↑ comment by Eli Tyre (elityre) · 2019-12-24T22:48:24.705Z · LW(p) · GW(p)
Well, there are a lot of things out there. Why did you promote these ones?
CFAR staff have done a decent amount of trawling through self-help space; in particular, people did investigations that turned up Focusing, Circling, and IFS. There have also been other things that people around here tried but didn't go much further with.
Granted, this is not a systematic investigation of the space of personal development stuff, but that seems less promising to me than people thinking about particular problems (often personal problems, or problems that they've observed in the rationality and EA communities) and investigating known solutions or attempted solutions that relate to those problems.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-25T00:56:44.436Z · LW(p) · GW(p)
Well, there are a lot of things out there. Why did you promote these ones?
I don't think these ones in particular, I listed these as some of the most popular ones.
Granted, this is not a systematic investigation of the space of personal development stuff, but that seems less promising to me than people thinking about particular problems (often personal problems, or problems that they've observed in the rationality and EA communities) and investigating known solutions or attempted solutions that relate to those problems.
I personally have gotten a lot out of a hybrid approach, where I find a problem, investigate the best relevant self-helpy solutions, then go down the rabbit hole of finding all the other things created by that person, and all of their sources, influences, and collaborators.
I suspect someone whose job it is to do this could serve a similar function to the "living library" role at MIRI (I'm not sure how exactly that worked for them, though).
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-25T05:24:14.427Z · LW(p) · GW(p)
then go down the rabbit hole of finding all the other things created by that person, and all of their sources, influences, and collaborators.
Oh. Yeah. I think this is pretty good. When someone does something particularly good, I do try to follow up on all their stuff.
And, I do keep track of the histories of the various lineages and where people came from and what influenced them. It's pretty interesting how many different things are descended from the same nodes.
But, you know, limited time. I don't follow up on everything.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-24T07:10:57.462Z · LW(p) · GW(p)
To be clear, others at CFAR have spent time looking into these things, I think; Anna might be able to chime in with details. I just meant that I haven't personally.
↑ comment by Eli Tyre (elityre) · 2019-12-22T06:12:11.440Z · LW(p) · GW(p)
I haven't done any of the programs you mentioned. And I'm pretty young, so my selection is limited. But I've done lots of personal development workshops and trainings, both before and after my CFAR workshop, and my CFAR workshop was far and away the densest in terms of content, and the most transformative for both my day-to-day processing and my life trajectory.
The only thing that compares are some dedicated, years long relationships with skilled mentors.
YMMV. I think my experience was an outlier.
Replies from: Unnamed↑ comment by Unnamed · 2019-12-22T08:30:41.401Z · LW(p) · GW(p)
(This is Dan from CFAR)
Warning: this sampling method contains selection effects.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-25T00:48:00.240Z · LW(p) · GW(p)
Hahahahah. Strong agree.
comment by Charlie Steiner · 2019-12-19T22:17:13.809Z · LW(p) · GW(p)
How much interesting stuff do you think there is in your curriculum that hasn't percolated into the community? What's stopping said percolation?
comment by jacobjacob · 2019-12-20T10:30:28.854Z · LW(p) · GW(p)
Is there something you find yourselves explaining over and over again in person, and that you wish you could just write up in an AMA once and for all where lots of people will read it, and where you can point people to in future?
comment by gilch · 2019-12-20T05:34:42.089Z · LW(p) · GW(p)
Does CFAR "eat its own dogfood"? Do the cognitive tools help in running the organization itself? Can you give concrete examples? Are you actually outperforming comparable organizations on any obvious metric due to your "applied rationality"? (Why ain'tcha rich? Or are you?)
Replies from: luke-raskopf, BrienneYudkowsky, adam_scholl, Duncan_Sabien, elityre↑ comment by Luke Raskopf (luke-raskopf) · 2019-12-21T03:13:17.975Z · LW(p) · GW(p)
A response to just the first three questions. I’ve been at CFAR for two years (since January 2018). I've noticed, especially during the past 2-3 months, that my mind is changing. Compared to a year, or even 6 months ago, it seems to me that my mind more quickly and effortlessly moves in some ways that are both very helpful and resemble some of the cognitive tools we offer. There’s obviously a lot of stuff CFAR is trying to do, and a lot of techniques/concepts/things we offer and teach, so any individual’s experience needs to be viewed as part of a larger whole. With that context in place, here are a few examples from my life and work:
- Notice the person I'm talking to is describing a phenomenon but I can't picture anything —> Ask for an example (Not a technique we formally teach at the workshop, but seems to me like a basic application of being specific [LW · GW]. In that same vein: while teaching or explaining a concept, I frequently follow a concrete-abstract-concrete structure.)
- I'm making a plan —> Walk through it, inner sim / murphyjitsu style (I had a particularly vivid and exciting instance of this a few weeks ago: I was packing my bag for a camping trip and found myself, without having explicitly set the intention to do so, simulating the following day with an eye for what I would need. I noticed that I would need my camping spork, which I hadn't thought of before, and packed it! Things like this happen regularly when planning workshops and doing my day-to-day CFAR work, in addition to my personal life.)
- I’m willing to make trades of money for time, like paying for Ubers or shorter flights or services, etc. I used to have a pretty rigid deontological rule for myself against ever spending money when I “didn’t need to,” which I believe now was to my own detriment. I think internalizing Units of Exchange and Goal Factoring and sunk cost and willingness to pay led me to a) more clearly see how much I value my time, and b) acquire felt “permission” (from myself) to make trades that previously seemed weird or unacceptable, to great benefit to myself and others. I notice this shift when I’m doing ops work with people outside of CFAR and they say something like, “No, it’s just impossible to carpet the entire floor. The venue doesn’t want us to, and besides it would cost a ton of money.” And I say something like, “Well, we really value having a cozy, welcoming, comfortable space for participants, and we’d probably be willing to pay quite a bit for that. What if we had $2k to spend on it--could we do it then?” and they say, “What… I mean, I guess…” Or I’m talking to a friend about paying for membership in a pickup basketball league and I say, “So I might miss one or two of the five games, but I’d gladly pay the full-season price of $130 to play even just two or three really good pickup games, so I’m definitely in.” and he responds with something like, “Huh, well I didn’t think about it like that but I guess it’s worth that much to me…” I feel excited at what seems to me like more freedom for myself in this area. Some good dogfood.
- Something needs doing in the org and I have a vision for how to do it —> Just create the 80-20 version of my proposal in 5 minutes and run with that. (This one is insane. It’s wild to me how many things fall through the cracks or never happen because the only thing missing was one enterprising soul saying, “Hey, I spent 5 minutes on this shitty first draft version — anyone have feedback or thoughts?” so people could tear it apart and improve it. I had an experience last week of creating a system for making reference check phone calls and just watching myself with pride and satisfaction; like, “Whoa, I’m just building this out of nothing”. There’s something here that’s general, what I think is meant by “agency,” what I’d call “self-efficacy”--the belief, the felt belief, that you are one of the people who can just build things and make things happen, and here you go just doing it. That seems to me to be one of the best pieces of dogfood I’ve eaten at CFAR. It’s an effect that’s tricky to measure/quantify, but we think there’s good reason to believe it’s there).
- I’m in a meeting and I hear, “Yeah, so let’s definitely take X action” and then silence or a change of topic —> Say, “Okay, so what’s going to happen next? Who’s owning this and when are we going to check in about whether it happened?” (Also insane. I’m shocked by the number of times that I have said “Oh no, we totally thought that was a good idea but never made sure it happened” and the only thing required to save the situation was the simple question above. This happens a lot less nowadays; one time a few weeks ago, both Adam and I had this exact reaction simultaneously in a meeting. It was awesome.)
- Empiricism/tracking/something. Every time I go to the YMCA to swim, I log the time that I walk into the building and the time I walk out of the building. I started doing this because my belief that “I can get in and out for a swim in less than an hour” has persisted for months, in the face of consistent evidence that it actually just takes an hour or more every time (gotta get changed, shower, swim, shower, get changed--it adds up!), and has caused me stress and frustration. Earlier this year, Brienne and I spent some time working on noticing skills, working that into the mainline curriculum and our instructor training curriculum, as well as practicing in daily life. So I decided not to try to do anything different, just to observe--give myself a detailed, clear, entirely truthful picture of exactly how long I take at the Y. In the two months I’ve been doing this tracking, the average amount of time I spend in the Y has dropped by about 15 minutes. My perspective on “how much time it takes me to swim” has changed, too; from “God dammit, I’m taking too long but I really want to take my time working out!” to “Sometimes I want to have a nice, slow workout, sit in the hottub and take a while. Sometimes I want to move quickly so I can get shit done. I have the capacity to do both of those.” I care a little about the time, and a lot about the satisfaction, self-efficacy, and skill that came from giving myself the gift of more time simply by allowing some part of my system to update and change my behavior as a consequence of consistently providing myself with a clearer picture of this little bit of territory. That’s some sweet dog food.
Hope this gives you something to chew on ;)
↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-22T00:14:24.332Z · LW(p) · GW(p)
(Just responding here to whether or not we dogfood.)
I always have a hard time answering this question, and nearby questions, personally.
Sometimes I ask myself whether I ever use goal factoring, or seeking PCK, or IDC, and my immediate answer is “no”. That’s my immediate answer because when I scan through my memories, almost nothing is labeled “IDC”. It’s just a continuous fluid mass of ongoing problem solving full of fuzzy inarticulate half-formed methods that I’m seldom fully aware of even in the moment.
A few months ago I spent some time paying attention to what’s going on here, and what I found is that I’m using either the mainline workshop techniques, or something clearly descended from them, many times a day. I almost never use them on purpose, in the sense of saying “now I shall execute the goal factoring algorithm” and then doing so. But if I snap my fingers every time I notice a feeling of resolution and clarification about possible action, I find that I snap my fingers quite often. And if, after snapping my fingers, I run through my recent memories, I tend to find that I’ve just done goal factoring almost exactly as it’s taught in mainlines.
This, I think, is what it’s like to fully internalize a skill.
I’ve noticed the same sort of thing in my experience of CFAR’s internal communication as well. In the course of considering our answers to some of these questions, for example, we’ve occasionally run into disagreements with each other. In the moment, my impression was just that we were talking to each other sensibly and working things out. But if I scan through a list of CFAR classes as I recall those memories, I absolutely recognize instances of inner sim, trigger-action planning, againstness, goal factoring, double crux, systemization, comfort zone exploration, internal double crux, pedagogical content knowledge, Polaris, mundanification, focusing, and the strategic level, at minimum.
At one point when discussing the topic of research I said something like, “The easiest way for me to voice my discomfort here would involve talking about how we use words, but that doesn’t feel at all cruxy. What I really care about is [blah]”, and then I described a hypothetical world in which I’d have different beliefs and priorities. I didn’t think of myself as “using double crux”, but in retrospect that is obviously what I was trying to do.
I think techniques look and feel different inside a workshop vs. outside in real life. So different, in fact, that I think most of us would fail to recognize almost every example in our own lives. Nevertheless, I’m confident that CFAR dogfoods continuously.
↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-22T00:16:22.634Z · LW(p) · GW(p)
So, is CFAR rich?
I don’t really know, because I’m not quite sure what CFAR’s values are as an organization, or what its extrapolated volition would count as satisfaction criteria.
My guess is “not much, not yet”. According to what I think it wants to do, it seems to me like its progress on that is small and slow. It seems pretty disorganized and flaily much of the time, not great at getting the people it most needs, and not great at inspiring or sustaining the best in the people it has.
I think it’s *impressively successful* given how hard I think the problem really is, but in absolute terms, I doubt it’s succeeding enough.
If it weren’t dogfooding, though, it seems to me that CFAR would be totally non-functional.
Why would it be totally non-functional? Well, that’s really hard for me to get at. It has something to do with what sort of thing a CFAR even is, and what it’s trying to do. I *do* think I’m right about this, but most of the information hasn’t made it into the crisp kinds of thoughts I can see clearly and make coherent words about. I figured I’d just go ahead and post this anyhow, and y'all can make or not-make what you want of my intuitions.
Replies from: BrienneYudkowsky↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-22T00:17:52.876Z · LW(p) · GW(p)
More about why CFAR would be non-functional if it weren’t dogfooding:
As I said, my thoughts aren’t really in such a state that I know how to communicate them coherently. But I’ve often found that going ahead and communicating incoherently can nevertheless be valuable; it lets people’s implicit models interact more rapidly (both between people and within individuals), which can lead to developing explicit models that would otherwise have remained silent.
So, when I find myself in this position, I often throw a creative prompt to the part of my brain that thinks it knows something, and don’t bother trying to be coherent, just to start to draw out the shape of a thing. For example, if CFAR were a boat, what sort of boat would it be?
If CFAR were a boat, it would be a collection of driftwood bound together with twine. Each piece of driftwood was yanked from the shore in passing when the boat managed to get close enough for someone to pull it in. The riders of the boat are constantly re-organizing the driftwood (while standing on it), discarding parts (both deliberately and accidentally), and trying out variations on rudders and oars and sails. All the while, the boat is approaching a waterfall, and in fact the riders are not trying to make a boat at all, but rather an airplane.
The CFAR techniques are first of all the driftwood pieces themselves, and are also ways of balancing atop something with no rigid structure, of noticing when the raft is taking on water, of coordinating about which bits of driftwood ought to be tied to which other bits, and of continuing to try to build a plane when you’d rather forget the waterfall and go for a swim.
Which, if I had to guess, is an impressionistic painting depicting my concepts around an organization that wants to bootstrap an entire community into equalling the maybe impossible task of thinking well enough to survive x-risk.
This need to quickly bootstrap patterns of thought and feeling, not just of individual humans but of far-flung assortments of people, is what makes CFAR’s problem so hard, and its meager success thus far so impressive to me. It doesn’t have the tools it needs to efficiently and reliably accomplish the day-to-day tasks of navigation and not sinking and so forth, so it tries to build them by whatever means it can manage in any given moment.
It’s a shitty boat, and an even shittier plane. But if everyone on it were just passively riding the current, rather than constantly trying to build the plane and fly, the whole thing would sink well before it reached the waterfall.
↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T03:50:35.899Z · LW(p) · GW(p)
I think we eat our own dogfood a lot. It’s pretty obvious in meetings—e.g., people do Focusing-like moves to explain subtle intuitions, remind each other to set TAPs, do explicit double cruxing, etc.
As to whether this dogfood allows us to perform better—I strongly suspect so, but I’m not sure what legible evidence I can give about that. It seems to me that CFAR has managed to have a surprisingly large (and surprisingly good) effect on AI safety as a field, given our historical budget and staff size. And I think there are many attractors in org space (some fairly powerful) that would have made CFAR less impactful, had it fallen into them, that it’s avoided falling into in part because its staff developed unusual skill at noticing confusion and resolving internal conflict.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T10:33:42.484Z · LW(p) · GW(p)
I'm reading the replies of current CFAR staff with great interest (I'm a former staff member who ended work in October 2018), as my own experience within the org was "not really; to some extent yes, in a fluid and informal way, but I rarely see us sitting down with pen and paper to do explicit goal factoring or formal double crux, and there's reasonable disagreement about whether that's good, bad, or neutral."
Replies from: erratim↑ comment by Timothy Telleen-Lawton (erratim) · 2019-12-21T19:32:48.375Z · LW(p) · GW(p)
All of these answers so far (Luke, Adam, Duncan) resonate for me.
I want to make sure I’m hearing you right though, Duncan. Putting aside the ‘yes’ or ‘no’ of the original question, do the scenes/experiences that Luke and Adam describe match what you remember from when you were here?
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T19:39:58.346Z · LW(p) · GW(p)
They do. The distinction seems to me to be something like endorsement of a "counting up" strategy/perspective versus endorsement of a "counting down" one, or reasonable disagreement about which parts of the dog food are actually beneficial to eat at what times versus which ones are Goodharting or theater or low payoff or what have you.
↑ comment by Eli Tyre (elityre) · 2023-05-28T01:55:41.229Z · LW(p) · GW(p)
I wrote the following comment during this AMA back in 2019, but didn't post it because of the reasons that I note in the body of the comment.
I still feel somewhat unsatisfied with what I wrote. I think something about the tone feels wrong, or gives the wrong impression, somehow. Or maybe this only presents part of the story. But it still seems better to say aloud than not.
I feel more comfortable posting it now, since I'm currently early in the process of attempting to build an organization / team that does meet these standards. In retrospect, I think probably it would have been better if I had just posted this at the time, and hashed out some disagreements with others in the org in this thread.
(In some sense this comment is useful mainly as bit of a window into the kind of standards that I, personally, hold a rationality-development / training organization to.)
My original comment is reproduced verbatim below (plus a few edits for clarity).
I feel trepidation about posting this comment, because it seems in bad taste to criticize a group, unless one is going to step up and do the legwork to fix the problem [LW · GW]. This is one of the top 5 things that bothers me about CFAR, and maybe I will step up to fix it at some point, but I’m not doing that right now and there are a bunch of hard problems that people are doing diligent work to fix. Criticizing is cheap. Making things better is hard.
[edit 2023: I did run a year-long CFAR instructor training that was explicitly designed to take steps on this class of problems, though. It is not as if I was just watching from the sidelines. But shifting the culture of even a small org, especially from a non-executive role, is pretty difficult, and my feeling is that I made real progress in the direction that I wanted, but only about one twentieth of the way to what I would think is appropriate.]
My view is that CFAR does not meaningfully eat its own dogfood, or at least doesn’t do so enough, and that this hurts the organization’s ability to achieve its goals.
This is not to contradict the anecdotes that others have left here, which I think are both accurate presentations, and examples of good (even inspiring) actions. But while some members of CFAR do have personal practices (with varying levels of “seriousness”) in correct thought and effective action, CFAR, as an institution, doesn’t really make much use of rationality. I resonate strongly with Duncan’s comment about counting up vs. counting down.
More specific data, both positive and negative:
- CFAR did spend some 20 hours of staff meeting time Circling in 2017, separately from a ~50 hour CFAR circling retreat that most of the staff participated in, and various other circling events that CFAR staff attended together (but were not “run by CFAR”).
- I do often observe people doing Focusing moves and Circling moves in meetings.
- I have observed occasional full explicit Double Crux conversations on the order of three or four times a year.
- I frequently (on the order of once every week or two) observe CFAR staff applying the Double Crux moves (offering cruxes, crux checking, operationalizing, playing the Thursday-Friday game) in meetings and in conversation with each other.
- Group goal-factoring has never happened, to the best of my knowledge, even though there are a number of things that happen at CFAR that seem very inefficient, seem like “shoulds”, or are frustrating / annoying to at least one person [edit 2023: these are explicit triggers for goal factoring]. I can think of only one instance in which two of us (Tim and I, specifically) tried to goal-factor something (a part of meetings that some of us hate).
- We’ve never had an explicit group pre-mortem, to the best of my knowledge. There is the occasional two-person session of simulating a project (usually a workshop or workshop activity), and the ways in which it goes wrong. [edit 2023: Anna said that she had participated in many long-form postmortems regarding hiring in particular, when I sent her a draft of this comment in 2019.]
- There is no infrastructure for tracking predictions or experiments. Approximately, CFAR as an institution doesn’t really run [formal] experiments, at least experiments with results that are tracked by anything other than the implicit intuitions of the staff. [edit 2023: some key features of a "formal experiment" as I mean it are writing down predictions in advance, and having a specific end date at which the group reviews the results. This is in contrast to simply trying new ideas sometimes.] (A minimal illustrative sketch of such tracking appears just after this list.)
- There is no explicit processes for iterating on new policies or procedures (such as iterating on how meetings are run).
- [edit 2023: An example of an explicit process for iterating on policies and procedures is maintaining a running document for a particular kind of meeting. Every time you have that kind of meeting, you start by referring to the notes from the last session. You try some specific procedural experiments, and then end the meeting with five minutes of reflection on what worked well or poorly, and log those in the document. This way you are explicitly trying new procedures and capturing the results, instead finding procedural improvements mainly by stumbling into them, and often forgetting improvements rather than integrating and building upon them. I use documents like this for my personal procedural iteration.
Or in Working Backwards, the authors describe not just organizational innovations that Amazon came up with to solve explicitly-noted organizational problems, but the sequence of iteration that led to those final-form innovations.]
- There is informal, but effective, iteration on the workshops. The processes that run CFAR’s internals, however, seem to me to be mostly stagnant [edit 2023: in the sense that there’s not deliberate, intentional effort on solving long-standing institutional frictions, or developing more effective procedures for doing things.]
- As far as I know, there are no standardized checklists for employing CFAR techniques in relevant situations (like starting a new project). I wouldn’t be surprised if there were some ops checklists with a murphyjitsu step. I’ve never seen a checklist for a procedure at CFAR, excepting some recurring shopping lists for workshops.
- The interview process does not incorporate the standard research about interviews and assessment contained in Thinking, Fast and Slow. (I might be wrong about this. I, blessedly, don’t have to do admissions interviews.)
- No strategic decision or choice to undertake a project, that I’m aware of, has involved quantitative estimates of impact, or quantitative estimates of any kind. (I wouldn’t be surprised if the decision to run the first MSFP did, [edit 2023: but I wasn't at CFAR at the time. My guess is that there wasn't.])
- Historically, strategic decisions were made to a large degree by inertia. This is more resolved now, but for a period of several years, I think most of the staff didn’t really understand why we were running mainlines, and in fact when people [edit 2023: workshop participants] asked about this, we would say things like “well, we’re not sure what else to do instead.” This didn’t seem unusual, and didn’t immediately call out for goal factoring.
- There’s no designated staff training time for learning or practicing the mental skills, or for doing general tacit knowledge transfer between staff. However, full-time CFAR staff have historically had a training budget, which they could spend on whatever personal development stuff they wanted, at their own discretion.
- CFAR does have a rule that you’re allowed / mandated to take rest days after a workshop, since the workshop eats into your weekend.
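For concreteness, here is a minimal sketch of the kind of lightweight prediction-tracking infrastructure referred to in the bullet above: predictions are written down in advance with a probability and a pre-committed review date, and a helper surfaces the ones that are due for review. The structure, claims, probabilities, and dates are hypothetical examples invented for illustration, not anything CFAR actually used.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical minimal prediction/experiment log: claims are written down in
# advance with a probability and a review date, then resolved and reviewed later.

@dataclass
class Prediction:
    claim: str
    probability: float              # credence assigned when the prediction was made
    review_date: date               # pre-committed date to check the result
    outcome: Optional[bool] = None  # filled in at review time

def due_for_review(log, today):
    """Return unresolved predictions whose review date has arrived."""
    return [p for p in log if p.outcome is None and p.review_date <= today]

log = [
    Prediction("New meeting format saves >30 min/week", 0.7, date(2020, 3, 1)),
    Prediction("Follow-up call change raises survey scores", 0.55, date(2020, 6, 1)),
]

for p in due_for_review(log, today=date(2020, 4, 1)):
    print(f"Review due: {p.claim!r} (stated credence {p.probability})")
```

The useful properties are just the ones named in the bullet: predictions are committed to in writing before the fact, and the review step is scheduled rather than left to the staff's implicit intuitions.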
Overall, CFAR strikes me as mostly a normal company, populated by some pretty weird hippy-rationalists. There aren’t any particular standards requiring employees to use rationality techniques, nor institutional procedures for doing rationality [edit 2023: as distinct from having shared rationality-culture].
This is in contrast to say, Bridgewater associates, which is clearly structured intentionally to enable updating and information processing, on the organizational level. (Incidentally, Bridgewater is rich in the most literal sense.)
Also, I’m not fully exempt from these critiques myself: I have not really internalized goal factoring yet, for instance, and think that I, personally, am making the same kind of errors of inefficient action that I’m accusing CFAR of making. I also don’t make much use of quantitative estimates, and I have lots of empirical iteration procedures, but haven’t really gotten the hang of doing explicit experiments. (I do track decisions and predictions though, for later review.)
Overall, I think this gap is due about 10% to “these tools don’t work as well, especially at the group level, as we seem to credit them, and we are correct to not use them”, about 30% to this being harder to do than it seems, and about 60% to CFAR not really trying at this (and maybe it shouldn’t be trying at this, because there are trade-offs and other things to focus on).
Elaborating on the 30%: I do think that making an org like this, especially when not starting from scratch, is deceptively difficult. While implementing some of these seems trivial on the surface, it actually entails a shift in culture and expectations, and doing this effectively requires leadership and institution-building skills that CFAR doesn’t currently have. Like, if I imagine something like this existing, it would need to have a pretty in-depth onboarding process for new employees, teaching the skills and presenting “how we do things here.” If you wanted to bootstrap into this kind of culture, at anything like a fast enough speed, you would need the same kind of onboarding for all of the existing employees, but it would be even harder, because you wouldn’t have the culture already going to provide examples and immersion.
comment by Neel Nanda (neel-nanda) · 2019-12-21T09:39:25.090Z · LW(p) · GW(p)
What are the most important considerations for CFAR with regards to whether or not to publish the Handbook?
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T10:10:16.311Z · LW(p) · GW(p)
Historically, CFAR had the following concerns (I haven't worked there since Oct 2018, so their thinking may have changed since then; if a current staff member gets around to answering this question you should consider their answer to trump this one):
- The handbook material doesn't actually "work" on its own, in the sense of being able to change lives; the workshop experience is crucial to what limited success CFAR *is* able to have, and there's concern about falsely offering hope
- There is such a thing as idea inoculation; the handbook isn't perfect and certainly can't adjust itself to every individual person's experience and cognitive style. If someone gets a weaker, broken, or uncanny-valley version of a rationality technique out of a book, not only may it fail to help them in any way, but it will also make subsequently learning [a real and useful skill that's nearby in concept space] correspondingly more difficult, both via conscious dismissiveness and unconscious rounding-off.
- To the extent that certain ideas or techniques only work in concert or as a gestalt, putting the document out on the broader internet where it will be chopped up and rearranged and quoted in chunks and riffed off of and likely misinterpreted, etc., might be worse than not putting it out at all.
comment by riceissa · 2019-12-20T04:35:16.763Z · LW(p) · GW(p)
Back in April, Oliver Habryka wrote [EA · GW]:
Anna Salamon has reduced her involvement in the last few years and seems significantly less involved with the broader strategic direction of CFAR (though she is still involved in some of the day-to-day operations, curriculum development, and more recent CFAR programmer workshops). [Note: After talking to Anna about this, I am now less certain of whether this actually applies and am currently confused on this point]
Could someone clarify the situation? (Possible sub-questions: Why did Oliver get this impression? Why was he confused even after talking talking to Anna? To what extent and in what ways has Anna reduced her involvement in CFAR in the last few years? If Anna has reduced her involvement in CFAR, what is she spending her time on instead?)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2019-12-25T11:20:57.579Z · LW(p) · GW(p)
I’ve worked closely with CFAR since its founding in 2012, for varying degrees of closely (ranging from ~25 hrs/week to ~60 hrs/week). My degree of involvement in CFAR’s high-level and mid-level strategic decisions has varied some, but at the moment is quite high, and is likely to continue to be quite high for at least the coming 12 months.
During work-type hours in which I’m not working for CFAR, my attention is mostly on MIRI’s technical research. I do a good bit of work with MIRI (though I am not employed by MIRI -- I just do a lot of work with them), much of which also qualifies as CFAR work (e.g., running the AIRCS workshops and assisting with the MIRI hiring process; or hanging out with MIRI researchers who feel “stuck” about some research/writing/etc. type thing and want a CFAR-esque person to help them un-stick). I also do a fair amount of work with MIRI that does not much overlap with CFAR (e.g. I am a MIRI board member).
Oliver remained confused after talking with me in April because in April I was less certain how involved I was going to be in upcoming strategic decisions. However, it turns out the answer was “lots.” I have a lot of hopes and vision for CFAR over the coming years, and am excited about hashing them out with Tim and others at CFAR, and seeing what happens as we implement; and Tim and others seem excited about this as well.
My attention oscillates some across the years between MIRI and CFAR, based partly on the needs of each organization and partly on e.g. there being some actual upsides to me taking a backseat under Pete as he (and Duncan and others) made CFAR into more of a functioning institution in ways I would’ve risked reflexively meddling with. But there has been much change in the landscape CFAR is serving, and it’s time, I think, for there to be much change also in e.g. our curriculum, our concept of “rationality”, our relationship to community, and how we run our internal processes -- and I am really excited to be able to be closely involved with CFAR this year, in alliance with Tim and others.
comment by Matt Goldenberg (mr-hire) · 2019-12-20T16:26:27.047Z · LW(p) · GW(p)
What important thing do you believe about rationality, that most others in the rationality community do not?
I'd be interested in both
- An organizational thesis level, i.e., a belief that guides the strategic direction of the organization
- An individual level, from the people who are responding to the AMA.
comment by habryka (habryka4) · 2019-12-21T04:23:26.192Z · LW(p) · GW(p)
What do you consider CFAR's biggest mistake?
comment by habryka (habryka4) · 2019-12-21T04:20:02.855Z · LW(p) · GW(p)
Do you have any non-fiction book recommendations?
Replies from: elityre, erratim, adam_scholl↑ comment by Eli Tyre (elityre) · 2019-12-22T06:21:05.651Z · LW(p) · GW(p)
The two best books on Rationality:
- The Sequences
- Principles by Ray Dalio (I read the PDF that leaked from Bridgewater. I haven't even looked at the actual book.)
My starter kit for people who want to build the core skills of the mind / personal effectiveness stuff (I reread all of these, for reminders, every 2 years or so):
- Getting Things Done: The Art of Stress-Free Productivity
- Nonviolent Communication: A Language of Life
- Focusing
- Thinking, Fast and Slow
↑ comment by gilch · 2019-12-22T20:11:28.300Z · LW(p) · GW(p)
I note that Principles and Getting Things Done are not on CFAR's reading list, even though the rest are.
↑ comment by Timothy Telleen-Lawton (erratim) · 2019-12-22T17:43:39.013Z · LW(p) · GW(p)
Metaphors We Live By by George Lakoff — Totally changed the way I think about language and metaphor and frames when I read it in college. Helped me understand that there are important kinds of knowledge that aren't explicit [LW(p) · GW(p)].
↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T11:16:59.603Z · LW(p) · GW(p)
I really like Language, Truth and Logic, by A.J. Ayer. It's an old book (1936) and it's silly in some ways. It's basically an early pro-empiricism manifesto, and I think many of its arguments are oversimplified, overconfident, or wrong. Even so, it does a great job of teaching some core mental motions of analytic philosophy. And its motivating intuitions feel familiar—I suspect that if 25-year-old Ayer got transported to the present, given internet access etc., we would see him on LessWrong pretty quick.
comment by Ben Pace (Benito) · 2019-12-20T02:59:39.706Z · LW(p) · GW(p)
I feel like CFAR has learned a lot about how to design a space to bring about certain kinds of experiences in the people in that space (i.e. encouraging participants to re-examine their lives and how their minds work). What are some surprising things you've learned about this, that inform how you design e.g. CFAR's permanent venue?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-20T21:02:01.329Z · LW(p) · GW(p)
Ambience and physical comfort are surprisingly important. In particular:
- Lighting: Have lots of it! Ideally incandescent but at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade that has some variation in its color so the light that gets emitted has some variation too (sort of like the sun does when filtered through the atmosphere).
- Food/drink: Have lots of it! Both in terms of quantity and variety. The cost to workshop quality of people not having their preferences met here so outweighs the cost of buying too much food that, in general, it’s worth buying too much as a policy. It's particularly important to meet people's (often, for rationalists, amusingly specific) dietary needs, have a variety of caffeine options, and provide a changing supply of healthy, easily accessible snacks.
- Furniture: As comfortable as possible, and arranged such that multiple small conversations are more likely to happen than one big one.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-20T21:11:04.147Z · LW(p) · GW(p)
What are the effects of following, and of not following, these guidelines? What tests have you run to determine these effects, and is the data from those tests available for download?
Replies from: adam_scholl, Benito↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T03:32:46.048Z · LW(p) · GW(p)
We have not conducted a thorough scientific investigation of our lamps, food, or furniture. Just as one might have reasonable confidence in a proposition like "tired people are sometimes grumpy" without running an RCT, one can, I think, be reasonably confident that e.g. vegetarians will be upset if there’s no vegetarian food, or that people will be more likely to clump in small groups if the chairs are arranged in small groups.
I agree the lighting recommendations are quite specific. I have done lots of testing (relative to e.g. the average American) of different types of lamps, with different types of bulbs in different rooms, and have informally gathered data about people’s preferences. I have not done this formally, since I don’t think that would be worth the time, but in my informal experience, the bulb preferences of the subset of people who report any strong lighting preferences at all tend to correlate strongly with that bulb’s CRI. Currently incandescents have the highest CRI of commonly-available bulbs, so I generally recommend those. My other suggestions were developed via a similar process.
↑ comment by Ben Pace (Benito) · 2019-12-20T21:15:49.231Z · LW(p) · GW(p)
Pretty sure the effect sizes are obvious - I’ve been to events without enough snacks, and people leave early because they’re tired and out of energy. I think lighting also has obvious effect sizes when you try it, and room layout just obviously changes the affordances of a space (classroom lecture vs. sitting in a circle vs. kitchen, etc.).
Added: I don't think I disagree much with the things Said and others say below, I just meant to say that I don't think that careful statistics is required to have robust beliefs about these topics.
Replies from: habryka4, SaidAchmiz↑ comment by habryka (habryka4) · 2019-12-20T21:31:07.335Z · LW(p) · GW(p)
My guess is also that CFAR has seen many datapoints in this space, and could answer Said's question fine. I don't expect them to have run controlled experiments, but I do expect them to have observed a large variety of different lighting setups, food/drink availability and furniture arrangements, and would be able to give aggregate summaries of their experiences with that.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-20T21:25:19.227Z · LW(p) · GW(p)
Surely we’re not taking seriously recommendations based on “it’s just obvious”…? (There’s at least some sort of journal of events that notes these parameters and records apparent effects, that can be perused for patterns, etc.… right?)
Besides which, consider this:
Lighting: Have lots of it! Ideally incandescent but at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade that has some variation in its color so the light that gets emitted has some variation too (sort of like the sun does when filtered through the atmosphere).
These are very specific recommendations! I assume this means that the CFAR folks tried a bunch of variations—presumably in some systematic, planned way—and determined that these particular parameters are optimal. So… how was this determination made? What was the experimentation like? Surely it wasn’t just… “we tried some stuff in an ad-hoc manner, and this particular very specific set of parameters ended up being ‘obviously’ good”…?
EDIT: Let me put it another way:
What will happen if, instead of incandescent lighting, I use halogen bulbs? What if the light is 90 CRI instead of 95+? If it’s 4500K instead of 3500K—or, conversely, if it’s 2700K? What if the light is in the center of the ceiling? What if the lampshade is greenish and not yellowish? Etc., etc.—what specifically ought I expect to observe, if I depart from the recommended lighting pattern in each of those ways (and others)?
Replies from: ESRogs, habryka4↑ comment by ESRogs · 2019-12-20T21:29:35.802Z · LW(p) · GW(p)
I assume this means that the CFAR folks tried a bunch of variations—presumably in some systematic, planned way—and determined that these particular parameters are optimal.
Why do you assume this? I would guess it was local hill climbing. (The base rate for local hill climbing is much higher than for systematic search, isn't it?)
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-20T21:34:24.674Z · LW(p) · GW(p)
The base rate for local hill climbing is much higher than for systematic search, isn’t it?
No doubt it is. But then, the base rate for many things is much higher than the base rate for the corresponding more “optimal” / “rational” / “correct” versions of those things. Should I assume in each case that CFAR does everything in the usual way, and not the rarer–but–better way? (Surely a depressing stance to take, if accurate…)
Replies from: ESRogs↑ comment by ESRogs · 2019-12-20T22:12:52.539Z · LW(p) · GW(p)
Yes, when the better way takes more resources.
On the meta level, I claim that doing things the usual way most of the time is the optimal / rational / correct way to do things. Resources are not infinite, trade-offs exist, etc.
EDIT: for related thoughts, see Vaniver's recent post on T-Shaped Organizations [LW · GW].
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-12-20T22:18:27.921Z · LW(p) · GW(p)
Strongly second this. Running a formal experiment is often much more costly from a decision theoretic perspective than other ways of reducing uncertainty.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-20T22:42:16.284Z · LW(p) · GW(p)
I think that you, and ESRogs, and possibly also habryka (though probably less so, if at all), have rather misunderstood the thrust of my comments.
I was not, and am not, suggesting that CFAR run experiments in a systematic (not ‘formal’—that is a red herring) way, nor am I saying that they should have done this.
Rather, what I was attempting to point out was that Adam Scholl’s comment, with its specific recommendations (especially the ones about lighting), would make sense if said recommendations were arrived at via a process of systematic experimentation (or, indeed, any even semi-systematic approach). On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good?
If you missed it before, I would like to draw your attention to the part of this comment of mine elsethread [LW(p) · GW(p)] that comes after the “EDIT” note. Judging from the specificity of his recommendations, I must assume that Adam can answer the questions I ask there.
Replies from: elityre, ESRogs↑ comment by Eli Tyre (elityre) · 2019-12-21T04:01:18.324Z · LW(p) · GW(p)
On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good?
Why not?
If you're running many, many events, and one of your main goals is to get good conversations happening, you'll begin to build up an intuition about which things help and hurt. For instance, you look at a room and think "it's too dark in here." Then you go get your extra bright lamps, and put them in the middle of the room, and everyone is like "ah, that is much better, I hadn't even noticed."
It seems like if you do this enough, you'll end up with pretty specific recommendations like what Adam outlined.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2019-12-27T22:49:51.837Z · LW(p) · GW(p)
Actually, I think this touches on something that is useful to understand about CFAR in general.
Most of our "knowledge" (about rationality, about running workshops, about how people can react to x-risk, etc.) is what I might call "trade knowledge", it comes from having lots of personal experience in the domain, and building up good procedures via mostly-trial and error (plus metacognition and theorizing about noticed problems might be, and how to fix them).
This is distinct from scientific knowledge, which is build up from robustly verified premises, tested by explicit attempts at falsification.
(I'm reminded of an old LW post, that I can't find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don't regard Eliezer as trustworthy.)
For instance, I might lead someone through an IDC-like process at a CFAR workshop. This isn't because I've done rigorous tests (or know of others who have done rigorous tests) of IDC, or because I've concluded from the neuroscience literature that IDC is the optimal process for arriving at true beliefs.
Rather, it's that I (and other CFAR staff) have interacted with people who have a conflict between beliefs / models / urges / "parts", a lot, in addition to spending even more time engaging with those problems in ourselves. And from that exploration, this IDC-process seems to work well, in the sense of getting good results. So, I have a prior that it will be useful for the nth person. (Of course sometimes this isn't the case, because people can be really different, and occasionally a tool will be ineffective, or even harmful, despite being extremely useful for most people.)
The same goes for, for instance, whatever conversational facilitation acumen I've acquired. I don't want to be making a claim that, say, "finding a Double Crux is the objectively correct process, or the optimal process, for resolving disagreements." Only that I've spent a lot of time resolving disagreements, and, at least sometimes, at least for me, this strategy seems to help substantially.
I can also give theoretical reasons why I think it works, but those theoretical reasons are not much of a crux: if a person can't seem to make something useful happen when they try to Double Crux, but something useful does happen when they do this other thing, I think they should do the other thing, theory be damned. It might be that that person is trying to apply the Double Crux pattern in a domain that it's not suited for (but I don't know that, because I haven't tried to work in that domain yet), or it might be that they're missing a piece or doing it wrong, and we might be able to iron it out if I observed their process, or maybe they have some other skill that I don't have myself, and they're so good at that skill that trying to do the Double Crux thing is a step backwards (in the same way that there are different schools of martial arts).
The fact that my knowledge, and CFAR's knowledge, in these domains is trade knowledge has some important implications:
- It means that our content is path-dependent. There are probably dozens or hundreds of stable, skilled "ways of engaging with minds." If you're trying to build trade knowledge, you will end up gravitating to one cluster and building out skill and content there, even if that cluster is a local optimum and another cluster is more effective overall.
- It means that you're looking for skill, more than declarative third-person knowledge, and that you're not trying to make things that are legible to other fields. A carpenter wants to have good techniques for working with wood, and in most cases doesn't care very much if his terminology or ontology lines up with that of botany.
- For instance, maybe to the carpenter there are 3 kinds of knots in wood, and they need to be worked with in different ways, but he's actually conflating 2 kinds of biological structures in the first type, and the second and third type are actually the same biological structure, but flipped vertically (because sometimes the wood is "upside down" from the orientation of the tree). The carpenter, qua carpenter, doesn't care about this. He's just trying to get the job done. But that doesn't mean that bystanders should get confused and think that the carpenter thinks that he has discovered some new, superior framework of botany.
- It means that a lot of content can only easily be conveyed tacitly and in person; or, at least, making it accessible via writing, etc. is an additional hard task.
- Carpentry (I speculate) involves a bunch of subtle, tacit, perceptual maneuvers, like (I'm making this up) learning to tell when the wood is "smooth to the grain" or "soft and flexible", and looking at a piece of wood and knowing that you should cut it up top near the knot, even though that seems like it would be harder to work around, because of how "flat" it gets down the plank. (I am still totally making this up.) It is much easier to convey these things to a learner who is right there with you, so that you can watch their process, and, for instance, point out exactly what you mean by "soft and flexible" via iterated demonstration.
- That's not to say that you couldn't figure out how to teach the subtle art of carpentry via blog post or book, but you would have to figure out how to do that (and it would still probably be worse than learning directly from someone skilled). This is related to why CFAR has historically been reluctant to share the handbook: the handbook sketches the techniques, and is a good reminder, but we don't think it conveys the techniques particularly well, because that's really hard.
Replies from: SaidAchmiz, Vaniver
↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-28T00:40:06.516Z · LW(p) · GW(p)
I don’t think this works.
A carpenter might say that his knowledge is trade knowledge and not scientific knowledge, and when challenged to provide some evidence that this supposed “trade knowledge” is real, and is worth something, may point to the chairs, tables, cabinets, etc., which he has made. The quality of these items may be easily examined, by someone with no knowledge of carpentry at all. “I am a trained and skilled carpenter, who can make various useful things for you out of wood” is a claim which is very, very easy to verify.
But as I understand it, CFAR has considerable difficulty providing, for examination, any equivalent of a beautifully-made oak cabinet. This makes claims of “trade knowledge” rather more dubious.
↑ comment by Vaniver · 2019-12-27T23:28:47.069Z · LW(p) · GW(p)
(I'm reminded of an old LW post, that I can't find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don't regard Eliezer as trustworthy.)
You're thinking of You're Calling *Who* A Cult Leader? [LW · GW]
And from that exploration, this IDC-process seems to work well, in the sense of getting good results.
An important clarification, at least from my experience of the metacognition, is that it's both getting good results and not triggering alarms (in the form of participant pushback or us feeling skeevy about doing it). Something that gets people to nod along (for the wrong reasons) or has some people really like it and other people really dislike it is often the sort of thing where we go "hmm, can we do better?"
↑ comment by habryka (habryka4) · 2019-12-20T21:37:09.809Z · LW(p) · GW(p)
I think every debrief document I've interacted with (which are all before CFAR got a permanent venue) included a section on "thoughts on the venue and layout" as well as "thoughts and food and snacks" that usually discussed the effects of how the food and snacks were handled and how the venue seemed to affect the workshop (and whether CFAR should go back to that venue in the future). I am not sure whether that meets your threshold for systematicness, but it should at least allow a cross-verification of the listed patterns with observations at the time of the workshops in different conditions.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-20T21:46:31.618Z · LW(p) · GW(p)
It’s a start, at least! If all the parameters (i.e., CRI / color temperature / etc. of the lighting, and of course furniture layout and so on) were recorded each time, and if notes on effects were taken consistently, then this should allow at least some rough spotting of patterns. Is this data available somewhere, in aggregated form? How comprehensive is it (i.e., how far back does it date, and how complete is the coverage of CFAR events)?
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-12-20T22:11:59.421Z · LW(p) · GW(p)
My guess is someone could dig up the debriefs for probably almost all workshops for the past 4 years, though synthesizing that is probably multiple days of work. I don't expect specific things like CRI to have been recorded, but I do expect the sections to say stuff like "all the rooms were too dark, and this one room had an LED light in it that gave me headaches, and I've also heard from one attendee that they didn't like being in that room", which would allow you to derive a bunch of those parameters from context.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-12-20T22:49:34.647Z · LW(p) · GW(p)
See this comment elsethread [LW(p) · GW(p)]. To summarize a bit: it was not (and is not) my intention to ask or require anyone to do the sort of digging and synthesis that you describe[1]. Rather, I was wondering how the specific recommendations listed in Adam Scholl’s comment were arrived at (if not via a process even as systematic as synthesis from informal debriefs)—and, in consequence, how exactly those recommendations are to be understood (that is: “this is one point in the space of possibilities which we have stumbled upon and which seems good”? or, “this is the optimal point in the possibility space”? what are we to understand about the shape of the surrounding fitness landscape across the dimensions described? etc.).
Though of course it would be interesting to do, regardless! If the debriefs can be made available for public download, en masse, I suspect a number of people would be interested in sifting through them for this sort of data, and much other interesting info as well. ↩︎
comment by riceissa · 2019-12-20T09:00:49.568Z · LW(p) · GW(p)
I have seen/heard from at least two sources something to the effect that MIRI/CFAR leadership (and Anna in particular) has very short AI timelines and high probability of doom (and apparently having high confidence in these beliefs). Here is the only public example that I can recall seeing. (Of the two examples I can specifically recall, this is not the better one, but the other was not posted publicly.) Is there any truth to these claims?
Replies from: Benito, Buck, Benito↑ comment by Ben Pace (Benito) · 2019-12-21T01:38:28.947Z · LW(p) · GW(p)
Riceissa's question was brief, so I'll add a bunch of my thoughts on this topic.
I also remember there was something of a hush around the broader x-risk network on the topic of timelines, sometime around the time of FLI's second AI conference. Since then I've received weird mixed signals about what people think, with hushed tones of being very worried/scared. The explicit content is of a similar type to Sam Altman's line "if you believe what I believe about the timeline to AGI and the effect it will have on the world, it is hard to spend a lot of mental cycles thinking about anything else" but rarely accompanied by an explanation of the reasoning that led to that view.
I think that you can internalise models of science, progress, computation, ML, and geopolitics, and start to feel like "AGI being built" is part of your reality, your world-model, and then figure out what actions you want to take in the world. I've personally thought about it a bit and come to some of my own conclusions, and I've generally focused on plans designed for making sure AGI goes well. This is the important and difficult work of incorporating abstract, far ideas into your models of near-mode reality.
But it also seems to me that a number of x-risk people looked at many of the leaders getting scared, and that is why they believe the timeline is short. This is how a herd turns around and runs in fear from an oncoming jaguar - most members of the herd don't stop to check for themselves; they trust that everyone else is running for good reason. More formally, it's known as an info cascade [LW · GW]. This is often the rational thing to do when people you trust act as if something dangerous is coming at you. You don't stop and actually pay attention to the evidence yourself.
(I personally experience such herd behaviour commonly when using the train systems in the UK. When a train is cancelled and 50 people are waiting beside it to get on, I normally don't see the board that announces which platform to go to for the replacement train, as it's only visible to a few of the people, but very quickly the whole 50 people are moving to the new platform. I also see it when getting off a train at a new train station, where lots of people don't really know which way to walk to get out of the building: immediately coming off the train, is it left or right? But the first few people tend to make a judgement, and basically everyone else follows them. I've sometimes done it myself, been the first off and started walking confidently in a direction, and had everyone confidently follow me, and it always feels a little magical for a moment, because I know I just took a guess.)
But the unusual thing about our situation, is that when you ask the leaders of the pack why they think a jaguar is coming, they're very secretive about it. In my experience many clued-in people will explicitly recommend not sharing information about timelines. I'm thinking about OpenPhil, OpenAI, MIRI, FHI, and so on. I don't think I've ever talked to people at CFAR about timelines.
To add more detail to my saying it's considered 'the' decision-relevant variable by many, here are two quotes. Ray Arnold is a colleague and a friend of mine, and two years ago he wrote a good post on his general updates about such subjects [LW · GW], which said the following:
Claim 1: Whatever your estimates two years ago for AGI timelines, they should probably be shorter and more explicit this year.
Claim 2: Relatedly, if you’ve been waiting for concrete things to happen for you to get worried enough to take AGI x-risk seriously, that time has come. Whatever your timelines currently are, they should probably be influencing your decisions in ways more specific than periodically saying “Well, this sounds concerning.”
Qiaochu also talked about it as the decision-relevant question [LW(p) · GW(p)]:
[Timelines] are the decision-relevant question. At some point timelines get short enough that it's pointless to save for retirement. At some point timelines get short enough that it may be morally irresponsible to have children...
Ray talks in his post about how much of his belief on this topic comes from trusting another person closer to the action, which is a perfectly reasonable thing to do, though I'll point out again that it's also (if lots of people do it) herd behaviour. Qiaochu talks about how he never figured out the timeline to AGI with an explicit model, even though he takes short timelines very seriously, which also sounds like a process that involves trusting others a bunch.
It's okay to keep secrets, and in a number of cases it's of crucial importance. Much of Nick Bostrom's career is about how some information can be hazardous, and about how not all ideas are safe at our current level of wisdom. But it's important to note that "short timelines" is a particular idea that has had the herd turn around and run in fear to solve an urgent problem, and there have been a lot of explicit recommendations not to give people the info they'd need to make a good judgment about it. And those two things together are always worrying.
It's also very unusual for this community. We've been trying to make things go well wrt AGI for over a decade, and until recently we've put all our reasoning out in the open. Eliezer and Bostrom published so much. And yet now this central decision-node, "the decision-relevant variable", is hidden from the view of most people involved. It's quite strange, and generally is the sort of thing that is at risk of abuse by whatever process is deciding what the 'ground truth' is. I don't believe the group of people involved in being secretive about AI timelines have spent anywhere near as much time thinking about the downsides of secrecy, or put in the work to mitigate them. Of course I can't really tell, given the secrecy.
All that said, as you can see in the quotes/links that I and Robby provided elsewhere in this thread, I think Eliezer has made the greatest attempt of basically anyone to explain how he models timelines, and wrote very explicitly about his updates after AlphaGo Zero. And the Fire Alarm post was really, really great. In my personal experience, the things in the quotes above are fairly consistent with how Eliezer reasoned about timelines before the deep learning revolution.
I think a factor that is likely to be highly relevant is that companies like DeepMind face a natural incentive to obscure understanding of their progress and to be the sole arbiters of what is going to happen. I know that they're very careful about requiring all visitors to their offices to sign NDAs, and requiring employees to get permission for any blog posts about AI they're planning to write on the internet. I'd guess a substantial amount of this effect comes from there, but I'm not sure.
Edit: I edited this comment a bunch of times because I initially wrote it quickly, and didn't quite like how it came out. Sorry if anyone was writing a reply. I'm not likely to edit it again.
Edit: I think it's likely I'll turn this into a top level post at some point.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2019-12-30T08:56:47.488Z · LW(p) · GW(p)
FWIW, I don't feel this way about timelines anymore. Lot more pessimistic about estimates being mostly just noise.
↑ comment by Buck · 2019-12-21T06:32:02.680Z · LW(p) · GW(p)
For the record, parts of that ratanon post seem extremely inaccurate to me; for example, the claim that MIRI people are deferring to Dario Amodei on timelines is not even remotely reasonable. So I wouldn't take it that seriously.
Replies from: erratim, Benito↑ comment by Timothy Telleen-Lawton (erratim) · 2019-12-21T17:02:50.491Z · LW(p) · GW(p)
Agreed I wouldn’t take the ratanon post too seriously. For another example, I know from living with Dario that his motives do not resemble those ascribed to him in that post.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2019-12-21T20:45:37.493Z · LW(p) · GW(p)
I don't know Dario well, but I know enough to be able to tell that the anon here doesn't know what they're talking about re Dario.
↑ comment by Ben Pace (Benito) · 2019-12-22T00:32:32.149Z · LW(p) · GW(p)
Huh, thanks for the info, I'm surprised to hear that.
I myself had heard that rumour - that at the second FLI conference Dario had spoken a lot about short timelines, and now everyone including MIRI was scared. IIRC I heard it from some people involved in ML who were in attendance at that conference, but I didn't hear it from anyone at MIRI. I never heard much disconfirmatory evidence, and it's certainly been a sort-of-belief that's bounced around my head for the past two or so years.
↑ comment by Ben Pace (Benito) · 2019-12-20T22:54:33.622Z · LW(p) · GW(p)
Certainly MIRI has written about this, for example see the relevant part of their 2018 update:
The latter scenario is relatively less important in worlds where AGI timelines are short. If current deep learning research is already on the brink of AGI, for example, then it becomes less plausible that the results of MIRI’s deconfusion work could become a relevant influence on AI capabilities research, and most of the potential impact of our work would come from its direct applicability to deep-learning-based systems. While many of us at MIRI believe that short timelines are at least plausible, there is significant uncertainty and disagreement about timelines inside MIRI, and I would not feel comfortable committing to a course of action that is safe only in worlds where timelines are short.
Also see Eliezer's top-notch piece on timelines [LW · GW], which includes the relevant quote:
Of course, the future is very hard to predict in detail. It's so hard that not only do I confess my own inability, I make the far stronger positive statement that nobody else can do it either.
Eliezer also updated after losing a bet that AlphaGo would not be able to beat humans so well, which he wrote about in AlphaGo Zero and the Foom Debate [LW · GW]. It ends with the line:
I wouldn't have predicted AlphaGo and lost money betting against the speed of its capability gains, because reality held a more extreme position than I did on the Yudkowsky-Hanson spectrum.
Replies from: RobbBB, riceissa
↑ comment by Rob Bensinger (RobbBB) · 2019-12-20T23:27:18.254Z · LW(p) · GW(p)
More timeline statements, from Eliezer in March 2016:
That said, timelines are the hardest part of AGI issues to forecast, by which I mean that if you ask me for a specific year, I throw up my hands and say “Not only do I not know, I make the much stronger statement that nobody else has good knowledge either.” Fermi said that positive-net-energy from nuclear power wouldn’t be possible for 50 years, two years before he oversaw the construction of the first pile of uranium bricks to go critical. The way these things work is that they look fifty years off to the slightly skeptical, and ten years later, they still look fifty years off, and then suddenly there’s a breakthrough and they look five years off, at which point they’re actually 2 to 20 years off.
If you hold a gun to my head and say “Infer your probability distribution from your own actions, you self-proclaimed Bayesian” then I think I seem to be planning for a time horizon between 8 and 40 years, but some of that because there’s very little I think I can do in less than 8 years, and, you know, if it takes longer than 40 years there’ll probably be some replanning to do anyway over that time period.
And from me in April 2017:
Since [August], senior staff at MIRI have reassessed their views on how far off artificial general intelligence (AGI) is and concluded that shorter timelines are more likely than they were previously thinking. [...]
There’s no consensus among MIRI researchers on how long timelines are, and our aggregated estimate puts medium-to-high probability on scenarios in which the research community hasn’t developed AGI by, e.g., 2035. On average, however, research staff now assign moderately higher probability to AGI’s being developed before 2035 than we did a year or two ago.
I talked to Nate last month and he outlined the same concepts and arguments from Eliezer's Oct. 2017 There's No Fire Alarm for AGI [LW · GW] (mentioned by Ben above) to describe his current view of timelines, in particular (quoting Eliezer's post):
History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up. [...]
And again, that's not to say that people saying "fifty years" is a certain sign that something is happening in a squash court; they were saying “fifty years” sixty years ago too. It's saying that anyone who thinks technological timelines are actually forecastable, in advance, by people who are not looped in to the leading project's progress reports and who don't share all the best ideas about exactly how to do the thing and how much effort is required for that, is learning the wrong lesson from history. In particular, from reading history books that neatly lay out lines of progress and their visible signs that we all know now were important and evidential. It's sometimes possible to say useful conditional things about the consequences of the big development whenever it happens, but it’s rarely possible to make confident predictions about the timing of those developments, beyond a one- or two-year horizon. And if you are one of the rare people who can call the timing, if people like that even exist, nobody else knows to pay attention to you and not to the Excited Futurists or Sober Skeptics. [...]
So far as I can presently estimate, now that we've had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.
By saying we're probably going to be in roughly this epistemic state until almost the end, I don't mean to say we know that AGI is imminent, or that there won't be important new breakthroughs in AI in the intervening time. I mean that it's hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won't know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky. Maybe researcher enthusiasm and funding will rise further, and we'll be able to say that timelines are shortening; or maybe we’ll hit another AI winter, and we'll know that's a sign indicating that things will take longer than they would otherwise; but we still won't know how long.
↑ comment by riceissa · 2019-12-20T23:44:22.271Z · LW(p) · GW(p)
I had already seen all of those quotes/links, all of the quotes/links that Rob Bensinger posts in the sibling comment, as well as this tweet from Eliezer. I asked my question because those public quotes don't sound like the private information I referred to in my question, and I wanted insight into the discrepancy.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-21T01:22:59.965Z · LW(p) · GW(p)
Okay. I was responding to "Is there any truth to these claims?" which sounded like it would be a big shock to discover MIRI/CFAR staff were considering short timelines a lot in their actions, when they'd actually stated it out loud in many places.
While I agree that I'm confused about MIRI/CFAR's timelines and think that info-cascades around this have likely occurred, I want to mention that the thing you linked to is pretty hyperbolic.
To the best of my understanding, part of why the MIRI leadership (Nate Soares, Eliezer Yudkowsky, and Anna Salamon) have been delusionally spewing nonsense about the destruction of the world within a decade is because they've been misled by Dario Amodei, an untrustworthy, blatant status-seeker recently employed at Google Brain. I am unaware of the existence of even a single concrete, object-level reason to believe these claims; I, and many others, suspect that Dario is intentionally embellishing the facts because he revels in attention.
I want to say that I think that Dario is not obviously untrustworthy; I think well of him for being an early EA who put in the work to write up his reasoning about donations (see his extensive writeup on the GiveWell blog from 2009), which I always take as a good sign about someone's soul. The quote also says there's no reason or argument to believe in short timelines, but the analyses above in Eliezer's posts on AlphaGo Zero and the Fire Alarm provide plenty of reasons for thinking AI could come within a decade. Don't forget that Shane Legg, one of the cofounders of DeepMind, has been consistently predicting AGI with 50% probability by 2028 (e.g. he said it here in 2011 [LW · GW]).
Replies from: philh↑ comment by philh · 2019-12-21T10:25:45.769Z · LW(p) · GW(p)
Don’t forget that Shane Legg, one of the cofounders of DeepMind, has been consistently predicting AGI with 50% probability by 2028 (e.g. he said it here in 2011).
Just noting that since then, half the time to 2028 has elapsed. If he's still giving 50%, that's kind of surprising.
Replies from: jalex-stark-1, Benito↑ comment by Jalex Stark (jalex-stark-1) · 2019-12-22T14:47:53.441Z · LW(p) · GW(p)
Why is that surprising? Doesn't it just mean that the pace of development in the last decade has been approximately equal to the average over Shane_{2011}'s distribution of development speeds?
Replies from: philh↑ comment by philh · 2019-12-22T16:57:21.829Z · LW(p) · GW(p)
I don't think it's that simple. The uncertainty isn't just about pace of development but about how much development needs to be done.
But even if it does mean that, would that not be surprising? Perhaps not if he'd originally given a narrow confidence interval, but his 10% estimate was in 2018. For us to be hitting the average precisely enough to not move the 50% estimate much... I haven't done any arithmetic here, but I think that would be surprising, yeah.
And my sense is that the additional complexity makes it more surprising, not less.
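A minimal sketch of that arithmetic, purely illustrative: assume the 2011 figures (10% by 2018, 50% by 2028) are points on one fixed distribution, and that the only update since then is that AGI hasn't arrived yet (these assumptions are mine, not a reconstruction of anyone's actual reasoning):
```python
p_by_2018 = 0.10   # Shane_2011's stated P(AGI by 2018)
p_by_2028 = 0.50   # Shane_2011's stated P(AGI by 2028)

# Condition on "no AGI yet", approximating the present with the 2018 point.
p_by_2028_given_none_yet = (p_by_2028 - p_by_2018) / (1 - p_by_2018)
print(f"{p_by_2028_given_none_yet:.2f}")  # ~0.44
```
On those assumptions, the mere passage of time pushes the number below 50%, so holding at 50% would imply some compensating positive update rather than just "hitting the average".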
Replies from: jalex-stark-1↑ comment by Jalex Stark (jalex-stark-1) · 2019-12-28T05:46:03.680Z · LW(p) · GW(p)
Yes, I agree that the space of things to be uncertain about is multidimensional. We project the uncertainty onto a one-dimensional space parameterized by "probability of <event> by <time>".
It would be surprising for a sophisticated person to show a market of 49 @ 51 on this event. (Unpacking jargon, showing this market means being willing to buy for 49 or sell at 51 a contract which is worth 100 if the hypothesis is true and 0 if it is false)
(It's somewhat similar to saying that your 2-sigma confidence interval around the "true probability" of the event is 49 to 51. The market language can be interpreted with just decision theory, while the confidence interval idea also requires some notion of statistics.)
My interpretation of the second-hand evidence about Shane Legg's opinion suggests that Shane would quote a market like 40 @ 60. (The only thing I know about Shane is that they apparently summarized their belief as 50% a number of years ago and haven't publicly changed their opinion since.)
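A small sketch of how those quotes cash out decision-theoretically (illustrative code only; the prices and payout are just the ones from the example above):
```python
# Expected profit per contract that pays 100 if the hypothesis is true, 0 if false,
# given a subjective probability p. Quoting "49 @ 51" means standing ready to take
# either side: buy at 49 or sell at 51.
def ev_buy(price, p, payout=100):
    return p * payout - price

def ev_sell(price, p, payout=100):
    return price - p * payout

# Buying at 49 is +EV only if p > 0.49; selling at 51 only if p < 0.51.
# So a 49 @ 51 quote pins the quoter's belief to roughly (0.49, 0.51),
# while a 40 @ 60 quote only pins it to (0.40, 0.60).
for p in (0.45, 0.50, 0.55):
    print(p, ev_buy(49, p), ev_sell(51, p))
```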
Replies from: philh↑ comment by philh · 2019-12-28T13:25:44.779Z · LW(p) · GW(p)
Perhaps I'm misinterpreting you, but I feel like this was intended as disagreement? If so, I'd appreciate clarification. It seems basically correct to me, and consistent with what I said previously. I still think that: if, in 2011, you gave 10% probability by 2018 and 50% by 2028; and if, in 2019, you still give 50% by 2028 (as an explicit estimate, i.e. you haven't just not-given an updated estimate); then this is surprising, even acknowledging that 50% is probably not very precise in either case.
↑ comment by Ben Pace (Benito) · 2019-12-21T11:25:27.057Z · LW(p) · GW(p)
I realised after writing that I didn't give a quote to show that he still believed it. I have the recollection that he still says 2028 - I think someone more connected to AI/ML probably told me - but I can't think of anywhere to quote him saying it.
comment by lincolnquirk · 2019-12-19T18:46:21.359Z · LW(p) · GW(p)
Ok, I'll bite. Why should CFAR exist? Rationality training is not so obviously useful that an entire org needs to exist to support it; especially now that you've iterated so heavily on the curriculum, why not dissolve CFAR and merge back into (e.g.) MIRI and just reuse the work to train new MIRI staff?
This applies even more if CFAR is effective recruitment for MIRI: merging back in would allow you to separately optimize for that.
Replies from: PeterMcCluskey, JohnBuridan↑ comment by PeterMcCluskey · 2019-12-20T20:54:01.753Z · LW(p) · GW(p)
It's at least as important for CFAR to train people who end up at OpenAI, Deepmind, FHI, etc.
↑ comment by SebastianG (JohnBuridan) · 2019-12-20T17:49:19.700Z · LW(p) · GW(p)
I'm sure the methods of CFAR have wider application than to Machine Learning...
comment by habryka (habryka4) · 2019-12-21T04:23:35.399Z · LW(p) · GW(p)
What do you consider CFAR's biggest win?
comment by namespace (ingres) · 2019-12-20T16:46:13.265Z · LW(p) · GW(p)
The CFAR branch of rationality is heavily inspired by General Semantics, with its focus on training your intuitive reactions, evaluation, the ways in which we're biased by language, etc. Eliezer Yudkowsky mentions that he was influenced by The World of Null-A [LW · GW], a science fiction novel about a world where General Semantics has taken over as the dominant philosophy of society.
Question: Considering the similarity of what Alfred Korzybski was trying to do with General Semantics to the workshop and consulting model of CFAR, are you aware of a good analysis of how General Semantics failed? If so, has this informed your strategic approach with CFAR at all?
Replies from: adam_scholl, yagudin↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T03:00:23.889Z · LW(p) · GW(p)
I buy that General Semantics was in some sense a memetic precursor to some of the ideas described in the Sequences/at CFAR, but I think this effect was mostly indirect, so it seems misleading to me to describe CFAR as being heavily influenced by it. Davis Kingsley, former CFAR employee and current occasional guest instructor, has read a bunch about GS, I think, and mentions it frequently, but I'm not aware of direct influences aside from this.
comment by Ben Pace (Benito) · 2019-12-21T04:31:58.068Z · LW(p) · GW(p)
Which of the rationalists virtues do you think you’ve practised the most in working at CFAR?
Replies from: Unnamed, AnnaSalamon↑ comment by Unnamed · 2019-12-22T07:53:59.413Z · LW(p) · GW(p)
(This is Dan from CFAR)
I did a quick poll of 5 staff members and the average answer was 5.6.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-22T07:58:51.742Z · LW(p) · GW(p)
Ah, the virtue of precision.
"More can be said about the 5.6th virtue than of all the virtues in the world!"
Replies from: Unnamed↑ comment by Unnamed · 2019-12-22T08:39:04.234Z · LW(p) · GW(p)
Not precise at all. The confidence interval is HUGE.
stdev = 5.9 (without Bessel's correction)
std error = 2.6
95% CI = (0.5, 10.7)
The confidence interval should not need to go that low. Maybe there's a better way to do the statistics here.
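For reference, a minimal sketch reproducing that arithmetic from the stated summary statistics (the five raw answers aren't given, and this uses a plain normal approximation; a t-interval with 4 degrees of freedom would be wider still):
```python
import math

n = 5           # staff members polled
mean = 5.6      # reported average answer
stdev = 5.9     # reported stdev, without Bessel's correction

std_error = round(stdev / math.sqrt(n), 1)                   # 2.6
low, high = mean - 1.96 * std_error, mean + 1.96 * std_error
print(f"std error = {std_error}, 95% CI = ({low:.1f}, {high:.1f})")  # (0.5, 10.7)
```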
Replies from: DanielFilan↑ comment by DanielFilan · 2019-12-22T20:55:20.954Z · LW(p) · GW(p)
To reduce sampling error you could ask everyone again.
↑ comment by AnnaSalamon · 2019-12-21T20:57:15.144Z · LW(p) · GW(p)
I'll interpret this as "Which of the rationalist virtues do you think CFAR has gotten the most mileage from your practicing".
The virtue of the void. Hands down. Though I still haven't done it nearly as much as it would be useful to do it. Maybe this year? [LW · GW]
If I instead interpret this as "Which of the rationalist virtues do you spend the most minutes practicing": curiosity. Which would be my runner-up for "CFAR got the most mileage from my practicing".
comment by namespace (ingres) · 2019-12-20T14:12:53.775Z · LW(p) · GW(p)
Does CFAR have a research agenda? If so, is it published anywhere?
comment by gilch · 2019-12-20T05:21:38.297Z · LW(p) · GW(p)
What can the LessWrong community do (or the broader rationality-aligned movement do) to help with CFAR's mission?
Replies from: QBee↑ comment by E. Garrett (QBee) · 2019-12-31T16:41:02.785Z · LW(p) · GW(p)
At the risk of sounding trite: stay fun, stay interested, stay fresh, and stay sane! We want the people we bring on and the culture that surrounds us to be a good place to be for epistemics and also for people.
We have a bunch of instructor candidates that I am very excited about. One of the ones I am most excited about strikes me as an intellectual offspring of the Sequences, and he’s rocking it. I would like to encounter more people like him, so I hope this community continues to make good, strong, “what odds would you give me” rationalists who are interested in teaching and curriculum development.
We would also like more thoughts on how different mental tech breaks and how to create reason-based, healthy communities with better immune systems that do not break. Partial clues most welcome, especially if you write them up as readable LW posts and email us so we can read the discussion :)
If you have ideas of something particular you want to do or add, reach out to us. We are looking for someone to do metrics with Dan. We are looking for particularly skilled computer scientists to attend the AIRCS program. We are potentially looking for a couple summer interns. We are looking for someone with a lot of professional high level ops experience and a good dose of common sense.
We also *just happen to be* running a fundraiser right now!
comment by mingyuan · 2019-12-19T22:07:24.623Z · LW(p) · GW(p)
What does Dan actually do? What's his output and who decides what he looks into?
Replies from: QBee↑ comment by E. Garrett (QBee) · 2019-12-22T01:14:40.490Z · LW(p) · GW(p)
Good question! I also had it earlier this year, so I studied him, and here is what I learned of Dan:
Dan is workshop staff at most of the workshops we run, including AIRCS workshops, mainlines, and other programs like instructor training. So, for ~16 weeks of 2019, he was helping run, teaching at, and doing ops at workshops.
Dan is also in charge of all our spreadsheets and data and everything that happens after a workshop: synthesizing the feedback we get from attendees, putting attendees into follow-up groups, pairing people with mentors, and sending out exercises.
Dan is in charge of our impact reports. This involves figuring out what to measure and how to measure it, doing the actual measuring, and then writing about it in a way that hopefully people understand. Dan decides what he looks into based on his own judgements and the questions funders and our exec team have. He’s currently working on metrics data for the fundraiser that will be out soon.
Dan is a general CFAR staff member. This means he contributes to our weekly colloquia with thoughts about rationality, interviews people for workshops, does follow-ups with participants, and other random projects.
comment by Ben Pace (Benito) · 2019-12-21T04:31:14.044Z · LW(p) · GW(p)
What’s a post from The Sequences that has really affected how you think about rationality?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T09:52:30.601Z · LW(p) · GW(p)
I really loved this post on Occam's Razor [LW · GW]. Before encountering it, I basically totally misunderstood the case for using the heuristic, and so (wrongly, I now think) considered it kind of dumb.
I also especially loved "The Second Law of Thermodynamics, and Engines of Cognition [LW · GW]," which gave me a wonderful glimpse (for the first time) into how "laws of inference" ground in laws of physics.
comment by Vaniver · 2019-12-20T18:29:26.693Z · LW(p) · GW(p)
In another post [LW · GW], Adam Scholl says:
Historically, I think CFAR has been really quite bad at explaining its goals, strategy, and mechanism of impact—not just to funders, and to EA at large, but even to each other. I regularly encounter people who, even after extensive interaction with CFAR, have seriously mistaken impressions about what CFAR is trying to achieve.
What are the common mistaken impressions?
For each, do you think they would be net good if done by someone else? Are you aware of other groups that are attempting to achieve those aims? And what do you think it would take to create such a group?
comment by Eigil Rischel (eigil-rischel) · 2019-12-19T23:04:40.477Z · LW(p) · GW(p)
CFAR must have a lot of information about the efficacy of various rationality techniques and training methods (compared to any other org, at least). Is this information, or recommendations based on it, available somewhere? Say, as a list of techniques currently taught at CFAR - which are presumably the best ones in this sense. Or does one have to attend a workshop to find out?
Replies from: eigil-rischel↑ comment by Eigil Rischel (eigil-rischel) · 2020-07-16T12:37:48.297Z · LW(p) · GW(p)
If anyone comes across this comment in the future - the CFAR Participant Handbook [LW · GW] is now online, which is more or less the answer to this question.
comment by habryka (habryka4) · 2019-12-21T04:19:55.052Z · LW(p) · GW(p)
Do you have any fiction book recommendations?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-22T02:40:52.667Z · LW(p) · GW(p)
Thanks to your recommendation I recently read New Atlantis, by Francis Bacon, and it was so great! It's basically Bacon's list of things he wished society had, ranging from "clothes made of sea-water-green satin" and "many different types of beverages" to "research universities that employ full-time specialist scholars."
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-12-22T03:23:57.440Z · LW(p) · GW(p)
I am very glad to hear that!
comment by anoni · 2019-12-19T21:10:56.260Z · LW(p) · GW(p)
Do you think the AI Risk for Computer Scientists workshops turn anyone off AI risk? How does this compare to the degree to which regular workshops turn people off rationality? Do these workshops target audiences of interest to the AI risk community in general (successful computer scientists), or audiences that are of special interest to MIRI (e.g. rationalist pure mathematicians)?
comment by throwaway27370 · 2019-12-24T20:19:06.659Z · LW(p) · GW(p)
Hello,
Could you shed some light on this recent incident (mirror) involving CFAR? I am sure I am not the only one who is confused.
Best regards.
Replies from: Unnamed, MakoYass↑ comment by Unnamed · 2019-12-22T09:24:44.113Z · LW(p) · GW(p)
(This is Dan from CFAR)
In terms of what happened that day, the article covers it about as well as I could. There’s also a report from the sheriff’s office which goes into a bit more detail about some parts.
For context, all four of the main people involved live in the Bay Area and interact with the rationality community. Three of them had been to a CFAR workshop. Two of them are close to each other, and CFAR had banned them prior to the reunion based on a number of concerning things they had done. I'm not sure how the other two got involved.
They have made a bunch of complaints about CFAR and other parts of the community (the bulk of which are false or hard to follow), and it seems like they were trying to create a big dramatic event to attract attention. I’m not sure quite how they expected it to go.
This doesn’t seem like the right venue to go into details to try to sort out the concerns about them or the complaints they’ve raised; there are some people looking into each of those things.
↑ comment by mako yass (MakoYass) · 2019-12-20T00:52:22.170Z · LW(p) · GW(p)
This is probably the least important question (the answer is that some people are nuts) but also the one that I most want to see answered for some reason.
Replies from: eigil-rischel↑ comment by Eigil Rischel (eigil-rischel) · 2019-12-20T20:59:41.955Z · LW(p) · GW(p)
Information about people behaving erratically/violently is better at grabbing your brain's "important" sensor? (Noting that I had exactly the same instinctual reaction.) This seems to be roughly what you'd expect from naive evopsych (which doesn't mean it's a good explanation, of course).
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-12-22T08:08:09.608Z · LW(p) · GW(p)
I'd guess there weren't as many nutcases in the average ancestral environment as there are in modern news/rumor mills. We underestimate how often it's going to turn out that there wasn't really a reason they did those things.
comment by habryka (habryka4) · 2019-12-21T04:22:18.467Z · LW(p) · GW(p)
Who designed all the wall-decoration in the CFAR venue and do you have a folder with all the art-pieces you used? I might want to use some of them for future art/design projects.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T09:27:23.074Z · LW(p) · GW(p)
I did. I have some but not all of the images saved; happy to share what I have, feel free to pm me for links.
Replies from: johnswentworth↑ comment by johnswentworth · 2019-12-22T18:49:21.335Z · LW(p) · GW(p)
Follow-up question: how did you go about finding/picking all that stuff? I was particularly surprised to see a ctenophore picture on the wall - it's the sort of thing which makes sense in the collection, but only if you have (what I thought to be) some fairly esoteric background knowledge about evo-devo.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-23T01:23:21.987Z · LW(p) · GW(p)
I just googled around for pictures of things I think are neat. I think ctenophores are neat, since they look like alien spaceships and maybe evolved neurons independently; I think it's neat that wind sometimes makes clouds do the vortex thing that canoe paddles make water do, etc.
comment by namespace (ingres) · 2019-12-20T14:03:48.799Z · LW(p) · GW(p)
By looking in-depth at individual case studies, advances in cogsci research, and the data and insights from our thousand-plus workshop alumni, we’re slowly building a robust set of tools for truth-seeking, introspection, self-improvement, and navigating intellectual disagreement—and we’re turning that toolkit on itself with each iteration, to try to catch our own flawed assumptions and uncover our own blindspots and mistakes.
This is taken from the about page on your website (emphasis mine). I also took a look at this list of resources and notice I'm still curious:
Question: What literature (academic or otherwise) do you draw on the most often for putting together CFAR's curriculum? For example, I remember being told that the concept of TAPs was taken from some psychology literature, but searching Google Scholar didn't yield anything interesting.
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-20T15:41:21.871Z · LW(p) · GW(p)
The name for TAPs in the psychology literature is "implementation intentions". CFAR renamed it.
comment by gilch · 2019-12-20T05:16:11.799Z · LW(p) · GW(p)
How does CFAR plan to scale its impact?
Can CFAR help raise the sanity waterline? Has curriculum been developed that can teach any of the cognitive tools to more than a handful of people at a time at workshops? Perhaps a MOOC or a franchise?
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-20T15:39:36.812Z · LW(p) · GW(p)
This question seems to assume that CFAR sees the impact of their workshops as being about successfully teaching cognitive tools. That doesn't seem to be the case, based on conversations I had in the past with CFAR folks.
Replies from: BrienneYudkowsky↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-21T03:23:32.107Z · LW(p) · GW(p)
What did that conversation cause you to think CFAR believes the impact of their workshops *is* about?
Replies from: ChristianKl↑ comment by ChristianKl · 2019-12-21T09:41:57.128Z · LW(p) · GW(p)
The definition I got was "Making people more agenty about changing their thinking". I'm not sure about the exact wording that was used. It might have been "feel agency" instead of "being agenty", and it might have been "thinking habits" instead of "thinking", but that's the gist I remember from a conversation at LWCW.
Falk Lieder, who runs an academic research group on applied rationality, was asking what potential there is for cooperating with CFAR to study the effectiveness of the techniques, and the response was something along the lines of "CFAR doesn't really care that much about the individual techniques; the only thing that might be interesting is to measure whether the whole CFAR workshop as a unit produces those agency changes".
If any org has the goal of creating a strict list of cognitive tools that are individually powerful for helping people, cooperating with Falk to get academic backing would be valuable, both in terms of independent scientific authority and in terms of being clearer about the value of each cognitive tool.
comment by gilch · 2019-12-20T05:26:29.937Z · LW(p) · GW(p)
The end of the Sequences, The Craft and the Community, concluded with "Go Forth and Create the Art!" Is that what CFAR is doing? Is anyone else working on this?
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T10:04:20.470Z · LW(p) · GW(p)
[Disclaimer: have not been at CFAR since October 2018; if someone currently from the org contradicts this, their statement will be more accurate about present-day CFAR]
No (CFAR's mission has always been narrower/more targeted) and no (not in any systematic, competent fashion).
comment by Ben Pace (Benito) · 2019-12-20T02:58:23.826Z · LW(p) · GW(p)
What changes have you seen in how people respond to your classes and workshops as you've shifted from the general public to a substantial (50%?) focus on people who may work on AI alignment? I mean, obviously I expect that they're more able to hold up a technical discussion, but I'm curious what else you've noticed.
comment by Ben Pace (Benito) · 2019-12-21T04:31:05.223Z · LW(p) · GW(p)
What's a SlateStarCodex post you have thought a lot about while thinking about rationality / CFAR?
Replies from: Unnamed, Duncan_Sabien↑ comment by Unnamed · 2019-12-22T07:52:42.100Z · LW(p) · GW(p)
(This is Dan from CFAR)
Guided By The Beauty Of Our Weapons
Asymmetric vs. symmetric tools is now one of the main frameworks that I use to think about rationality (although I wish we had better terminology for it). A rationality technique (as opposed to a productivity hack or a motivation trick or whatever) helps you get more done on something in cases where getting more done is a good idea.
This wasn’t a completely new idea when I read Scott’s post about it, but the post seems to have helped a lot with getting the framework to sink in.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T10:01:26.878Z · LW(p) · GW(p)
In case no one who currently works at CFAR gets around to answering this (I was there from Oct 2015 to Oct 2018 in a pretty influential role but that means I haven't been around for about fourteen months):
- Meditations on Moloch is top of the list by a factor of perhaps four
- Different Worlds as a runner up
Lots of social dynamic stuff/how groups work/how individuals move within groups:
- Social Justice and Words, Words, Words
- I Can Tolerate Anything Except The Outgroup
- Guided By The Beauty Of Our Weapons
- Yes, We Have Noticed The Skulls
- Book Review: Surfing Uncertainty
↑ comment by Tenoke · 2019-12-21T10:42:06.475Z · LW(p) · GW(p)
Meditations on Moloch is top of the list by a factor of perhaps four
Is that post really that much more relevant than everything else for TEACHING rationality? How come?
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T10:43:19.666Z · LW(p) · GW(p)
That's not the question that was asked, so ... no.
Edit: more helpfully, I found it valuable for thinking about rationality and thinking about CFAR from a strategic perspective—what it was, what it should be, what problems it was up against, how it interfaced with the rest of society.
Replies from: Tenoke↑ comment by Tenoke · 2019-12-21T10:45:53.943Z · LW(p) · GW(p)
while thinking about rationality / CFAR
for TEACHING rationality
You are saying those 2 aren't the same goal?? Even approximately? Isn't CFAR roughly a 'teaching rationality' organization?
Replies from: Benito, Duncan_Sabien↑ comment by Ben Pace (Benito) · 2019-12-21T11:30:14.020Z · LW(p) · GW(p)
In other threads on this post, Brienne and others describe themselves as doing research, so CFAR seems to be doing both. Math research and teaching math are a bit different. Although I am also interested to know of SSC posts that were helpful for developing curriculum.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-21T19:23:05.926Z · LW(p) · GW(p)
I'm not saying that, either.
I request that you stop jumping to wild conclusions and putting words in people's mouths, and focus on what they are actually saying.
Replies from: Tenoke↑ comment by Tenoke · 2019-12-22T00:29:06.411Z · LW(p) · GW(p)
All you were saying was "That’s not the question that was asked, so … no." so I'm sorry if I had to guess and ask. Not sure what I've missed by 'not focusing'.
I see you've added both an edit after my comment and then this response as well, which is a bit odd.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2019-12-22T03:32:44.879Z · LW(p) · GW(p)
In general, if you don't understand what someone is saying, it's better to ask "what do you mean?" than to say "are you saying [unrelated thing that does not at all emerge from what they said]??" with double punctuation.
comment by Ben Pace (Benito) · 2019-12-19T18:53:45.026Z · LW(p) · GW(p)
Meta: Because I think a lot of users will be unusually interested in this Q&A, I'll pin this to the frontpage post-list while it's ongoing, and then afterwards move it back to Anna's personal blog.
comment by Ben Pace (Benito) · 2019-12-20T02:59:05.546Z · LW(p) · GW(p)
As you've seen people grow as rationalists and become more agentic, what patterns have you noticed in how people change their relationships with their emotions?
comment by lincolnquirk · 2019-12-19T18:53:37.331Z · LW(p) · GW(p)
What aspects of CFAR's strategy would you be most embarrassed by if they were generally known? :P
comment by habryka (habryka4) · 2019-12-21T04:20:22.209Z · LW(p) · GW(p)
Who is the rightful caliph?
Replies from: adam_scholl, s0ph1a↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T13:09:39.842Z · LW(p) · GW(p)
All hail Logmoth, the rightful caliph!
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-12-21T19:16:50.820Z · LW(p) · GW(p)
An innovative choice
comment by Ben Pace (Benito) · 2019-12-20T02:57:48.234Z · LW(p) · GW(p)
Anna previously wrote about the challenges of making your explicit reasoning trustworthy [LW · GW], and later about bucket errors [LW · GW] and how they are often designed to help keep one's reasoning sane. Can Anna and/or other instructors talk about how much you've seen people's reasoning get more trustworthy over time, and what that looks like? I'm also interested if you still feel like you catch yourself with false beliefs regularly, or how you think about it for yourself.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-20T03:04:38.387Z · LW(p) · GW(p)
As a follow-up, can you talk about times you've experienced participants having built explicit anti-epistemology to defend themselves from doing too much explicit reasoning? I'm talking about cases analogous to when people have made bucket errors and defend it anyway, so that they don't accidentally break things.
comment by Ben Pace (Benito) · 2019-12-20T02:57:21.382Z · LW(p) · GW(p)
You teach classes like Goal Factoring, Internal Double Crux, and TAPs (Trigger Action Plans), which are all about breaking parts of your mind down into smaller parts.
I expect you to endorse the statement "Breaking things down into smaller parts is good!" but I'm curious if you have any more detailed opinions about that. Can you share your sense of when it's the right next thing to do versus not the right next thing to do, when solving problems and understanding your own mind? I'd also be interested in stories of when you've seen people do it especially well or badly.
Replies from: BrienneYudkowsky↑ comment by LoganStrohl (BrienneYudkowsky) · 2019-12-21T03:28:50.621Z · LW(p) · GW(p)
which are all about breaking parts of your mind down into smaller parts
Na, my mind's a bunch of super tiny stuff to begin with. When I do IDC, I just stop in my unified-person-fabrication a little earlier than the point at which I've erased all ability to perceive internal distinctions.
(Sorry, I know that's not an answer to your question. Maybe somebody, perhaps even a future me, will come by and give you a real answer.)
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-21T03:51:40.307Z · LW(p) · GW(p)
Hah, I did not expect that reply. Do you think this is a pretty Brienne-specific way of working internally, or d’you think if I practised IDC with you a couple of times I’d start to realise this was how I worked too?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-12-21T04:35:52.000Z · LW(p) · GW(p)
I suppose I'm not sure why I think I'm a coherent agent generally.
I guess I have to talk as though I'm one a lot, using words like "I think" and "My perspective on this is" and "You're disregarding my preferences here".
I've found the CFAR classes called 'IDC' helpful when I myself am confused about what I want or what I think, especially in social situations where I'm e.g. feeling bad but not sure why, and I split the conflicting feelings into subagents that have beliefs and goals. If I'm able to actually name the supposed subagents that would give rise to the conflict I'm currently feeling (and I find this to be 90% of the battle), then I find that the confusion is quickly dissolved and I am able to more clearly integrate them into a whole.
To give a real example (or something pretty close to a real time I used it), it sounds internally something like:
Ah, this part of me is worried about me losing an alliance in my tribe. This part thinks that because the person I just talked to seems sad about the things I said, I hurt my alliances in the tribe. But another subagent, who advocates a lot for saying true things to friends even when it's uncomfortable, believes that moves like this, where you say true things that don't make the other person happy, will pay off in the long run in terms of more trusted alliances. Risk and reward often go hand-in-hand, and if I'm able to weather shorter periods in which the friendship has some negative interactions and the potential to break down, then I'll get the stronger alliances I want in the long term.
At which point I no longer felt conflicted, and I was able to just think about "What will I do next?" without having to go a level lower into why two parts of me were pulling in different directions.
comment by ChristianKl · 2019-12-19T20:52:19.350Z · LW(p) · GW(p)
What's your operating definition of what rationality happens to be?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2019-12-21T03:39:11.662Z · LW(p) · GW(p)
The capacity to develop true beliefs, so as to better achieve your goals.
comment by Rafael Harth (sil-ver) · 2019-12-19T20:29:39.650Z · LW(p) · GW(p)
What is your model wrt the link between intelligence and rationality?
comment by SebastianG (JohnBuridan) · 2019-12-20T17:55:45.161Z · LW(p) · GW(p)
What have you learned about transfer in your experience at CFAR? Have you seen people gain the ability to transfer the methods of one domain into other domains? How do you make transfer more likely to occur?
comment by ChristianKl · 2019-12-19T20:51:17.013Z · LW(p) · GW(p)
How much of the gain from participating in a mainline workshop is learning techniques?
Do you choose techniques to teach based on their effectiveness as the only criterion, or do you also teach some techniques not because they are the most effective but because you believe it's useful to bring participants into contact with ideas that are foreign to them?
comment by mako yass (MakoYass) · 2019-12-20T00:46:58.565Z · LW(p) · GW(p)
I've been developing a game. Systemically, it's about developing accurate theories: the experience of generating theories, probing specimens, firing off experiments, figuring out where the theories go wrong, and refining the theories into fully general laws of nature that are reliable enough to create perfect solutions to complex problem statements. This might make it sound complicated, but it does all of that with relatively few components. Here's a screenshot of the debug build of the game over a portion of the visual design scratchpad (ignore the bird thing, I was just doodling): https://makopool.com/fcfar.png
The rule/specimen/problem statement is the thing on the left; the experiments/solutions that the player has tried are on the right. You can sort of see in the scratchpad that I'm planning to change how the rule is laid out, to make it more central and to make the tree structure as clear as possible (although there's currently an animation where it sort of jiggles the branches in a way that I think makes the structure clear, it doesn't look as good this way).
It might turn out to be something like a teaching tool. It illuminates a part of cognition that I think we're all very interested in: not just comprehension, it also tests/trains (I would love to know which) directed creative problem-solving. It seems to reliably teach how frequently and inevitably our right-seeming theories will be wrong.
Playtesting it has been... kind of profound. I'll see a playtester develop a wrong theory and I'll see directly that there's no other way it could have gone. They could not have simply chosen to reserve judgement and not be wrong. They came up with a theory that made sense given the data they'd seen, and they had to be wrong. It is now impossible for me to fall for it when I'm presented with assertions like "It's our best theory and it's only wrong 16% of the time". To coin an idiom... you could easily hide the curvature of the earth behind an error rate that high; I know this because I've experienced watching all of my smartest friends try their best to get the truth and end up with something else instead.
The game will have to teach people to listen closely to anomalous cases and explore their borders until they find the final simple truth. People who aren't familiar with that kind of thinking tend to give up on the game very quickly. People who are familiar with that kind of thinking tend to find it very rewarding. It would be utterly impotent for me to only try to reach the group who already know most of what the game has to show them. It would be easy to do that. I really really hope I have the patience to struggle and figure out how to reach the group who does not yet understand why the game is fun, instead. It could fail to happen. I've burned out before.
My question: what do you think of that, what do you think of The Witness, and would you have any suggestions as to how I could figure out whether the game has the intended effects as a teaching tool?
Replies from: elityre, D_Malik↑ comment by Eli Tyre (elityre) · 2019-12-20T19:30:28.541Z · LW(p) · GW(p)
I think this project sounds cool. This might be (I don't know enough to know) an example of rationality training in something other than the CFAR paradigm of "1) present 2) techniques to 3) small groups 4) at workshops."
But I think your question is too high-context to easily answer. Is there a way I can play the current version? If so, I would try it for a bit and then tell you what I think, personally.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-12-21T22:39:55.329Z · LW(p) · GW(p)
If you have an Android phone, sure. I'll DM you a link to the apk. I should note, it's pretty brutal right now and I have not yet found a way to introduce enough primitives to the player to make really strict tests, so it's possible to guess your way all the way to the end. Consider the objective to be figuring out the laws, rather than solving the puzzles.
↑ comment by D_Malik · 2019-12-21T03:47:58.840Z · LW(p) · GW(p)
I don't understand that screenshot at all (maybe the resolution is too low?), but from your description it sounds in a similar vein to Zendo and Eleusis and Penultima, which you could get ideas from. Yours seems different though, and I'd be curious to know more details. I tried implementing some single-player variants of Zendo five years ago, though they're pretty terrible (boring, no graphics, probably not useful for training rationality).
I do think there's some potential for rationality improvements from games, though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun. I also think it'll be very difficult to achieve transfer to life-in-general, for the same reason that learning to ride a bike doesn't train you to move your feet in circles every time you sit in a chair. ("I pedal when I'm on a bike, to move forward; why would I pedal when I'm not on a bike, and my goal isn't to move forward? I reason this way when I'm playing this game, to get the right answer; why would I reason this way when I'm not playing the game, and my goal is to seem reasonable or to impress people or to justify what I've already decided?")
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-12-22T08:01:03.233Z · LW(p) · GW(p)
I've heard of Zendo and I've been looking for someone to play Eleusis with for a while heh (maybe I'll be able to get the local EA group to do it one of these days).
though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun
Fun isn't a generic substance. Fun is subjective. A person's sense of fun is informed by something. If you've internalised the rationalist ethos, if your gut trusts your mind, if you know deeply that rationality is useful and that training it is important, a game that trains rationality is going to be a lot of fun for you.
This is something I see often during playtesting. The people who're quickest to give up on the game tend to be the people who don't think experimentation and hypothesising have any place in their life.
I am worried about transfer failure. I guess I need to include discussion of the themes of the game and how they apply to real world situations. Stories about wrong theories, right theories, the power of theorising, the importance of looking closely at cases that break our theories.
I need to... make sure that people can find the symmetry between the game and parts of their lives.
comment by emanuele ascani (emanuele-ascani) · 2019-12-23T11:56:47.396Z · LW(p) · GW(p)
Would it be possible and cost-effective to release video courses at a much lower cost?