Posts

Key Decision Analysis - a fundamental rationality technique 2020-01-12T05:59:57.704Z · score: 45 (12 votes)
What were the biggest discoveries / innovations in AI and ML? 2020-01-06T07:42:11.048Z · score: 9 (4 votes)
Has there been a "memetic collapse"? 2019-12-28T05:36:05.558Z · score: 31 (6 votes)
What are the best arguments and/or plans for doing work in "AI policy"? 2019-12-09T07:04:57.398Z · score: 14 (2 votes)
Historical forecasting: Are there ways I can get lots of data, but only up to a certain date? 2019-11-21T17:16:15.678Z · score: 39 (11 votes)
How do you assess the quality / reliability of a scientific study? 2019-10-29T14:52:57.904Z · score: 78 (25 votes)
Request for stories of when quantitative reasoning was practically useful for you. 2019-09-13T07:21:43.686Z · score: 10 (4 votes)
What are the merits of signing up for cryonics with Alcor vs. with the Cryonics Institute? 2019-09-11T19:06:53.802Z · score: 20 (7 votes)
Does anyone know of a good overview of what humans know about depression? 2019-08-30T23:22:05.405Z · score: 14 (6 votes)
What is the state of the ego depletion field? 2019-08-09T20:30:44.798Z · score: 28 (11 votes)
Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? 2019-07-29T22:59:33.170Z · score: 85 (27 votes)
Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea? 2019-07-09T21:57:28.537Z · score: 21 (9 votes)
Does scientific productivity correlate with IQ? 2019-06-16T19:42:29.980Z · score: 28 (9 votes)
Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? 2019-06-16T19:12:48.358Z · score: 32 (11 votes)
Eli's shortform feed 2019-06-02T09:21:32.245Z · score: 31 (6 votes)
Historical mathematicians exhibit a birth order effect too 2018-08-21T01:52:33.807Z · score: 117 (39 votes)

Comments

Comment by elityre on Where are people thinking and talking about global coordination for AI safety? · 2020-01-14T15:20:26.882Z · score: 2 (1 votes) · LW · GW

Do you have resources on this topic to recommend?

Comment by elityre on Key Decision Analysis - a fundamental rationality technique · 2020-01-14T01:50:41.304Z · score: 3 (2 votes) · LW · GW

I fixed the typo.

At least for the time being, I'm abstracting away the specifics of most of the things I learned, because they are pretty personal.

Comment by elityre on Paper-Reading for Gears · 2020-01-14T01:48:34.293Z · score: 2 (1 votes) · LW · GW

This was super useful for me. Reading this post played a causal role in my starting to figure out how to do statistical analysis on my phenomenology-based psychological models. We'll see where this goes, but it might be enough to convert my qualitative model-building into quantitative science!

Thanks!

Comment by elityre on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T19:29:37.598Z · score: 5 (3 votes) · LW · GW

One thing I guess I could share:

I often make choices based on plans that I’m excited about at the time. But very frequently I don’t actually get very far with those plans before they peter out and falter / I move on to something else. When I am making choices about what to do, I should take my current plans with a grain of salt, because things actually change a lot from that initial vision.

Comment by elityre on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T19:22:52.747Z · score: 4 (2 votes) · LW · GW

I think that's pretty personal to me.

Comment by elityre on Key Decision Analysis - a fundamental rationality technique · 2020-01-12T19:13:54.033Z · score: 11 (6 votes) · LW · GW

Here are some examples, though as I said, I think my own definition of a decision was too strict:

  • I went to that forecasting day that included Carl Shulman, Katja, etc.
  • I didn’t do [multi week research project with some people].
  • I rejoined CFAR’s colloquium.
  • I decided to go to the mainline workshop in late February.
  • I bought a macbook air with a 512 GB hard drive and 18 GB of RAM. I returned it for a macbook pro with a 512 GB hard drive and 18 GB of RAM. This was $100 more expensive, but with a faster processor. For that reason I’m typing this on my old (often crashing) machine.
  • I opted not to attend the Bay NVC convergence facilitation training.
  • I returned my macbook pro to get a macbook air again, because of the better battery life.
  • I got on to Prague time the long way: by staying up late, sleeping all day, taking an evening plane, having a long travel day, and then crashing when I got to Europe.
  • I decided to come back from Europe early so that I could meet with Brienne and Duncan about instructor training, instead of hanging with FHI.
  • I didn't join the conversation about [topic] between [people].
  • I downloaded [that sketchy file].
  • I told [employer] that I could do about 10 to 12 hours of [category] work in October.
  • I bought access to AWC’s demonstrations of Focusing.
  • I stayed two extra days in Prague and then had a flight that left at 9:00 AM from Prague to Copenhagen, and then a connecting flight from Copenhagen to Oakland. Getting up really early to go to the airport didn’t suit me much, since I had been waking up around 10:00. So I bought an $85 ticket to Copenhagen a day early, and stayed in the cheapest hostel I could find.

I think at least one trigger for flagging decisions might be something like "I'm about to 'pull the trigger' on something." I have some amount of indecision, or conflictedness, and then I settle into one state or another.

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T09:28:49.061Z · score: 3 (2 votes) · LW · GW

Ooo. Thank you!

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T07:53:39.329Z · score: 2 (1 votes) · LW · GW

Deep Blue: a chess engine beats the reigning world chess champion, Garry Kasparov.

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T07:51:44.790Z · score: 2 (1 votes) · LW · GW

GPT-2.

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T07:48:46.707Z · score: 4 (2 votes) · LW · GW

AlexNet in 2012. I'm not super clear on the details, but it seems to be the first time a deep neural net substantially outperformed other AI methods, and thereby kicked off the deep learning revolution.

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T07:46:26.182Z · score: 2 (1 votes) · LW · GW

Frank Rosenblatt develops the perceptron algorithm.

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T07:46:05.418Z · score: 2 (1 votes) · LW · GW

AlphaZero.

Comment by elityre on What were the biggest discoveries / innovations in AI and ML? · 2020-01-06T07:45:46.343Z · score: 2 (1 votes) · LW · GW

AlphaGo.

Comment by elityre on Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical · 2020-01-03T23:58:27.360Z · score: 4 (2 votes) · LW · GW

This is great. I was so pleased to see all those footnote citations.

Comment by elityre on Has there been a "memetic collapse"? · 2019-12-29T00:07:02.595Z · score: 6 (3 votes) · LW · GW

Heh. This is a good observation.

Could it just be increasing literacy? One hypothesis might be that as more people read at all, the average reading level drops.

Comment by elityre on Has there been a "memetic collapse"? · 2019-12-29T00:05:52.867Z · score: 2 (1 votes) · LW · GW

I agree that it doesn't match up with the specific model that Eliezer outlines, but that model is part of a broader class of ideas with different time horizons. So I feel like this is still useful evidence.

Comment by elityre on Meditation Retreat: Immoral Mazes Sequence Introduction · 2019-12-28T06:16:37.642Z · score: 6 (3 votes) · LW · GW

Ehhhh.

I don't know, one might say that Moloch is slowly winning. And I do expect things to change pretty radically soon, which might provide an opportunity for a coup de grace.

Comment by elityre on Local Validity as a Key to Sanity and Civilization · 2019-12-28T05:26:32.629Z · score: 2 (1 votes) · LW · GW

Or, another way to put it: "someone" unskillfully said "the argument I offered is not a crux for me."

Comment by elityre on Local Validity as a Key to Sanity and Civilization · 2019-12-28T05:22:32.109Z · score: 2 (1 votes) · LW · GW
The notion that you can "be fair to one side but not the other", that what's called "fairness" is a kind of favor you do for people you like, says that even the instinctive sense people had of law-as-game-theory is being lost in the modern memetic collapse. People are being exposed to so many social-media-viral depictions of the Other Side defecting, and viewpoints exclusively from Our Side without any leavening of any other viewpoint that might ask for a game-theoretic compromise, that they're losing the ability to appreciate the kind of anecdotes they used to tell in ancient China.
(Or maybe it's hormonelike chemicals leached from plastic food containers. Let's not forget all the psychological explanations offered for a wave of violence that turned out to be lead poisoning.)

Is it also possible that it has always been like that? People mostly feeling that the other side is evil and trying to get the better of them, with a few people sticking up for fairness, and overall civilization just barely hanging on?

I like the reference to Little Fuzzy, but going further, how could we tell whether and how much things have changed on this dimension?

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-27T22:49:51.837Z · score: 32 (10 votes) · LW · GW

Actually, I think this touches on something that is useful to understand about CFAR in general.

Most of our "knowledge" (about rationality, about running workshops, about how people can react to x-risk, etc.) is what I might call "trade knowledge", it comes from having lots of personal experience in the domain, and building up good procedures via mostly-trial and error (plus metacognition and theorizing about noticed problems might be, and how to fix them).

This is distinct from scientific knowledge, which is built up from robustly verified premises and tested by explicit attempts at falsification.

(I'm reminded of an old LW post, that I can't find, about Eliezer giving some young kid (who wants to be a writer) writing advice, while a bunch of bystanders signal that they don't regard Eliezer as trustworthy.)

For instance, I might lead someone through an IDC-like process at a CFAR workshop. This isn't because I've done rigorous tests (or know of others who have done rigorous tests) of IDC, or because I've concluded from the neuroscience literature that IDC is the optimal process for arriving at true beliefs.

Rather, it's that I (and other CFAR staff) have interacted, a lot, with people who have a conflict between beliefs / models / urges / "parts", in addition to spending even more time engaging with those problems in ourselves. And from that exploration, this IDC-process seems to work well, in the sense of getting good results. So I have a prior that it will be useful for the nth person. (Of course sometimes this isn't the case, because people can be really different, and occasionally a tool will be ineffective, or even harmful, despite being extremely useful for most people.)

The same goes for, for instance, whatever conversational facilitation acumen I've acquired. I don't want to be making a claim that, say, "finding a Double Crux is the objectively correct process, or the optimal process, for resolving disagreements." Only that I've spent a lot of time resolving disagreements, and, at least sometimes, at least for me, this strategy seems to help substantially.

I can also give theoretical reasons why I think it works, but those theoretical reasons are not much of a crux: if a person can't seem to make something useful happen when they try to Double Crux, but something useful does happen when they do this other thing, I think they should do the other thing, theory be damned. It might be that that person is trying to apply the Double Crux pattern in a domain that it's not suited for (but I don't know that, because I haven't tried to work in that domain yet), or it might be that they're missing a piece or doing it wrong, and we might be able to iron it out if I observed their process, or maybe they have some other skill that I don't have myself, and they're so good at that skill that trying to do the Double Crux thing is a step backwards (in the same way that there are different schools of martial arts).

The fact that my knowledge, and CFAR's knowledge, in these domains is trade knowledge has some important implications:

  • It means that our content is path dependent. There are probably dozens or hundreds of stable, skilled "ways of engaging with minds." If you're trying to build trade knowledge, you will end up gravitating to one cluster and building out skill and content there, even if that cluster is a local optimum and another cluster is more effective overall.
  • It means that you're looking for skill, more than declarative third-person knowledge, and that you're not trying to make things that are legible to other fields. A carpenter wants to have good techniques for working with wood, and in most cases doesn't care very much if his terminology or ontology lines up with that of botany.
    • For instance, maybe to the carpenter there are 3 kinds of knots in wood, and they need to be worked with in different ways, but he's actually conflating 2 kinds of biological structures in the first type, and the second and third type are actually the same biological structure, but flipped vertically (because sometimes the wood is "upside down" from the orientation of the tree). The carpenter, qua carpenter, doesn't care about this. He's just trying to get the job done. But that doesn't mean that bystanders should get confused and think that the carpenter thinks that he has discovered some new, superior framework of botany.
  • It means that a lot of content can only easily be conveyed tacitly and in person; at the least, making it accessible via writing, etc. is an additional hard task.
    • Carpentry (I speculate) involves a bunch of subtle, tacit, perceptual maneuvers, like (I'm making this up) learning to tell when the wood is "smooth to the grain" or "soft and flexible", and looking at a piece of wood and knowing that you should cut it up top near the knot, even though that seems like it would be harder to work around, because of how "flat" it gets down the plank. (I am still totally making this up.) It is much easier to convey these things to a learner who is right there with you, so that you can watch their process, and, for instance, point out exactly what you mean by "soft and flexible" via iterated demonstration.
    • That's not to say that you couldn't figure out how to teach the subtle art of carpentry via blog post or book, but you would have to figure out how to do that (and it would still probably be worse than learning directly from someone skilled). This is related to why CFAR has historically been reluctant to share the handbook: the handbook sketches the techniques, and is a good reminder, but we don't think it conveys the techniques particularly well, because that's really hard.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-26T00:58:24.359Z · score: 7 (4 votes) · LW · GW

It's not sensitive so much as context-heavy, and I don't think I can easily go into it in brief. I do think it would be good if we had a way to propagate different people's experiences of things like Circling better.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-26T00:56:49.548Z · score: 9 (6 votes) · LW · GW

A Minute to Unlimit You

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-25T05:24:14.427Z · score: 9 (5 votes) · LW · GW
then go down the rabbit hole of finding all the other things created by that person, and all of their sources, influences, and collaborators.

Oh. Yeah. I think this is pretty good. When someone does something particularly good, I do try to follow up on all their stuff.

And, I do keep track of the histories of the various lineages and where people came from and what influenced them. It's pretty interesting how many different things are descended from the same nodes.

But, you know, limited time. I don't follow up on everything.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-25T03:26:36.702Z · score: 41 (11 votes) · LW · GW

Some sampling of things that I'm currently investigating / interested in (mostly not for CFAR), and sources that I'm using:

  • Power and propaganda
    • reading The Dictator's Handbook and some of the authors' other work.
    • reading Kissinger's books
    • rereading Samo's draft
    • some "evil literature" (an example of which is "things Brent wrote")
    • thinking and writing
  • Disagreement resolution and conversational mediation
    • I'm currently looking into some NVC materials
    • lots and lots of experimentation and iteration
  • Focusing, articulation, and aversion processing
    • Mostly iteration with lots of notes.
    • Things like PJ Eby's excellent ebook.
    • Reading other materials from the Focusing institute, etc.
  • Ego and what to do about it
    • Byron Katie's The Work (I'm familiar with this from years ago; it has an epistemic core (one key question is "Is this true?"), and PJ Eby mentioned using this process with clients.)
    • I might check out Eckhart Tolle's work again (which I read as a teenager)
  • Learning
    • Mostly iteration as I learn things on the object level, right now, but I've read a lot on deliberate practice, and study methodology, as well as learned general learning methods from mentors, in the past.
    • Talking with Brienne.
    • Part of this project will probably include a lit review on spacing effects and consolidation.
  • General rationality and stuff:
    • reading Artificial Intelligence: a Modern Approach
    • reading David Deutsch's The Beginning of Infinity
    • rereading IQ and Human Intelligence
    • The Act of Creation
    • Old Michael Vassar talks on YouTube
    • Thinking about the different kinds of knowledge creation, and how rigorous arguments (mathematical proofs, engineering schematics) work.

I mostly read a lot of stuff, without a strong expectation that it will be right.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-25T03:12:20.807Z · score: 27 (8 votes) · LW · GW

I'm going to make a general point first, and then respond to some of your specific objections.

General point:

One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge) that are relevant to problems that we care about.

But there's a lot of that in the world, and most of it is not very reliable. My response was only to point at a heuristic that I use in assessing those bodies of knowledge, and weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.

And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)

Specific points:

Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.

On the face of it, I wouldn't assume that it is reliable, but I don't have that strong a reason to assume that it isn't a priori.

A posteriori, my experience of being in Circles is that there is sometimes an incentive to obscure what's happening for you in a circle, but that, at least with skilled facilitation, there is usually enough trust in the process that that doesn't happen. This is helped by the fact that there are many degrees of freedom in terms of one's response: I might say, "I don't want to share what's happening for me" or "I notice that I don't want to engage with that."

I could be typical minding, but I don't expect most people to lie outright in this context.

(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)

That seems like a reasonable hypothesis.

Not sure if it's a crux, insofar as if something works well in circling, you can intentionally import the circling context. That is, if you find that you can in fact transfer intuitions, process fears, track what's motivating a person, etc., effectively in the circling context, an obvious next step might be to try and do this on topics that you care about, in the circling context. e.g. Circles on X-risk.

In practice it seems to be a little bit of both: I've observed people build skills in circling that they apply in other contexts, and also their other contexts do become more circling-y.

Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address.

Sorry, I wasn't really trying to give a full response to your question, just dropping in with a little "here's how I do things."

You're referring to this question?

What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)

I expect there's some talking past each other going on, because this question seems surprising to me.

Um. I don't think there are examples of their output with regard to research or research intuitions. The Circlers aren't trying to do that, even a little. They're a funny subculture that engages a lot with an interpersonal practice, with the goals of fuller understanding of self and deeper connections with others (roughly; I'm not sure that they would agree that those are the goals).

But they do pass some of my heuristic checks for "something interesting might be happening here." So I might go investigate and see what skill there is over there, and how I might be able to re-purpose that skill for other goals that I care about.

Sort of like (I don't know) if I were a biologist in an alternative world, and I had an inkling that I could do population simulations on a computer, but I don't know anything about computers. So I go look around and I see who does seem to know about computers. And I find a bunch of hobbyists who are playing with circuits and making very simple video games, and have never had a thought about biology in their lives. I might hang out with these hobbyists and learn about circuits and making simple computer games, so that I can learn skills for making population simulations.

This analogy doesn't quite hold up, because it's easier to verify that the hobbyists are actually successfully making computer games, and to verify that their understanding of circuits reflects standard physics. The case of the Circlers is less clear-cut, because it is less obvious that they are doing anything real, and because their own models of what they are doing and how are a lot less grounded.

But I think the basic relationship holds up, noting that figuring out which groups of hobbyists are doing real things is much trickier.

Maybe to say it clearly: I don't think it is obvious, or a slam dunk, or definitely the case (and if you don't think so then you must be stupid or misinformed) that "Circling is doing something real." But also, I have heuristics that suggest that Circling is more interesting than a lot of woo.

In terms of evidence that make me think Circling is interesting (which again, I don't expect to be compelling to everyone):

  • Having decent feedback loops.
  • Social evidence: A lot of people around me, including Anna, think it is really good.
  • Something like "universality". (This is hand-wavy) Circling is about "what's true", and has enough reach to express or to absorb any way of being or any way the world might be. This is in contrast to many forms of woo, which have an ideology baked into them that reject ways the world could be a priori, for instance that "everything happens for a reason". (This is not to say that Circling doesn't have an ideology, or a metaphysics, but it is capable of holding more than just that ideology.)
  • Circling is concerned with truth, and getting to the truth. It doesn't reject what's actually happening in favor of a nicer story.
  • I can point to places where some people seem much more socially skilled, in ways that relate to circling skill.
  • Pete is supposedly good at detecting lying.
  • The thing I said about picking out people who "seemed to be doing something" and turned out to be circlers.
  • Somehow people do seem to cut past their own bullshit in circles, in a way that seems relevant to human rationality.
  • I've personally had some (few) meaningful realizations in Circles.

I think all of the above are much weaker evidence than...

  • "I did x procedure, and got y, large, externally verifiable result",

or even,

  • "I did v procedure, and got u, specific, good (but hard to verify externally) result."

These days, I generally tend to stick to doing things that are concretely and fairly obviously (if only to me) having good immediate effects. If there aren't pretty immediate, obvious effects, then I won't bother much with it. And I don't think circling passes that bar (for me at least). But I do think there are plenty of reasons to be interested in circling, for someone who isn't following that heuristic strongly.


I also want to say, while I'm giving a sort-of-defense of being interested in circling, that I'm, personally, only a little interested.

I've done some ~1000 hours of Circling retreats, for personal reasons rather than research reasons (though admittedly the two are often entangled). I think I learned a few skills, which I could have learned faster, if I knew what I was aiming for. My ability to connect / be present with (some) others, improved a lot. I think I also damaged something psychologically, which took 6 months to repair.

Overall, I concluded it was fine, but I would have done better to train more specific and goal-directed skills like NVC. Personally, I'm more interested in other topics, and other sources of knowledge.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-25T00:48:00.240Z · score: 15 (6 votes) · LW · GW

Hahahahah. Strong agree.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-25T00:41:56.533Z · score: 8 (4 votes) · LW · GW

This is the best idea I've heard yet.

It would be pretty confusing to people, and yet...

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T23:47:25.510Z · score: 13 (4 votes) · LW · GW
We’ve been internally joking about renaming ourselves this for some months now.

I'm not really joking about it. I wish the name better expressed what the organization does.

Though I admit that CfBCSSS leaves a lot to be desired in terms of acronyms.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T23:17:19.779Z · score: 13 (4 votes) · LW · GW
because Elon Musk is a terrible leader

This is a drive-by, but I don't believe this statement, based on the fact that Elon has successfully accomplished several hard things via the use of people organized in hierarchies (companies). I'm sure he has foibles, and it might not be fun to work for him, but he does get shit done.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T23:12:52.590Z · score: 6 (5 votes) · LW · GW

Oh, and as a side note: I have twice in my life had a short introductory conversation with a person, noticed that something unusual or interesting was happening (without having any idea what), and then found out subsequently that the person I was talking with had done a lot of circling.

The first person was Pete, who I had a conversation with shortly after EAG 2015, before he came to work for CFAR. The other was an HR person at a tech company that I was cajoled into interviewing at, despite not really having any relevant skills.

I would be hard pressed to say exactly what was interesting about those conversations: something like "the way they were asking questions was...something. Probing? Intentional? Alive?" Those words really don't capture it, but whatever was happening, I had a detector that pinged "something about this situation is unusual."

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T23:05:34.332Z · score: 23 (9 votes) · LW · GW
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)

I'm speaking for myself here, not any institutional view at CFAR.

When I'm looking at maybe-experts, woo-y or otherwise, one of the main things that I'm looking at is the nature and quality of their feedback loops.

When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason "well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct." This doesn't seem that far off from what Circling is. (For instance, "I have a story that you're feeling defensive" -> "I don't feel defensive, so much as righteous. And...There's a flowering of heat in my belly.")

Circling does not seem like a perfect training regime, to my naive sensors, but if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that they would get increasingly skilled along a particular axis.

This makes it seem worthwhile training with masters in that domain, to see what skills they bring to bear. And I might find out that some parts of the practice which seemed off the mark from my naive projection of how I would design a training environment are actually features, not bugs.

This is in contrast to say, "energy healing". Most forms of energy healing do not have the kind of feedback loop that would lead to a person acquiring skill along a particular axis, and so I would expect them to be "pure woo."

For that matter, I think a lot of "Authentic Relating" seems like a much worse training regime than Circling, for a number of reasons, including that AR (ironically) seems to more often incentivize people to share warm and nice-sounding, but less-than-true, sentiments than Circling does.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T22:48:24.705Z · score: 9 (5 votes) · LW · GW

Well, there are a lot of things out there. Why did you promote these ones?

CFAR staff have done a decent amount of trawling through self-help space; in particular, people did investigation that turned up Focusing, Circling, and IFS. There have also been other things that people around here tried that haven't gone much further.

Granted, this is not a systematic investigation of the space of personal development stuff, but that seems less promising to me than people thinking about particular problems (often personal problems, or problems that they've observed in the rationality and EA communities) and investigating known solutions or attempted solutions that relate to those problems.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T22:42:12.326Z · score: 11 (8 votes) · LW · GW

Also, the meetup groups are negatively selected for agency and initiative, because, for better or for worse, the most initiative-taking people often pick up and move to the hubs in the Bay or in Oxford.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-24T22:39:20.507Z · score: 34 (8 votes) · LW · GW

This doesn't capture everything, but one key piece is "People often confuse a lack of motivation to introspect with a lack of ability to introspect. The fact of confabulation does not demonstrate that people are unable to articulate what's actually happening, in principle." Very related to the other post on confabulation I note above.

Also, if I remember correctly, some of the papers in that meta-analysis just have silly setups: testing whether people can introspect into information that they couldn't have access to. (It's possible that I misunderstood or am misremembering.)

To give a short positive account:

  • All introspection depends on comparison between mental states at different points in time. You can't introspect on some causal factor that doesn't vary.
  • Also, the information has to be available at the time of introspection, ie still in short term memory.
  • But that gives a lot more degrees of freedom than people seem to predict, and in practice I am able to notice many subtle intentions (such as when my behavior is motivated by signalling) that others want to throw out as unknowable.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-22T06:21:05.651Z · score: 23 (6 votes) · LW · GW

The two best books on Rationality:

  • The Sequences
  • Principles by Ray Dalio (I read the PDF that leaked from Bridgewater. I haven't even looked at the actual book.)

My starter kit for people who want to build the core skills of this mind / personal effectiveness stuff (I reread all of these, for reminders every 2 years or so):

  • Getting Things Done: The Art of Stress-Free Productivity
  • Nonviolent Communication: A Language of Life
  • Focusing
  • Thinking, Fast and Slow

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-22T06:12:11.440Z · score: 11 (6 votes) · LW · GW

I haven't done any of the programs you mentioned. And I'm pretty young, so my selection is limited. But I've done lots of personal development workshops and trainings, both before and after my CFAR workshop, and my CFAR workshop was far and away the densest in terms of content, and the most transformative for both my day-to-day processing and my life trajectory.

The only thing that compares are some dedicated, years long relationships with skilled mentors.

YMMV. I think my experience was an outlier.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-22T06:03:52.017Z · score: 47 (8 votes) · LW · GW

Some off the top of my head.

  • A bunch of Double Crux posts that I keep promising but am very bad at actually finishing.
  • The Last Term Problem (or why saving the world is so much harder than it seems) - An abstract decision-theoretic problem that has confused me about taking actions at all for the past year.
  • A post on how the commonly cited paper on how "Introspection is Impossible" (Nisbett and Wilson) is misleading.
  • Two takes on confabulation - About how the Elephant in the Brain thesis doesn't imply that we can't tell what our motivations actually are, just that we aren't usually motivated to.
  • A lit review on mental energy and fatigue.
  • A lit review on how attention works.

Most of my writing is either private strategy documents, or spur of the moment thoughts / development-nuggets that I post here.

Comment by elityre on The Proper Use of Humility · 2019-12-22T02:36:38.033Z · score: 3 (2 votes) · LW · GW
“To be humble is to take specific actions in anticipation of your own errors.

Do you, or anyone, have good examples of such specific actions?

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-21T04:01:18.324Z · score: 21 (11 votes) · LW · GW
On the other hand, suggestions such as “at least ≥ 95 CRI (and mostly ≤ 3500k) LED, ideally coming from somewhere other than the center of the ceiling, ideally being filtered through a yellow-ish lampshade” make no sense at all if arrived at via… what, exactly? Just trying different things and seeing which of them seemed like it was good?

Why not?

If you're running many, many events, and one of your main goals is to get good conversations happening, you'll begin to build up an intuition about which things help and hurt. For instance, you look at a room and think, "it's too dark in here." Then you go get your extra bright lamps, and put them in the middle of the room, and everyone is like "ah, that is much better, I hadn't even noticed."

It seems like if you do this enough, you'll end up with pretty specific recommendations like what Adam outlined.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-20T19:30:28.541Z · score: 14 (6 votes) · LW · GW

I think this project sounds cool. This might be (I don't know enough to know) an example of rationality training in something other than the CFAR paradigm of "1) present 2) techniques to 3) small groups 4) at workshops."

But I think your question is too high-context to easily answer. Is there a way I can play the current version? If so, I would try it for a bit and then tell you what I think, personally.

Comment by elityre on We run the Center for Applied Rationality, AMA · 2019-12-20T18:38:09.394Z · score: 14 (8 votes) · LW · GW

To be clear, this is not to say that those skills are bad, or even that they’re not an important part of rationality. More than half of the CFAR staff (at least 5 of the 7 current core staff, not counting myself, as a contractor) have personally trained their calibration, for instance.

In general, just because something isn’t in the CFAR workshop doesn’t mean that it isn’t an important part of rationality. The workshop is only 4 days, and not everything is well-taught in a workshop context (as opposed to [x] minutes of practice every day, for a year, or something like an undergraduate degree).

Comment by elityre on Historical mathematicians exhibit a birth order effect too · 2019-12-18T17:47:02.795Z · score: 8 (4 votes) · LW · GW
(Personal footnote: This post was essentially what converted me from a LessWrong lurker to a regular commentor/contributor - I think it was mainly just being impressed with how thorough it was and thinking that's the kind of community I'd like to get involved with.)

: )

Comment by elityre on Contra double crux · 2019-12-13T18:22:23.851Z · score: 2 (1 votes) · LW · GW

Some updates on what I think about Double Crux these days are here.

Comment by elityre on Best reasons for pessimism about impact of impact measures? · 2019-12-13T18:21:22.154Z · score: 4 (2 votes) · LW · GW

Not really. Just that when I look at the text I wrote now, it seems a little hacky / not quite expressing the true spirit of the mental motions that seem useful to me.

It might still be a good procedure for bootstrapping into the right mental motions, though? I haven't done any testing on this one, so I don't know.

Comment by elityre on Best reasons for pessimism about impact of impact measures? · 2019-12-13T00:34:26.889Z · score: 2 (1 votes) · LW · GW

I no longer fully endorse this comment, though I recommend this procedure to anyone who thinks it sounds interesting.

Comment by elityre on What are the best arguments and/or plans for doing work in "AI policy"? · 2019-12-10T19:41:48.557Z · score: 15 (4 votes) · LW · GW

As I said, I haven't oriented on this subject yet, and I'm talking from my intuition, so I might be about to say stupid things. (And I might think different things on further thought. I think I 60% to 75% "buy" the arguments that I make here.)

I expect we have very different worldviews about this area, so I'm first going to lay out a general argument, which is intended to give context, and then respond to your specific points. Please let me know if anything I say seems crazy or obviously wrong.

General Argument

My intuition says that in general, governments can only be helpful after the core, hard problems of alignment have been solved. After that point, there isn't much for them to do, and before that point, I think they're much more likely to cause harm, for the sorts of reasons I outline in this comment.

(There is an argument that EAs should go into policy because the default trajectory involves governments interfering in the development of powerful AI, and having EAs in the mix is apt to make that interference smaller and saner. I'm sympathetic to that, if that's the plan.)

To say it more specifically: governments are much stupider than people, and can only do sane, useful things if there is a very clear, legible, common knowledge standard for which things are good and which things are bad.

  • Governments are not competent to do things like assess which technical research is promising. Especially not in fields that are as new and confusing as AI safety, where the experts themselves disagree about which approaches are promising. But my impression is that governments are mostly not even competent to do much more basic assessment of things like "which kinds of batteries for electric cars, seem promising to invest in? (Or even physically plausible?)"
    • There do appear to be some exceptions to this. DARPA and IARPA seem to be well designed for solving some kinds of important engineering problems, via a mechanism that spawns many projects and culls most of them. I bet DARPA could make progress on AI alignment if there were clear, legible targets to try and hit.
  • Similarly, governments can constrain the behavior of other actors via law, but this only seems useful if it is very clear what standards they should be enforcing. If legislatures freak out about the danger of AI, and then come up with the best compromise solution they can for making sure "no one does anything dangerous" (from an at-best-partial understanding of the technical details), I expect this to be harmful on net, because it inserts semi-random obstacles in the way of the technical experts on the ground trying to solve the problem.

. . .


There are only two situations in which I can foresee policy having a major impact: a non-extreme story and an extreme story.

The first, non-extreme story is when all of the following conditions hold...

1) Earth experiences a non-local takeoff.
2) We have known, common knowledge, technical solutions to intent alignment.
3) Those technical solutions are not competitive with alternative methods that "cut corners", with regard to alignment, but which do succeed in hitting the operator's goals in the short term.

In this case we know what needs to be done to ensure safe AI, but we have a commons problem: Everyone is tempted to forgo the alignment "best practices" because they're very expensive (in money, or time, or whatever) and you can get your job done without any fancy alignment tech.

But every unit of unaligned optimization represents a kind of "pollution", which adds up to a whimper, or eventually catalyzes a bang.

In this case, what governments should do is simple: tax, or outlaw, unalignment pollution. We still have a bit of an issue in that this tax or ban needs to be global, and free riders who do pollute will get huge gains from their cheaper unaligned AI, but this is basically analogous to the problem of governments dealing with global climate change.

But if any of the above conditions don't hold, then it seems like our story starts to fall apart.

1) If takeoff is local, then I'm confused about how things are supposed to play out. DeepMind (or some other team) builds a powerful AI system that automates AI research, but is constrained by the government telling them what to do? How does the government know how to manage the intelligence explosion better than the by-definition, literal leaders of the field?

I mean, I hope they use the best alignment technology available, but if the only reason why they are doing that is "it's the law", something went horribly wrong already. I don't expect constraints made by governments to compensate for a team that doesn't know or care about alignment. And given how effective most bureaucracies are, I would prefer that a team that does know and care about alignment not need to work around the constraints imposed by a legislature somewhere.

(More realistically, in a local takeoff scenario, it seems plausible that the leading team is nationalized, or there is otherwise a very close cooperation between the technical leaders of that team, and military (and political?) leaders of the state, in the style of the Manhattan project.

But this doesn't look much like "policy" as we typically think about it, and the only way to influence this development would be to be part of the technical team, or be one of the highest ranking members of the military, up to the president him/herself. [I have more to say about the Manhattan project, and the relevance to large AI projects, but I'll go into that another time.])

But maybe the government is there as a backstop to shut down any careless or reckless projects, while the leaders are slowly and carefully checking and double-checking the alignment of their system? In which case, see the extreme scenario below.

2) If we don't have solutions to intent alignment, or we don't have common knowledge that they work, then we don't have anything that we can carve off as "fine and legal" in contrast to the systems that are bad and should be taxed or outlawed.

If we don't have such a clear distinction, then there's not much that we can do, except ban AI, or ML entirely (or maybe ban AI above a certain compute threshold, or optimization threshold), which seems like a non-starter.

3) If there aren't more competitive alternatives to intent-aligned systems, then we don't need to bother with policy: the natural thing to do is to use intent-aligned systems.


The second, extreme scenario in which government can help:

We're establishing a global coalition that is going to collectively build safe AI, and we're going to make building advanced AI outside of that coalition illegal.

Putting the world on lock-down, and surveilling all the compute to make sure that no one is building an AI, while the global coalition figures out how to launch a controlled, aligned intelligence explosion.

This seems maybe good, if totally implausible from looking at today's world.

Aside from those two situations, I don't see how governments can help, because governments are not savvy enough to do the right thing on technically complicated topics.

Responding to your specific points

  • Funding safety research

This is only any use at all if governments can easily identify tractable research programs that actually contribute to AI safety, instead of having "AI safety" as a cool tagline. I guess that you imagine that that will be the case in the future? Or maybe you think that it doesn't matter if they fund a bunch of terrible, pointless research, if some "real" research also gets funded?

  • Building aligned AI themselves?

What? It seems like this is only possible if the technical problem is solved and known to be solved. At that point, the problem is solved.

  • Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")

Again, if there are existing, legible standards of what's safe and what isn't, this seems good. But without such standards, I don't know how this helps.

It seems like most of what makes this work is inside of the "comprehensive review"? If our civilization knows how to do that well, then having the government insist on it seems good, but if we don't know how to do that well, then this looks like security theater.

  • Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")

This has the same issue as above.

[Overall, I something like 60% to 75% believe the arguments that I outline in this comment.]

(Some) cruxes:

  • [Partial] We are going to have clear, legible, standards for aligning AI systems.
  • We're going to be in scenario 1 or scenario 2 that I outlined above.
  • For some other reason, we will have some verified pieces of alignment technology, but AI employers won't use that technology by default
    • Maybe because tech companies are much more reckless or near-sighted than I'm imagining?
  • Governments are much more competent than I currently believe, or will become much more competent before the endgame.
  • EAs are planning to go into policy to try to make the governmental reaction smaller and saner, rather than try to push the government into positive initiatives, and the EAs are well-coordinated about this.
  • In a local takeoff scenario, the leading team is not concerned about alignment or is basically not cosmopolitan in its values.

Comment by elityre on What are the best arguments and/or plans for doing work in "AI policy"? · 2019-12-09T19:47:33.221Z · score: 2 (1 votes) · LW · GW

Ok. Good to note.

Comment by elityre on What are the best arguments and/or plans for doing work in "AI policy"? · 2019-12-09T19:47:13.414Z · score: 3 (4 votes) · LW · GW

The "obvious resources" are just what I want. Thanks.

Comment by elityre on Drowning children are rare · 2019-12-06T05:50:56.510Z · score: 2 (1 votes) · LW · GW
except that I suspect you aren't tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct.

Interesting.

Quick check: what's your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?

I don't know, certainly not off by more than a half billion in either direction? I don't know how hard it is to estimate the number of people on earth. It doesn't seem like there's much incentive to mess with the numbers here.

Comment by elityre on Drowning children are rare · 2019-12-06T05:48:23.482Z · score: 3 (2 votes) · LW · GW

If I replaced the word "gang" here with the word "ingroup" or "club" or "class", would that seem just as good?

In these sentences in particular...

Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc.

and

It basically involves sending or learning how to send a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and integrating socially into a network.

...I'm tempted to replace the word "gang" with the word "ingroup".

My guess is that you would say, "An ingroup that coordinates to exclude / freeze out non-ingroup-members from a market is a gang. Let's not mince words."