Posts

What are some good pieces on civilizational decay / civilizational collapse / weakening of societal fabric? 2021-08-06T07:18:10.073Z
What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? 2021-07-25T06:47:56.249Z
I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction 2021-06-22T03:53:33.868Z
Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? 2021-06-20T21:57:04.078Z
How do we prepare for final crunch time? 2021-03-30T05:47:54.654Z
What are some real life Inadequate Equilibria? 2021-01-29T12:17:15.496Z
#2: Neurocryopreservation vs whole-body preservation 2021-01-13T01:18:05.890Z
Some recommendations for aligning the decision to go to war with the public interest, from The Spoils of War 2020-12-27T01:04:47.186Z
What is the current bottleneck on genetic engineering of human embryos for improved IQ 2020-10-23T02:36:55.748Z
How To Fermi Model 2020-09-09T05:13:19.243Z
Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? 2020-08-14T19:19:30.980Z
Basic Conversational Coordination: Micro-coordination of Intention 2020-07-27T22:41:53.236Z
If you are signed up for cryonics with life insurance, how much life insurance did you get and over what term? 2020-07-22T08:13:38.931Z
The Basic Double Crux pattern 2020-07-22T06:41:28.130Z
What are some Civilizational Sanity Interventions? 2020-06-14T01:38:44.980Z
Ideology/narrative stabilizes path-dependent equilibria 2020-06-11T02:50:35.929Z
Most reliable news sources? 2020-06-06T20:24:58.529Z
Anyone recommend a video course on the theory of computation? 2020-05-30T19:52:43.579Z
A taxonomy of Cruxes 2020-05-27T17:25:01.011Z
Should I self-variolate to COVID-19 2020-05-25T20:29:42.714Z
My dad got stung by a bee, and is mildly allergic. What are the tradeoffs involved in deciding whether to have him go to the emergency room? 2020-04-18T22:12:34.600Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T05:00:35.435Z
Resource for the mappings between areas of math and their applications? 2020-03-30T06:00:10.297Z
When are the most important times to wash your hands? 2020-03-15T00:52:56.843Z
How likely is it that US states or cities will prevent travel across their borders? 2020-03-14T19:20:58.863Z
Recommendations for a resource on very basic epidemiology? 2020-03-14T17:08:27.104Z
What is the best way to disinfect a (rental) car? 2020-03-11T06:12:32.926Z
Model estimating the number of infected persons in the bay area 2020-03-09T05:31:44.002Z
At what point does disease spread stop being well-modeled by an exponential function? 2020-03-08T23:53:48.342Z
How are people tracking confirmed Coronavirus cases / Coronavirus deaths? 2020-03-07T03:53:55.071Z
How should I be thinking about the risk of air travel (re: Coronavirus)? 2020-03-02T20:10:40.617Z
Is there any value in self-quarantine (from Coronavirus), if you live with other people who aren't taking similar precautions? 2020-03-02T07:31:10.586Z
What should be my triggers for initiating self quarantine re: Corona virus 2020-02-29T20:09:49.634Z
Does anyone have a recommended resource about the research on behavioral conditioning, reinforcement, and shaping? 2020-02-19T03:58:05.484Z
Key Decision Analysis - a fundamental rationality technique 2020-01-12T05:59:57.704Z
What were the biggest discoveries / innovations in AI and ML? 2020-01-06T07:42:11.048Z
Has there been a "memetic collapse"? 2019-12-28T05:36:05.558Z
What are the best arguments and/or plans for doing work in "AI policy"? 2019-12-09T07:04:57.398Z
Historical forecasting: Are there ways I can get lots of data, but only up to a certain date? 2019-11-21T17:16:15.678Z
How do you assess the quality / reliability of a scientific study? 2019-10-29T14:52:57.904Z
Request for stories of when quantitative reasoning was practically useful for you. 2019-09-13T07:21:43.686Z
What are the merits of signing up for cryonics with Alcor vs. with the Cryonics Institute? 2019-09-11T19:06:53.802Z
Does anyone know of a good overview of what humans know about depression? 2019-08-30T23:22:05.405Z
What is the state of the ego depletion field? 2019-08-09T20:30:44.798Z
Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? 2019-07-29T22:59:33.170Z
Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea? 2019-07-09T21:57:28.537Z
Does scientific productivity correlate with IQ? 2019-06-16T19:42:29.980Z
Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? 2019-06-16T19:12:48.358Z
Eli's shortform feed 2019-06-02T09:21:32.245Z
Historical mathematicians exhibit a birth order effect too 2018-08-21T01:52:33.807Z

Comments

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-11-25T18:22:26.821Z · LW · GW

What was the best conference that you ever attended?

Comment by Eli Tyre (elityre) on Ngo and Yudkowsky on alignment difficulty · 2021-11-19T18:03:40.099Z · LW · GW

Some of your examples don't prove anything,

I agree that they aren't conclusive. 

But are you suggesting that the reckless driving was well-considered expected utility maximization? 

I can see that if fatal accidents were rare enough, I guess, but I don't think that was the case?

"Activities that have a small, but non-negligible chance of death or permanent injury are not worth the immediate short-term thrill", seems like a textbook case of a conclusion one would draw from considering expected utility theory in practice, in one's life. 

At minimum, it seems like there ought to be Pareto improvements that are just as fun, or close to it, but which entail a lot less risk?

Comment by Eli Tyre (elityre) on Ngo and Yudkowsky on alignment difficulty · 2021-11-18T01:28:35.659Z · LW · GW

Von Neumann was actually a fairly reflective fellow who knew about, and indeed helped generalize, utility functions. The great achievements of von Neumann were not achieved by some very specialized hypernerd who spent all his fluid intelligence on crystallizing math and science and engineering alone, and so never developed any opinions about politics or started thinking about whether or not he had a utility function.

Uh. I don't know about that.

Von Neumann seemed to me to be very much not making rational tradeoffs of the sort that one would if they were conceptualizing themselves as an agent with a utility function. 

From a short post I wrote, a few years ago, after reading a bit about the man: 

For one thing, at the end of his life, he was terrified of dying. But throughout the course of his life he made many reckless choices with his health.

He ate gluttonously and became fatter and fatter over the course of his life. (One friend remarked that he “could count anything but calories.”)

Furthermore, he seemed to regularly risk his life when driving.

  • Von Neuman was an aggressive and apparently reckless driver. He supposedly totaled his car every year or so. An intersection in Princeton was nicknamed “Von Neumann corner” for all the auto accidents he had there. records of accidents and speeding arrests are preserved in his papers. [The book goes on to list a number of such accidents.] (pg. 25)

(Amusingly, Von Neumann’s reckless driving seems due, not to drinking and driving, but to singing and driving. “He would sway back and forth, turning the steering wheel in time with the music.”)

I think I would call this a bug.

Comment by Eli Tyre (elityre) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-14T21:48:34.816Z · LW · GW

This 12-minute Robert Miles video is a good introduction to the basic argument for why "stopping at destroying earth, and not proceeding to convert the universe into computronium" is implausible.

 

Comment by Eli Tyre (elityre) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-14T21:39:52.337Z · LW · GW

FWIW, I didn't have a problem with it.

 

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-11-11T19:46:59.027Z · LW · GW

I've offered to be a point person for folks who believe that they were severely impacted by Leverage 1.0, and have related information, but who might be unwilling to share that info, for any of a number of reasons. 

In short,

  • If someone wants to tell me private meta-level information (such as "I don't want to talk about my experience publicly because X"), so that I can pass it along in an anonymized way to someone else (including Geoff, Matt Fallshaw, Oliver Habryka, or others) - I'm up for doing that.
    • In this case, I'm willing to keep info non-public (ie not publish it on the internet), and anonymized, but am reluctant to keep it secret (ie pretend that I don't have any information bearing on the topic).
      • For instance, let's say someone tells me that they are afraid to publish their account due to a fear of being sued.
      • If later, as a part of this whole process, some third party asks "is there anyone who isn't speaking out of a fear of legal repercussions?", I would respond "yes, without going into the details, one of the people that I spoke to said that", unless my saying that would uniquely identify the person I spoke to.
      • If someone asked me point-blank "is it Y-person who is afraid of being sued?", I would say "I can neither confirm nor deny", regardless of whether it was Y-person.
    • This policy is my best guess at the approach that will maximize my ability to help with this whole situation going forward, without gumming up the works of a collective truth-seeking process. If I change my mind about this at a later date, I will, of course, continue to hold to all of the agreements that I made under previous terms.
  • If someone wants to tell me object level information about their experience at Leverage, their experience of this process, to-date etc, and would like me to make that info public in an anonymized way (eg writing a comment that reads "one of the ex-Leveragers that I talked to, who would prefer to remain anonymous, says...") - I'm up for that, as well, if it would help for some reason.
  • I'm probably open to doing other things that seem likely to be helpful for this process, so long as I can satisfy my pre-existing commitments to maintain privacy, anonymity, etc.

Comment by Eli Tyre (elityre) on Zoe Curzi's Experience with Leverage Research · 2021-10-29T11:22:32.502Z · LW · GW

Is that to say that you have audio of the whole conversation, and video of the first 20 minutes?

Comment by Eli Tyre (elityre) on Self-Integrity and the Drowning Child · 2021-10-25T00:07:36.299Z · LW · GW

You're referring to the original Peter Singer essay, not to this one, yes?

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T09:48:00.458Z · LW · GW

Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT"

I think that's literally true, but also the way you wrote this sentence implies that that is unusual or uncommon.

I think that's backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say "I'm not trying to punish them, I just want to talk freely about some harms."

By pretending that you're not attacking the target, you protect yourself somewhat from counter-attack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the motte of "but I was just trying to talk about what's going on. I specifically said not to punish anyone!"

and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice; it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T08:20:37.464Z · LW · GW

Ok. After thinking further and talking about it with others, I've changed my mind about the opinion that I expressed in this comment, for two reasons.

1) I think there is some pressure to scapegoat Leverage, by which I mean specifically, "write off Leverage as reprehensible, treat it as 'an org that we all know is bad', and move on, while feeling good about ourselves for not being bad the way that they were". 

Pointing out some ways that MIRI or CFAR are similar to Leverage disrupts that process. Anyone who both wants to scapegoat Leverage and also likes MIRI has to contend with some amount of cognitive dissonance. (A person might productively resolve this cognitive dissonance by recognizing what I contend are real disanalogies between the two cases, but they do at least have to engage with it.)

If you mostly want to scapegoat, this is annoying, but I think we should be making it harder, not easier, to scapegoat in this way.

2) My current personal opinion is that the worst things that happened at MIRI or CFAR are not in the same league as what was described as happening in (at least some parts of) Leverage in Zoe's post, both in terms of the deliberateness of the bad dynamics and the magnitude of the harm they caused.

I think that talking about MIRI or CFAR is mostly a distraction from understanding what happened at Leverage, and what things anyone here should do next. However, there are some similarities between Leverage on the one hand and CFAR or MIRI on the other, and Jessica had some data about the latter which might be relevant to people's view about Leverage.

Basically, there's an epistemic process happening in these comments, and on general principles it is better for people to share info that they think is relevant, so that the epistemic process has the option of disregarding it or not.

 

 

I do think that Jessica writing this post will predictably have reputational externalities that I don't like and that I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe's and Jessica's posts, are likely to conclude that Leverage and MIRI are similarly bad cults.

I think that both of these views are incorrect simplifications. But I think that the second story is less accurate than the first, and so I think it is a cost if Jessica's post promotes the second view. I have some annoyance about that.

However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.

 

I still wish that this post had been written differently in a number of ways (such as emphasizing more strongly that in Jessica's opinion management in corporate America is worse than MIRI or Leverage), but I acknowledge that writing such a post is hard.

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T06:23:09.295Z · LW · GW

As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment.

I would like a lot more elaboration about this, if you can give it. 

Can you say more specifically what you observed?

Comment by Eli Tyre (elityre) on Zoe Curzi's Experience with Leverage Research · 2021-10-21T06:06:59.877Z · LW · GW

Same for me.

Comment by Eli Tyre (elityre) on Zoe Curzi's Experience with Leverage Research · 2021-10-21T06:05:39.290Z · LW · GW

I guess I don't understand your categories. I would guess that I should be on both sub-lists. [shrug]

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T06:47:58.164Z · LW · GW

Thank you very much for sharing. I wasn't aware of any of these details.

Comment by Eli Tyre (elityre) on Zoe Curzi's Experience with Leverage Research · 2021-10-20T06:41:39.085Z · LW · GW

I think I should also be in the list of notable part-timers?

 

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T05:54:30.920Z · LW · GW

The OP speaks for me.

Anna, I feel frustrated that you wrote this. Unless I have severely misunderstood you, this seems extremely misleading.

For context, before this post was published Anna and I discussed the comparison between MIRI/CFAR and Leverage. 

At that time, you, Anna, posited a high level dynamic involving "narrative pyramid schemes" accelerating, and then going bankrupt, at about the same time. I agreed that this seemed like it might have something to it, but emphasized that, despite some high level similarities, what happened at MIRI/CFAR was meaningfully different from, and much much less harmful than, what Zoe described in her post.

We then went through a specific operationalization of one of the claimed parallels (specifically the frequency and oppressiveness of superior-to-subordinate debugging), and you agreed that the CFAR case was, quantitatively, an order of magnitude better than what Zoe describes. We talked more generally about some of the other parallels, and you generally agreed that the specific harms were much greater in the Leverage case. 

(And just now, I talked with another CFAR staff member who reported that the two of you went point by point, and for each one you agreed that the CFAR/MIRI situation was much less bad than the Leverage case. [edit: I misunderstood. They only went through 5 points, out of many, but out of those 5 Anna agreed that the Leverage case was broadly worse.])

I think that you believe, as I do, that there were some high-level structural similarities between the dynamics at MIRI/CFAR and at Leverage, and also what happened at Leverage was substantially worse* than what happened at MIRI/CFAR.

Do you believe that?

If so, can you please clearly say so? 

I feel like not clearly stating that second part is extremely and damagingly misleading. 

What is at stake is not just the abstract dynamics, but also the concrete question of how alarmed, qualitatively, people around here should be. It seems to me that you, with this comment, are implying that it is appropriate to be about as alarmed by Zoe's report of Leverage as by this description of MIRI/CFAR. Which seems wrong to me.

[Edit: * - This formerly read "an order of magnitude worse". 

I think this is correct for a number of common-sense metrics (ie there were at least 10x as many hours of superior-subordinate debugging at Leverage, where this seemed to be an institutionalized practice making up a lot of a person's day, compared to CFAR, where this did happen sometimes but wasn't a core feature of the org. This is without taking into account the differences in how harmful those hours were; the worst case of this that I'm aware of happening at CFAR was less harmful than Zoe's account.) 

I think that across most of the metrics named, Leverage had a worse or stronger version of the thing in question, with a few exceptions. MIRI's (but not CFAR's, mostly) narrative had more urgency about it than Leverage's, for instance, because of AI timeline considerations, and overall the level of "intensity" or "pressure" around MIRI and Leverage might have been similar? I'm not close enough to either org to say with confidence.

But overall, I think it is weird to talk about "orders of magnitude" without referring to a specific metric, since it has the veneer of rigor without really adding much substance. I'm hoping that this edit adds some of that substance, and I'm walking my claim back to the vaguer "substantially worse", with the caveat that I am generally in favor of, and open to, sharing more specific quantitative estimates on specific operationalizations if asked.]

 

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T00:14:25.382Z · LW · GW

As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he's saying. I certainly did not fully succeed. 

My notes.

It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?

I would really like to understand what he's getting at, by the way, so if it is clearer for you than it is for me, I'd actively appreciate clarification.

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T00:00:47.650Z · LW · GW

All this sounds broadly correct to me, modulo some nitpicks that are on the whole smaller than Scott's objections (for a sense of scale).

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T08:36:49.781Z · LW · GW

I worked for CFAR from 2016 to 2020, and am still somewhat involved.

This description does not reflect my personal experience at all. 

And speaking from my view of the organization more generally (not just my direct personal experience): Several bullet points seem flatly false to me. Many of the bullet points have some grain of truth to them, in the sense that they refer to or touch on real things that happened at the org, but then depart wildly from my understanding of events, or (according to me) mischaracterize / distort things severely.

I could go through and respond in more detail, point by point, if that is really necessary, but I would prefer not to do that, since it seems like a lot of exhausting work.

As a sort of free sample / downpayment: 

  • At least four people who did not listen to Michael's pitch about societal corruption and worked in some capacity with the CFAR/MIRI team had psychotic episodes.

I don't know who this is referring to. To my knowledge 0 people who are or have been staff at CFAR had a psychotic episode either during or after working at CFAR.

  • Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file. This makes it highly distressing that Michael is being singled out for his drug advocacy by people defending CFAR.

First of all, I think the use of "rank-and-file" throughout this comment is misleading to the point of being dishonest. CFAR has always been a small organization of no more than 10 or 11 people, often flexibly doing multiple roles. The explicit organizational structure involved people having different "hierarchical" relationships depending on context. 

In general, different people lead different projects, and the rest of the staff would take "subordinate" roles, in those projects. That is, if Elizabeth is leading a workshop, she would delegate specific responsibilities to me as one of her workshop staff. But in a different context, where I'm leading a project, I might delegate to her, and I might have the final say. (At one point this was an official, structural, policy, with a hierarchy of reporting mapped out on a spreadsheet, but for most of the time I've been there it has been much more organic than that.)

But these hierarchical arrangements are both transient and do not at all dominate the experience of working for CFAR. Mostly we are and have been a group of pretty independent contributors, with different views about x-risk and rationality and what-CFAR-is-about, who collaborate on specific workshops and (in a somewhat more diffuse way) in maintaining the organization. There is not anything like the hierarchy you typically see in larger organizations, which makes the frequent use of the term "rank and file" seem out of place and disingenuous, to me.

Certainly, Anna was always in a leadership role, in the sense that the staff respected her greatly, and were often willing to defer to her, and at most times there was an Executive Director (ED) in addition to Anna. 

That said, I don't think that Anna, or either of the EDs, ever confided to me that they had taken psychedelics, even in private. I certainly didn't feel pressured to do psychedelics, and I don't see how that practice could have spread by imitation, given that it was never discussed, much less modeled. And there was not anything like "institutional encouragement".

The only conversations I remember having about psychedelic drugs are the conversations in which we were told that it was one of the topics that we were not to discuss with workshop participants, and a conversation in which Anna strongly stated that psychedelics were destabilizing and implied that they were...generally bad, or at least that being reckless with them was really bad.

Personally, I have never taken any psychoactive drugs aside from nicotine (and some experimentation with caffeine and modafinil, once). This stance was generally respected by CFAR staff. Occasionally, some people (not Anna or either ED) expressed curiosity about or gently ribbed me about my hard-line stance of not drinking alcohol, but in a way that was friendly and respectful of my boundaries. My impression is that Anna more-or-less approves of my stance on drugs, without endorsing it as the only or obvious stance.

  • Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as "implanting an engine of desperation" within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.

This is false, or at minimum is overly general, in that it does not resemble my experience at all. 

My experience: 

I could and can easily avoid debugging sessions with Anna. Every interaction that I've had with her has been consensual, and she has, to my memory, always respected my boundaries, when I had had enough, or was too tired, or the topic was too sensitive, or whatever. In general, if I say that I don't want to talk about something, people at CFAR respect that. They might offer care, or help, in case I decided I wanted it, but then they would leave me alone. (Most of the debugging, etc., conversations that I had at CFAR, I explicitly sought out.)

This also didn't happen that frequently. While I've had lots of conversations with Anna, I estimate I've had deep "soulful" conversations, or conversations in which she was explicitly teaching me a mental technique...around once every 4 months, on average? 

Also, though it has happened somewhat more rarely, I have participated in debugging style conversations with Anna where I was in the "debugger" role.

(By the way, in CFAR's context, the "debugger" role is explicitly a role of assistance / midwifery, of helping a person get traction and understanding on some problem, rather than an active role of doing something to or intervening on the person being debugged. 

Though I admit that this can still be a role with a lot of power and influence, especially in cases where there is an existing power or status differential. I do think that early in my experience with CFAR, I was too willing to defer to Anna about stuff in general, and might make big changes in my personal direction at her suggestion, despite not really having an inside view of why I should prefer that direction. She and I would both agree, today, that this is bad, though I don't consider myself to have been majorly harmed by it. I also think it is not that unusual. Young people are often quite influenced by role models that they are impressed by, often without clear-to-them reasons.)

I have never heard the phrase "engine of desperation" before today, though it is true that there was a period in which Anna was interested in a kind of "quiet desperation" that she thought was an effective place to think and act from.

I am aware of some cases of Anna debugging with CFAR staff that seem somewhat more fraught than my own situation, but from what I know of those, they are badly characterized by the above bullet point.

 

I could go on, and I will if that's helpful. I think my reaction to these first few bullet points is a broadly representative sample.

Comment by Eli Tyre (elityre) on Zoe Curzi's Experience with Leverage Research · 2021-10-19T00:31:02.465Z · LW · GW

Do you have a suggestion for another forum that you think would be better? 

In particular, do you have pointers to online forums that do incorporate the emotional and physical layers ("in a non-toxic way," he adds, thinking of Twitter)? Or do you think that the best way to do this is just not online at all?

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T08:11:54.178Z · LW · GW

Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal.

My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting. 

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T08:02:57.565Z · LW · GW

This feels especially salient because a number of the specific criticisms, in my opinion, don't hold up to scrutiny, but this is obscured by the comparison to Leverage.

Like for any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good healthy versions of "having a culture of self improvement and debugging", and also versions that are harmful. 

For each point Zoe contends that (at least some parts of Leverage) had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, 1) I agree that there was a similar dynamic at MIRI/CFAR, and also 2) I think that the MIRI/CFAR version was much less harmful than what Zoe describes.

For instance,

Zoe is making the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this is similarly bad. My current informed impression is that CFAR's self improvement culture both had some toxic elements and is/was also an order of magnitude better than what Zoe describes.

Assuming for a moment that my assessment of CFAR is true (of course, it might not be), your comparing debugging at CFAR to debugging at Leverage is confusing to the group cognition, because the two have been implicitly lumped together.

Now, more people's estimation of CFAR's debugging culture will rise or fall with their estimation of Leverage's debugging culture. And recognizing this, consciously or unconsciously, people are now incentivized to bias their estimation of one or of the other (because they want to defend CFAR, or defend Leverage, or attack CFAR, or attack Leverage).

I'm under this weird pressure, because stating "Anna debugging with me while I worked at CFAR might seem bad, but it was actually mostly innocuous" is kind of awkward, because it seems to imply that what happened at Leverage was also not so bad. 

And on the flip side, I'll feel more cagey about talking about the toxic elements of CFAR's debugging culture, because in context, that seems to be implying that it was as bad as Zoe's account of Leverage. 

"Debugging culture" is just one example. For many of these points, I think further investigation might show that the thing that happened at one org was meaningfully different from the thing that happened at the other org, in which case, bucketing them together from the getgo seems counterproductive to me. 

Drawing the parallels between MIRI/CFAR and Leverage, point by point, makes it awkward to consider each org's pathologies on its own terms. It makes it seem like if one was bad, then the other was probably bad too, even though it is at least possible that one org had mostly healthy versions of some cultural elements and the other had mostly unhealthy versions of similar elements, or (even more likely) they each had a different mix of pathologies.

I contend that if the goal is to get clear on the facts, we want to do the opposite thing: we want to, as much as possible, consider the details of the cases independently, attempting to do original seeing, so that we can get a good grasp of what happened in each situation. 

And only after we've clarified what happened might we want to go back and see if there are common dynamics in play.

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T07:41:47.662Z · LW · GW

[Edit: I want to note that this represents only a fraction of my overall feelings and views on this whole thing.]

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.

I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse.

But then the post repeatedly (in every section!) makes reference to Zoe's post, comparing her experience at Leverage to your (and others') experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post!

Some more or less randomly chosen examples (ctrl-f "Leverage" or "Zoe" for lots more):

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

...

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

...

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research). 

...

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.

If the goal is just to clarify what happened and not at all to blame or compare, then why not...just state what happened at MIRI/CFAR without comparing to the Leverage case, at all?

You (Jessica) say, "I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened." But in that case, why not use her post as a starting point for organizing your own thoughts, but then write something about MIRI/CFAR that stands on its own terms?

. . . 

To answer my own question...

My guess is that you adopted this essay structure because you want to argue that the things that happened at Leverage were not a one-off random thing; they were structurally (not just superficially) similar to dynamics at MIRI/CFAR. That is, there is a common cause of similar symptoms between those two cases.

If so, my impression is that this essay is going too fast, by introducing a bunch of new interpretation-laden data, and fitting that data into a grand theory of similarity between Leverage and MIRI all at once. Just clarifying the facts about what happened is a different (hard) goal than describing the general dynamics underlying those events. I think we'll make more progress if we do the first, well, before moving on to the second.

In effect, because the data is presented as part of some larger theory, I have to do extra cognitive work to evaluate the data on its own terms, instead of slipping into the frame of evaluating whether the larger theory is true or false, or whether my affect towards MIRI should be the same as my affect toward Leverage, or something. It made it harder instead of easier for me to step out of the frame of blame and "who was most bad?".


 

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T04:09:52.947Z · LW · GW

If you're not disagreeing with people about important things then you're not thinking.

This is a great sentence. I kind of want it on a t-shirt.

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T03:38:02.202Z · LW · GW

This comment was very helpful. Thank you.

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T03:00:28.095Z · LW · GW

Yeah, I very strongly don't endorse this as a description of CFAR's activities or of CFAR's goals, and I'm pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I'm less surprised). 

Most of my probability mass is on the CFAR instructor taking "become Elon Musk" to be a sort of generic, hyperbolic term for "become very capable."

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T02:33:32.604Z · LW · GW

MIRI wouldn't make sense as a project if most regular jobs were fine, people who were really ok wouldn't have reason to build unfriendly AI.

I just want to note that this is a contentious claim. 

There is a competing story, and one much more commonly held among people who work for or support MIRI, that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects.

One could make the claim that "healthy" people (whatever that means) wouldn't exhibit those behaviors, ie that they would be able to coordinate and avoid rationalizing. But that's a non-standard view. 

I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you're not going into detail on the argument and that you don't expect others to accept the claim.

As it is, it feels a little like this is being slipped in as if it is a commonly accepted premise.  

Comment by Eli Tyre (elityre) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T01:56:15.140Z · LW · GW

As someone who was more involved with CFAR than Duncan was from 2019 on, all this sounds correct to me.

Comment by Eli Tyre (elityre) on Eli's shortform feed · 2021-09-17T00:59:36.896Z · LW · GW

I remember reading a thread on Facebook where Eliezer and Robin Hanson were discussing the implications of AlphaGo (or AlphaZero) for the content of the AI foom debate, and Robin made an analogy to linear regression as one thing that machines can do better than humans, but which doesn't make them superhuman.

Does anyone remember what I'm talking about?

Comment by Eli Tyre (elityre) on Open & Welcome Thread September 2021 · 2021-09-05T10:06:10.348Z · LW · GW

I remember someone (Paul Christiano, I think?) commenting somewhere on LessWrong, saying that Ian Goodfellow got the first GAN working on the same day that he had the idea, with a link to an article.

Does anyone happen to remember that comment, or have a link to that article?

Comment by Eli Tyre (elityre) on Long Covid Is Not Necessarily Your Biggest Problem · 2021-09-05T10:03:11.645Z · LW · GW

Thank you for being thoughtful about how to serve the community's needs!

Comment by Eli Tyre (elityre) on Open & Welcome Thread September 2021 · 2021-09-05T09:59:21.992Z · LW · GW

Hello and welcome!

I felt much warmth reading your intro. I remember how magical LessWrong was for me when I first discovered it. (Now, almost a decade in, I have a different feeling towards it, but I remain deeply proud to participate in this community.)

All of which is to say that I feel vicarious excitement for the experiences you have ahead of you. I look forward to meeting you in person one day. : )

(The only troublesome side effect: school has become much less tolerable as a whole. I'm truly trying to get through it with top grades, but now that I see how much time I waste there, it's much harder to try and be interesting in the actual material...)

I think this would not have helped me very much, so YMMV, but one frame you might want to consider is that of half-assing [school] with everything you've got.

Comment by Eli Tyre (elityre) on Challenges to Christiano’s capability amplification proposal · 2021-08-31T07:46:56.276Z · LW · GW

[Eli's personal notes. Feel free to comment or ignore.]

My summary of Eliezer's overall view:

  • 1. I don't see how you can get cognition to "stack" like that, short of running a Turing machine made up of the agents in your system. But if you do that, then we throw alignment out the window.
  • 2. There's this strong X-and-only-X problem.
    • If our agents are perfect imitations of humans, then we do solve this problem. But having perfect imitations of humans is a very high bar that depends on having a very powerful superintelligence already. And now we're just passing the buck. How is that extremely powerful superintelligence aligned?
    • If our agents are not perfect imitations, it seems like we have no guarantee of X-and-only-X.
      • This might still work, depending on the exact ways in which the imitation deviates from the subject, but most of the plausible ways seem like they don't solve this problem.
        • And regardless, even if it only deviated in ways that we think are safe, we would want some guarantee of that fact.
Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-28T08:38:14.078Z · LW · GW

This is a particularly helpful answer for me somehow. Thanks.

I think I might add one more: probability. For instance, "what are the base rates for people meeting good cofounders (in general, or in specific contexts)?" Knowing the answer to this might tell you how much you should make tradeoffs to optimize for working with possible cofounders. 

Though, probably "risk" and "probability" should be one category.

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:24:56.141Z · LW · GW

Really? The plausibility ordering is "transplant to new body > become robot > revive old body"?

I would have guessed it would be "revive old body > transplant to new body > become robot". 

Am I missing something?

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:21:06.278Z · LW · GW

What seems ideal to me would be doing both: remove the head from the body, and then cryopreserve, and store, them both separately. This would give you the benefit of a faster perfusion of the brain and ease of transport in an emergency, but also keeps the rest of the body around on the off-chance that it contains personality-relevant info.

I might consider this "option" [Is this an option? As far as I know, no one has done this, so it would presumably be a special arrangement with Alcor.] when I am older and richer.

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:17:45.097Z · LW · GW

It seems worth noting that I have opted for neuropreservation instead of full body, at least at this time, in large part due to price difference. The "inclination to cryopreserve my full body" noted above, was not sufficient to sway my choice.

Comment by Eli Tyre (elityre) on #2: Neurocryopreservation vs whole-body preservation · 2021-07-28T08:07:16.208Z · LW · GW

Fortunately, before the Coroner executed a search warrant, her head mysteriously disappeared from the Alcor facility. That gave Alcor the time to get a permanent injunction in the courts against autopsying her head.

Wow. Sounds like that was an exciting (and/or nerve-wracking) week at Alcor!

Comment by Eli Tyre (elityre) on The shoot-the-moon strategy · 2021-07-25T10:10:28.561Z · LW · GW

hahahahah

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-25T09:01:06.555Z · LW · GW

I probably do basic sanity checks moderately often, just to see if something makes sense in context. But that's already intuition-level, almost. 

If it isn't too much trouble, can you give four more real examples of when you've done this? (They don't need to be as detailed as your first one. A sentence describing the thing you were checking is fine.)

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-25T08:59:22.260Z · LW · GW

Last time I actually pulled an excel was when Taleb was against IQ and said its only use is to measure low IQ. I wanted to see if this could explain (very) large country differences. So I made a trivial model where you have parts of the population affected by various health issues that can drop the IQ by 10 points. And the answer was yes, if you actually have multiple causes and they stack up, you can end up with the incredibly low averages we see (in the 60s for some areas). 

I'm glad that I asked the alternative phrasing of my question, because this anecdote is informative!
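
(A minimal sketch of the kind of stacking model described in the quoted anecdote, purely for illustration: the prevalences below are made-up assumptions, and the uniform 10-point penalty is taken from the anecdote rather than from any real data.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up prevalences, for illustration only (not real epidemiological data).
# Each cause independently affects some fraction of the population and drops
# affected individuals' IQ by roughly 10 points, as in the quoted anecdote.
prevalences = [0.8, 0.7, 0.7, 0.7, 0.6]
iq_penalty = 10.0

n = 1_000_000
iq = rng.normal(100, 15, size=n)  # baseline distribution before any penalties

for p in prevalences:
    affected = rng.random(n) < p  # each cause hits an independent random subset
    iq[affected] -= iq_penalty

# Expected drop = 10 * sum(prevalences) = 35 points, so the mean lands near 65.
print(f"mean IQ after stacked penalties: {iq.mean():.1f}")
```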

Comment by Eli Tyre (elityre) on What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? · 2021-07-25T08:58:41.252Z · LW · GW

Can you be more specific? Presumably it was possible to open a spreadsheet when you were typing this answer, but I'm guessing that you didn't?

Comment by Eli Tyre (elityre) on What would it look like if it looked like AGI was very near? · 2021-07-14T01:29:00.878Z · LW · GW

and it's very difficult to have [a general intelligence] below human-scale!

I would be surprised if this was true, because it would mean that the blind search process of evolution was able to create a close to maximally-efficient general intelligence.

Comment by Eli Tyre (elityre) on What is the current bottleneck on genetic engineering of human embryos for improved IQ · 2021-07-14T01:14:50.393Z · LW · GW

Greg Cochran's idea

Do you have a citation for this?

Comment by Eli Tyre (elityre) on Moral Complexities · 2021-07-13T07:56:31.828Z · LW · GW

Perhaps even simpler: it is adaptive to have a sense of fairness because you don't want to be the jerk. 'cuz then everyone will dislike you, oppose you, and not aid you.

The biggest, meanest monkey doesn't stay on top for very long, but a big, largely fair monkey does?

Comment by Eli Tyre (elityre) on Moral Complexities · 2021-07-13T07:51:35.217Z · LW · GW

Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?  Why are the two propositions argued in different ways?

  • I want to consider this question carefully.
    • My first answer is that arguing about morality is a political maneuver that is more likely to work for getting what you want than simply declaring your desires.
      • But that begs the question, why is it more likely to work? Why are other people, or non-sociopaths, swayed by moral arguments?
        • It seems like they, or their genes, must get something out of being swayed by moral arguments.
        • You might think that it is better coordination or something. But I don't think that adds up. If everyone makes moral arguments insincerely, then the moral arguments don't actually add more coordination.
          • But remember that morality is enforced...?
            • Ok. Maybe the deal is that humans are loss averse. And they can project, in any given conflict, being in the weaker party's shoes, and generalize the situation to other situations that they might be in. And so, any given onlooker would prefer norms that don't hurt the loser too badly? And so, they would opt into a timeless contract where they would uphold a standard of "fairness"?
            • But also the contract is enforced.
          • I think this can maybe be said more simply? People have a sense of rage at someone taking advantage of someone else iff they can project that they could be in the loser's position?
            • And this makes sense if the "taking advantage" is likely to generalize. If the jerk is pretty likely to take advantage of you, then it might be adaptive to oppose the jerk in general?
              • For one thing, if you oppose the jerk when he bullies someone else, then that someone else is more likely to oppose him when he is bullying you.
          • Or maybe this can be even more simply reduced to a form of reciprocity? It's adaptive to do favors for non-kin, iff they're likely to do favors for you?
            • There's a bit of bootstrapping problem there, but it doesn't seem insurmountable.
          • I want to keep in mind that all of this is subject to scapegoating dynamics, where some group A coordinates to keep another group B down, because A and B can be clearly differentiated and therefore members of A don't have to fear the bullying of other members of A. 
            • This seems like it has actually happened, a bunch, in history. Whites and Blacks in American history is a particularly awful example that comes to mind.
Comment by Eli Tyre (elityre) on Potential Bottlenecks to Taking Over The World · 2021-07-08T00:39:43.772Z · LW · GW

Pinky, v3.41.08: Well, coordination constraints are a big one. They appear to be fundamentally intractable, as soon as we allow for structural divergence of world-models. Which means I can’t even coordinate robustly with copies of myself unless we either lock in the structure of the world-model (which would severely limit learning), or fully synchronize at regular intervals (which would scale very poorly, the data-passing requirements would be enormous).

This seems like a straightforward philosophy / computer science / political science problem. Is there a reason why Pinky version [whatever] can't just find a good solution to it? Maybe after it has displaced the entire software industry?

It seems like you need a really strong argument that this problem is intractable, and I don't see what it is.

Comment by Eli Tyre (elityre) on Less Realistic Tales of Doom · 2021-07-05T10:43:14.744Z · LW · GW

Yes. Though it could be improved further by elaborating "the work of fiction that is spoiled", instead of just "the work."

Comment by Eli Tyre (elityre) on A Parable On Obsolete Ideologies · 2021-07-02T16:37:42.075Z · LW · GW

I think the jargon here is actually useful compression.

Comment by Eli Tyre (elityre) on A Parable On Obsolete Ideologies · 2021-07-02T16:36:38.526Z · LW · GW

doesn't have an evil ideology tied up with it? 

 

(At least no ideology much worse than a lot of popular political movements).

These are strongly different claims.