MIRI's June 2024 Newsletter

post by Harlan · 2024-06-14T23:02:23.721Z · LW · GW · 18 comments

This is a link post for https://intelligence.org/2024/06/14/june-2024-newsletter/

Contents

  MIRI updates
  News and links

MIRI updates

You can subscribe to the MIRI Newsletter here.

18 comments

Comments sorted by top scores.

comment by quetzal_rainbow · 2024-06-15T15:56:46.682Z · LW(p) · GW(p)

It would be really interesting to read a postmortem on the Agent Foundations work at MIRI.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2024-06-16T03:48:10.449Z · LW(p) · GW(p)

Also a strategy postmortem on the decision to pivot to technical research in 2013: https://intelligence.org/2013/04/13/miris-strategy-for-2013/

I do wonder about the counterfactual where MIRI never sold the Singularity Summit, and it was blowing up as an annual event, same way Less Wrong blew up as a place to discuss AI. Seems like owning the Summit could create a lot of leverage for advocacy.

One thing I find fascinating is the number of times MIRI has reinvented themselves as an organization over the decades. People often forget that they were originally founded to bring about the Singularity with no concern for friendliness. (I suspect their advocacy would be more credible if they emphasized that.)

comment by plex (ete) · 2024-06-16T10:01:45.077Z · LW(p) · GW(p)

I initially thought MIRI dropping the AF team was a really bad move, and wrote (but didn't publish) an open letter [? · GW] aiming to discourage this (tl;dr thesis: This research might be critical, we want this kind of research to be ready to take advantage of a possible AI assisted research window).

After talking with the team more, I concluded that actually having an institutional home for this kind of work which is focused on AF would be healthier, as they'd be able to fundraise independently, self-manage, set their own agendas entirely freely, have budget sovereignty, etc, rather than being crammed into an org which was not hopeful about their work.

I've been talking in the background and trying to set them up with fiscal sponsorship, and advising on forming an org, for a few weeks now. It looks like this will probably work for most of the individuals, but the team has not cohered around a leadership structure or agenda yet. I'm hopeful that this will come together, as I think that this kind of theoretical research is one of the most likely classes of progress we need to navigate the transition to superintelligence. Most likely an umbrella org which hosts individual researchers is the short-term solution, hopefully coalescing into a more organized team at some point.

Replies from: Joe_Collman
comment by Joe Collman (Joe_Collman) · 2024-06-18T21:58:36.489Z · LW(p) · GW(p)

Broadly I agree.

I'm not sure about:

but the team has not cohered around a leadership structure or agenda yet. I'm hopeful that this will come together

I don't expect the most effective strategy at present to be [(try hard to) cohere around an agenda]. An umbrella org hosting individual researchers seems the right starting point. Beyond that, I'd expect [structures and support to facilitate collaboration and self-organization] to be ideal.
If things naturally coalesce that's probably a good sign - but I'd prefer that to be a downstream consequence of exploration, not something to aim for in itself.

To be clear, this is all on the research side - on the operations side organization is clearly good.

Replies from: ete
comment by plex (ete) · 2024-06-19T22:11:17.303Z · LW(p) · GW(p)

Yeah, I mostly agree with the claim that individuals pursuing their own agendas is likely better than trying to push for people to work more closely. Finding directions which people feel like converging on could be great, but not at the cost of being able to pursue what seems most promising in a self-directed way.

I think I meant I was hopeful about the whole thing coming together, rather than specifically the coherent agenda part.

comment by ozziegooen · 2024-06-15T16:41:21.587Z · LW(p) · GW(p)

Really wishing the new Agent Foundations team the best. (MIRI too, but its position seems more secure.)

I think that, naively, I feel pretty good about this potential split. If MIRI is doing much more advocacy work, that work just seems very different from Agent Foundations research.

This could allow MIRI to be more controversial and risk-taking without tying things to the Agent Foundations research, and that research could hypothetically get funding more easily from groups that otherwise disagree with MIRI's political views.

I hope that team finds good operations support or a different nonprofit sponsor of some kind. 

comment by niplav · 2024-06-17T16:27:10.150Z · LW(p) · GW(p)

I think MIRI should update its team page if there are drastic changes to its team.

comment by Zack_M_Davis · 2024-06-16T05:03:21.578Z · LW(p) · GW(p)

Senior MIRI leadership explored various alternatives, including reorienting the Agent Foundations team’s focus and transitioning them to an independent group under MIRI fiscal sponsorship with restricted funding, similar to AI Impacts. Ultimately, however, we decided that parting ways made the most sense.

I'm surprised! If MIRI is mostly a Pause advocacy org now, I can see why agent foundations research doesn't fit the new focus and should be restructured. But the benefit of a Pause is that you use the extra time to do something in particular [LW(p) · GW(p)]. Why wouldn't you want to fiscally sponsor research on problems that you think need to be solved for the future of Earth-originating intelligent life to go well? (Even if the happy-path plan is Pause and superbabies, presumably you want to hand the superbabies as much relevant prior work as possible.) Do we know how Garrabrant [LW · GW], Demski [LW · GW], et al. are going to eat??

Relatedly, is it time for another name change? Going from "Singularity Institute for Artificial Intelligence" to "Machine Intelligence Research Institute" must have seemed safe in 2013. (You weren't unambiguously for artificial intelligence anymore, but you were definitely researching it.) But if the new–new plan is to call for an indefinite global ban on research into machine intelligence, then the new name doesn't seem appropriate, either?

Replies from: RobbBB, valley9
comment by Rob Bensinger (RobbBB) · 2024-06-16T07:33:16.020Z · LW(p) · GW(p)

But the benefit of a Pause is that you use the extra time to do something in particular [LW(p) · GW(p)]. Why wouldn't you want to fiscally sponsor research on problems that you think need to be solved for the future of Earth-originating intelligent life to go well? 

MIRI still sponsors some alignment research, and I expect we'll sponsor more alignment research directions in the future. I'd say MIRI leadership didn't have enough aggregate hope in Agent Foundations in particular to want to keep supporting it ourselves (though I consider its existence net-positive).

My model of MIRI is that our main focus these days is "find ways to make it likelier that a halt occurs" and "improve the world's general understanding of the situation in case this helps someone come up with a better idea [LW · GW]", but that we're also pretty open to taking on projects in all four of these quadrants, if we find something that's promising and that seems like a good fit at MIRI (or something promising that seems unlikely to occur if it's not housed at MIRI):

|                        | AI alignment work | Non-alignment work |
|------------------------|-------------------|--------------------|
| High-EV absent a pause |                   |                    |
| High-EV given a pause  |                   |                    |
Replies from: valley9, Raemon
comment by Ebenezer Dukakis (valley9) · 2024-06-16T12:56:15.837Z · LW(p) · GW(p)

In terms of "improve the world's general understanding of the situation", I encourage MIRI to engage more with informed skeptics [EA(p) · GW(p)]. Our best hope is if there is a flaw in MIRI's argument for doom somewhere. I would guess that e.g. Matthew Barnett has spent something like 100x as much effort engaging with MIRI as MIRI has spent engaging with him, at least publicly. He seems unusually persistent -- I suspect many people are [LW(p) · GW(p)] giving up [LW(p) · GW(p)], or gave up long ago. I certainly feel quite cynical about whether I should even bother writing a comment like this one [LW(p) · GW(p)].

Replies from: akash-wasil
comment by Akash (akash-wasil) · 2024-06-16T13:24:38.108Z · LW(p) · GW(p)

Offering a quick two cents: I think MIRI‘s priority should be to engage with “curious and important newcomers” (e.g., policymakers and national security people who do not yet have strong cached views on AI/AIS). If there’s extra capacity and interest, I think engaging with informed skeptics is also useful (EG big fan of the MIRI dialogues), but on the margin I don’t suspect it will be as useful as the discussions with “curious and important newcomers.”

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2024-06-16T13:31:27.307Z · LW(p) · GW(p)

So what's the path by which our "general understanding of the situation" is supposed to improve? There's little point in delaying timelines by a year, if no useful alignment research is done in that year. The overall goal should be to maximize the product of timeline delay and rate of alignment insights.

Also, I think you may be underestimating the ability of newcomers to notice that MIRI tends to ignore its strongest critics. See also previously linked comment [EA(p) · GW(p)].

Replies from: akash-wasil
comment by Akash (akash-wasil) · 2024-06-16T13:49:02.633Z · LW(p) · GW(p)

I think if MIRI engages with “curious newcomers” those newcomers will have their own questions/confusions/objections and engaging with those will improve general understanding.

Based on my experience so far, I don’t expect their questions/confusions/objections to overlap a lot with the questions/confusions/objections that tech-oriented active LW users have.

I also think it’s not accurate to say that MIRI tends to ignore its strongest critics; there’s perhaps more public writing/dialogues between MIRI and its critics than for pretty much any other organization in the space.

My claim is not that MIRI should ignore its critics, but rather that it should focus on replying to criticisms or confusions from "curious and important newcomers". My fear is that MIRI might engage too much with criticisms from LW users and other ingroup members and not focus enough on engaging with policy folks, whose cruxes and opinions often differ substantially from those of, e.g., the median LW commenter.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2024-06-16T14:38:18.973Z · LW(p) · GW(p)

I think if MIRI engages with “curious newcomers” those newcomers will have their own questions/confusions/objections and engaging with those will improve general understanding.

You think policymakers will ask the sort of questions that lead to a solution for alignment?

In my mind, the most plausible way "improve general understanding" can advance the research frontier for alignment is if you're improving the general understanding of people fairly near that frontier.

Based on my experience so far, I don’t expect their questions/confusions/objections to overlap a lot with the questions/confusions/objections that tech-oriented active LW users have.

I expect MIRI is not the only tech-oriented group policymakers are talking to. So in the long run, it's valuable for MIRI to either (a) convince other tech-oriented groups of its views, or (b) provide arguments that will stand up against those from other tech-oriented groups.

there’s perhaps more public writing/dialogues between MIRI and its critics than for pretty much any other organization in the space.

I believe they are also the only organization in the space that says its main focus is on communications. I'm puzzled that multiple full-time paid staff are getting out-argued by folks like Alex Turner who are posting for free in their spare time.

If MIRI wants us to make use of any added timeline in a way that's useful, or make arguments that outsiders will consider robust, I think they should consider a technical communications strategy in addition to a general-public communications strategy. The wave-rock model could help for technical communications as well. Right now their wave game for technical communications seems somewhat nonexistent. E.g. compare Eliezer's posting frequency on LW [LW · GW] vs X.

You depict a tradeoff between focusing on "ingroup members" vs "policy folks", but I suspect there are other factors which are causing their overall output to be low, given their budget and staffing levels. E.g. perhaps it's an excessive concern with org reputation that leads them to be overly guarded in their public statements. In which case they could hire an intern to argue online for 40 hours a week, and if the intern says something dumb, MIRI can say "they were just an intern -- and now we fired them." (Just spitballing here.)

It's puzzling to me that MIRI originally created LW for the purpose of improving humanity's thinking about AI, and now Rob says that's their "main focus", yet they don't seem to use LW that much? Nate hasn't said anything about alignment here in the past ~6 months [LW · GW]. I don't exactly see them arguing with the ingroup too much.

Replies from: akash-wasil
comment by Akash (akash-wasil) · 2024-06-16T17:36:59.681Z · LW(p) · GW(p)

Don’t have time to respond in detail but a few quick clarifications/responses:

— I expect policymakers to have the most relevant/important questions about policy and to be the target audience most relevant for enacting policies. Not solving technical alignment. (Though I do suspect that by MIRI’s lights, getting policymakers to understand alignment issues would be more likely to result in alignment progress than having more conversations with people in the technical alignment space.)

— There are lots of groups focused on comms/governance. MIRI is unique only insofar as it started off as a “technical research org” and has recently pivoted more toward comms/governance.

— I do agree that MIRI has had relatively low output for a group of its size/resources/intellectual caliber. I would love to see more output from MIRI in general. Insofar as it is constrained, I think they should be prioritizing “curious policy newcomers” over people like Matthew and Alex.

— Minor, but I don’t think MIRI is getting “outargued” by those individuals, and I think that frame is a bit too zero-sum.

— Controlling for overall level of output, I suspect I’m more excited than you about MIRI spending less time on LW and more time on comms/policy work with policy communities (EG Malo contributing to the Schumer insight forums, MIRI responding to government RFCs).

— My guess is we both agree that MIRI could be doing more on both fronts and just generally having higher output. My impression is they are working on this and have been focusing on hiring; I think if their output stayed relatively the same 3-6 months from now I will be fairly disappointed.

Replies from: valley9
comment by Ebenezer Dukakis (valley9) · 2024-06-17T00:52:30.642Z · LW(p) · GW(p)

Don’t have time to respond in detail but a few quick clarifications/responses:

Sure, don't feel obligated to respond, and I invite the people disagree-voting my comments to hop in as well.

— There are lots of groups focused on comms/governance. MIRI is unique only insofar as it started off as a “technical research org” and has recently pivoted more toward comms/governance.

That's fair, when you said "pretty much any other organization in the space" I was thinking of technical orgs.

MIRI's uniqueness does seem to suggest it has a comparative advantage for technical comms. Are there any organizations focused on that?

by MIRI’s lights, getting policymakers to understand alignment issues would be more likely to result in alignment progress than having more conversations with people in the technical alignment space

By 'alignment progress' do you mean an increased rate of insights per year? Due to increased alignment funding?

Anyway, I don't think you're going to get "shut it all down" without either a warning shot or a congressional hearing.

If you just extrapolate trends, it wouldn't particularly surprise me to see Alex Turner at a congressional hearing arguing against "shut it all down". Big AI has an incentive to find the best witnesses it can, and Alex Turner seems to be getting steadily more annoyed. (As am I, fwiw.)

Again, extrapolating trends, I expect MIRI's critics like Nora Belrose will increasingly shift from the "inside game" of trying to engage w/ MIRI directly to a more "outside game" strategy of explaining to outsiders why they don't think MIRI is credible. After the US "shuts it down", countries like the UAE (accused of sponsoring genocide in Sudan) will likely try to quietly scoop up US AI talent. If MIRI is considered discredited in the technical community, I expect many AI researchers to accept that offer instead of retooling their career. Remember, a key mistake the board made in the OpenAI drama was underestimating the amount of leverage that individual AI researchers have, and not trying to gain mindshare with them.

Pause maximalism (by which I mean focusing 100% on getting a pause and not trying to speed alignment progress) only makes sense to me if we're getting a ~complete ~indefinite pause. I'm not seeing a clear story for how that actually happens, absent a much broader doomer consensus. And if you're not able to persuade your friends, you shouldn't expect to persuade your enemies.

Right now I think MIRI only gets their stated objective in a world where we get a warning shot which creates a broader doom consensus. In that world it's not clear advocacy makes a difference on the margin.

comment by Raemon · 2024-06-16T20:10:18.541Z · LW(p) · GW(p)

I realize if you had a good answer here the org would be doing different stuff, but, do you (or other MIRI folk) have any rough sense of the sort of alignment work that'd plausibly be in the left two quadrants there?

(also, when you say "high EV", are you setting the "high" bar at a level that means "good enough that anyone should be prioritizing?" or "MIRI is setting a particularly high bar for alignment research right now because it doesn't seem like the most important thing to be focusing on?")

comment by Ebenezer Dukakis (valley9) · 2024-06-16T12:32:59.938Z · LW(p) · GW(p)

superbabies

I'm concerned there may be an alignment problem for superbabies.

Humans often have contempt for people and animals with less intelligence than them. "You're dumb" is practically an all-purpose putdown. We seem to assign moral value to various species on the basis of intelligence rather than their capacity for joy/suffering. We put chimpanzees in zoos and chickens in factory farms.

Additionally, jealousy/"xenophobia" towards superbabies from vanilla humans could lead them to become misanthropes. Everyone knows genetic enhancement is a radioactive topic. At what age will a child learn they were modified? It could easily be just as big of a shock as learning that you were adopted or donor-conceived. Then stack more baggage on top: Will they be bullied for it? Will they experience discrimination?

I feel like we're charging headlong into these sociopolitical implications, hollering "more intelligence is good!", the same way we charged headlong into the sociopolitical implications of the internet/social media in the 1990s and 2000s while hollering "more democracy is good!" There's a similar lack of effort to forecast the actual implications of the technology.

I hope researchers are seeking genes for altruism and psychological resilience in addition to genes for intelligence.