Posts

Job Listing: Managing Editor / Writer 2024-02-21T23:41:26.818Z
Where is the Town Square? 2024-02-13T03:53:18.205Z
Job listing: Communications Generalist / Project Manager 2023-11-06T20:21:03.721Z
Announcing MIRI’s new CEO and leadership team 2023-10-10T19:22:11.821Z
Another Way to Be Okay 2023-02-19T20:49:31.895Z

Comments

Comment by Gretta Duleba (gretta-duleba) on Job Listing: Managing Editor / Writer · 2024-02-22T18:25:31.413Z · LW · GW

Writers at MIRI will primarily focus on explaining why it's a terrible idea to build something smarter than humans that does not want what we want. They will also answer the follow-up questions we get over and over about that.

Comment by Gretta Duleba (gretta-duleba) on Job listing: Communications Generalist / Project Manager · 2023-11-21T19:38:19.182Z · LW · GW

We want a great deal of overlap with Pacific time hours, yes. A nine-hour time zone difference would probably be pretty rough unless you're able to shift your own schedule by quite a bit.

Comment by Gretta Duleba (gretta-duleba) on Job listing: Communications Generalist / Project Manager · 2023-11-11T16:17:41.759Z · LW · GW

Of course. But if it's you, I can't guess which application was yours from your LW username. Feel free to DM me details.

Comment by Gretta Duleba (gretta-duleba) on Job listing: Communications Generalist / Project Manager · 2023-11-06T21:21:26.915Z · LW · GW

No explicit deadline; I currently expect we'll keep the position open until it's filled. That said, I would really like to make a hire and will be fairly aggressively pursuing good applications.

I don't think there is a material difference between applying today or later this week, but I suspect/hope there could be a difference between applying this week and next week.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-17T14:17:37.902Z · LW · GW

"Wearing your [feelings] on your sleeve" is an English idiom meaning openly showing your emotions.

It is quite distinct from the idea of "belief as attire" from Eliezer's Sequences post, in which he was suggesting that some people "wear" their (improper) beliefs to signal what team they are on.

Nate and Eliezer openly show their despair about humanity's odds in the face of AI x-risk, not as a way of signaling what team they're on, but because despair reflects their true beliefs.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-13T22:30:27.629Z · LW · GW

2. Why do you see communications as being as decoupled from research as you currently do (whether you mean that it inherently is, or that it should be)? 

The things we need to communicate about right now are nowhere near the research frontier.

One common question we get from reporters, for example, is "why can't we just unplug a dangerous AI?" The answer to this is not particularly deep and does not require a researcher, or even a research background, to engage with.

We've developed a list of the couple dozen questions we're most commonly asked by the press and the general public, and most of them are roughly on par with that one.

There is a separate issue of doing better at communicating about our research; MIRI has historically not done very well there. Part of it is that we were/are keeping our work secret on purpose, and part of it is that communicating is hard. To whatever extent the problem is just that communicating is hard, I would like to do better at technical comms, but it is not my current highest priority.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-13T15:36:42.844Z · LW · GW

Re: the wording about airstrikes in TIME: yeah, we did not anticipate how that was going to be received, and had we realized, we likely would have wordsmithed it a bit more to make the meaning clearer. I'm comfortable calling that a mistake. (I was not yet employed at MIRI at the time, but I was involved in editing the draft of the op-ed, so it's at least as much on me as on anybody else who was involved.)

Re: policy division: we are limited by our 501(c)(3) status as to how much of our budget we can spend on policy work, and here 'budget' includes the time of salaried employees. Malo and Eliezer both spend some fraction of their time on policy, but I view it as unlikely that we'll spin up a whole 'division' for it. Instead, yes, we partner with, and provide technical advice to, CAIP and other allied organizations. I don't view failure-to-start-a-policy-division as a mistake; in fact, I think we're using our resources fairly well here.

Re: critiquing existing policy proposals: there is undoubtedly more we could do here, though I lean more toward 'let's say what we think would be almost good enough' than toward simply critiquing what's wrong with other proposals.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-13T01:39:58.119Z · LW · GW

Ditto.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T19:16:45.650Z · LW · GW

I think that's pretty close, though when I hear the word "activist" I tend to think of people marching in protests and waving signs, and that is not the only way to contribute to the effort to slow AI development. I think more broadly about communications and policy efforts, of which activism is a subset.

It's also probably a mistake to put capabilities researchers and alignment researchers in two entirely separate buckets. Their motivations may distinguish them, but my understanding is that the actual work they do unfortunately overlaps quite a bit.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T19:13:14.170Z · LW · GW

  • That's pretty surprising to me; for a while I assumed that the scenario where 10% of the population knew about superintelligence as the final engineering problem was a nightmare scenario, e.g. because it would cause acceleration.

"Don't talk too much about how powerful AI could get because it will just make other people get excited and go faster" was a prevailing view at MIRI for a long time, I'm told. (That attitude pre-dates me.) At this point many folks at MIRI believe that the calculus has changed, that AI development has captured so much energy and attention that it is too late for keeping silent to be helpful, and now it's better to speak openly about the risks.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T19:09:19.710Z · LW · GW
  • What do you see as the most important messages to spread to (a) the public and (b) policymakers?

That's a great question that I'd prefer to address more comprehensively in a separate post. I should admit up front that the post may not be imminent: we are currently hard at work on getting the messaging right, and it's not a quick process.

  • What mistakes do you think MIRI has made in the last 6 months?

Huh, I do not have a list prepared, and I am not entirely sure where to draw the line around what's interesting to discuss and what's not; furthermore, it often takes some time to develop strong intuitions about what turned out to be a mistake. Do you have any candidates for the list in mind?

  • Does MIRI also plan to get involved in policy discussions (e.g. communicating directly with policymakers, and/or advocating for specific policies)?

We are limited in our ability to directly influence policy by our 501(c)(3) status; that said, we do have some latitude there, and we are exercising it within the limits of the law. See, for example, this tweet by Eliezer.

  • Does MIRI need any help? (Or perhaps more precisely "Does MIRI need any help from the right kind of person with the right kind of skills, and if so, what would that person or those skills look like?")

Yes, I expect to be hiring in the comms department relatively soon but have not actually posted any job listings yet. I will post to LessWrong about it when I do.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-12T00:01:00.836Z · LW · GW

I do not (yet) know that Nye resource, so I don't know whether I endorse it. I do endorse the more general idea that many folks who understand the basics of AI x-risk could start talking more to their not-yet-clued-in friends and family about it.

I think in the past, many of us didn't bring this up with people outside the bubble for a variety of reasons: we expected to be dismissed or misunderstood, it just seemed fruitless, or we didn't want to freak them out.

I think it's time to freak them out.

And what we've learned from the last seven months of media appearances and polling is that the general public is actually far more receptive to x-risk arguments than we (at MIRI) expected; we've been accustomed to the arguments bouncing off folks in tech, and we over-indexed on that. Now that regular people can play with GPT-4 and see what it does, discussion of AGI no longer feels like far-flung science fiction. They're ready to hear it, and will only get more so as capabilities demonstrably advance.

We hope that if there is an upsurge of public demand, we might get regulation or legislation limiting the development and training of frontier AI systems and the sale and distribution of the high-end GPUs on which such systems are trained.

Comment by Gretta Duleba (gretta-duleba) on Announcing MIRI’s new CEO and leadership team · 2023-10-11T23:52:57.614Z · LW · GW

Thanks, much appreciated! Your work is on my (long) list to check out. Is there a specific video you're especially proud of that would be a great starting point?

Feel free to send me a Discord server invitation at gretta@intelligence.org.

Comment by Gretta Duleba (gretta-duleba) on Exposure to Lizardman is Lethal · 2023-04-02T20:47:01.849Z · LW · GW

I think your thesis is not super crisp, because this was an off-the-cuff post! And your examples, for the same reason, are not super clear either. But there's definitely still a nugget of an idea in here.

It's something like: with the decentralization of both taking a position in the first place and commenting on other people's positions, the lizardmen have more access to the people taking positions than they did in a world without social media. And lizardmen can and do inflict serious damage on individuals in a seemingly random fashion.

Yup, seems legit. Our species does not have sufficient community-norm tech to be decentralized with full-mesh contact in groups larger than Dunbar's number.

Also, what a sentence I just wrote. What the fuck.

Comment by Gretta Duleba (gretta-duleba) on Another Way to Be Okay · 2023-02-20T17:15:15.939Z · LW · GW

Do what you need to do to take care of yourself! It sounds like you're choosing not to open up to your wife about your distress, for fear of causing her distress. I follow your logic there, but I also hope you have someone you can talk to about it whom you don't fear harming, because they already know and are perhaps further along on the grief/acceptance path than you are.

Good luck. I wish you well.

Comment by Gretta Duleba (gretta-duleba) on Another Way to Be Okay · 2023-02-20T02:39:45.568Z · LW · GW

Yes, that's correct, I was referring to the fable. I should probably have included a broader hint about that.