Hi, I'm part of the communications team at MIRI. Here's a very high-level summary of what MIRI is currently doing:
- Our research portfolio includes the new Technical Governance Team, as well as some alignment research (though much less than before).
- We have also spun up a comms team. We think of our comms work in terms of "rock content" and "wave content." Currently more effort is going into "rock content" projects which will be announced later.
- We also do some work in DC, though this is limited by our status as a 501(c)(3).
MIRI's strategy update from earlier this year explains the reasoning behind our shift from primarily doing technical alignment research to focusing more on communications and policy. The work of actually making that shift is a large part of why 2024 looked quieter from the outside than 2023 did (and than we hope 2025 will).
Hi, I’m part of the communications team at MIRI.
To address the object-level question: no, that’s not MIRI’s full public output for the year (though our public output for the year was quite small; more on that below). The links on the media page and research page are a curated selection, not a complete list. We know the current website isn’t great for seeing all of our output, and we have plans to fix this. In the meantime, you can check out our newsletters, TGT’s new website, and a forthcoming post with more details about our recent media work.
To address the meta-level point: there are a few reasons why our public output was low this year:
- A lot of MIRI’s energy this year has gone into “revving up,” including hiring new staff, spinning up those new teams, and moving into a larger office space.
- The comms team has primarily been working on “rock content” projects that will be announced later.
- As you mentioned, some of our work is not in the form of public output, such as engaging with policymakers.
Thanks for pointing out this mistake!
I wrote that “it appears that not all of the leading AI labs are honoring the voluntary agreements they made at [AI Safety Summit],” citing a Politico article. However, after seeing more discussion about it (e.g. here), I am now highly uncertain about whether the labs made specific commitments, what those commitments were, and whether commitments were broken. These seem like important questions, so I hope that we can get more clarity.
> One of my most-confident guesses about anthropics is that being multiply-instantiated in other ways is analogous. For instance, if there are two identical physical copies of you (in physical rooms that are identical enough that you're going to make the same observations for the length of the hypothetical, etc.), then my guess is that there isn't a real question about which one is you. They are both you. You are the pattern, not the meat.
Thinking about identical brains as the same person is an interesting idea, and I think it's useful for reasoning about some decision puzzles.
For anyone thinking about this idea: it has some important limitations. Don't try to use it in domains where counting the number of individuals/observers matters. If you roll a die 100 times and it keeps coming up "6", you should update towards it being a loaded die, even though there are infinitely many copies of every brain state experiencing every possible sequence of rolls. If you're in a trolley problem where the five people on the track have identical brains, you should still pull the lever, or else utilitarian ethics don't work (and if you're going to bite the bullet that utilitarian ethics don't work because of this, you also have to bite the bullet that reasoning about the world from your own observations doesn't work, when it obviously does).
Here's a Bostrom paper talking about this: https://nickbostrom.com/papers/experience.pdf
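For concreteness, here's the Bayesian update the die example relies on, as a minimal sketch; the 1% prior and the loaded die's 50% chance of rolling a six are made-up numbers for illustration.

```python
# Illustrative Bayesian update for the loaded-die example.
# Assumed numbers (not from the original comment): prior P(loaded) = 0.01,
# and a loaded die that rolls a six with probability 0.5 instead of 1/6.

prior_loaded = 0.01
p_six_fair = 1 / 6
p_six_loaded = 0.5
n_sixes = 100  # observed: 100 sixes in a row

# Likelihood of the observations under each hypothesis
like_fair = p_six_fair ** n_sixes
like_loaded = p_six_loaded ** n_sixes

# Posterior via Bayes' rule
posterior_loaded = (prior_loaded * like_loaded) / (
    prior_loaded * like_loaded + (1 - prior_loaded) * like_fair
)
print(posterior_loaded)  # ~1.0: overwhelming evidence the die is loaded
```

The point is that ordinary Bayesian reasoning over your own observations keeps working even if every brain state is multiply instantiated, so the "identical brains" framing shouldn't be used to discard it.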
> Her parents aren't letting her go to college out of state, or so much as move out until she's married. She can't do anything to stop them; any fighting back will result in even worse conditions for her.
Overall I strongly agree with your post, but I'm confused about this example.
I don't know all of the context of your friend's situation, but you say "out of state" which makes me think that she lives in the US, in which case I don't understand how her parents could prevent her from leaving home once she is an adult.
Are they...
- Using emotional manipulation, e.g. making it clear that they'll be really mad or disappointed if she doesn't comply? In that case, that sounds like a toxic situation that your friend should leave.
- Threatening to withhold funding and support that they would otherwise provide? This one is tough, but it's possible to go to college without being financially supported by your parents.
- Physically preventing her from leaving? Less common, but I'm sure it happens. They can't legally do this, so if she is a legal adult, she could get the police involved to escort her off the property.
This is an important discussion to have and I'm looking forward to seeing what you have to say in the rest of the series.
One concept that I wish more people knew about, and might be particularly relevant to this community, is disenfranchised grief, which is grief that is not understood or accepted by society or the people around you. If a relative dies, it is easy to receive support and understanding, even from complete strangers. If you are grieving about something that's e.g. difficult to explain, taboo, a secret, or unrelatable to most people, then you might end up processing your grief alone, which can suck.
I believe that preventing X-Risks should be the greatest priority for humanity right now. I don't necessarily think that it should be everyone's individual greatest priority though, because someone's comparative advantage might be in a different cause area.
"Holy Shit, X-Risk" can be extremely compelling, but it can also have people feeling overwhelmed and powerless, if they're not in a position to do much about it. One advantage of the traditional EA pitch is that it empowers the person with clear actions they can take that have a clear and measurable impact.
Thanks. Thinking about it in terms of convincing a sub-agent does help.
Breathing happens automatically, but you can manually control it as soon as you notice it. I think that sometimes I've expected changing my internal state to be more like breathing than it realistically can be.
The psychologist Lisa Feldman Barrett makes a compelling case that emotions are actually stories that our minds construct to explain the bodily sensations that we experience in different situations. She says that there are no "emotion circuits" in the brain and that you can train yourself to associate the same bodily sensations and situations with more positive emotions. I find this idea liberating and I want it to be true, but I worry that if it's not true, or if I'm applying the idea incorrectly, I will be doing something like ignoring my emotions in a bad way. I'm not sure how to resolve the tension between "don't repress your emotions" and "don't let a constructed narrative about your negative emotions run out of control and make you suffer."