[Market] Will AI xrisk seem to be handled seriously by the end of 2026?
post by tailcalled · 2023-05-25T18:51:49.184Z · LW · GW · 2 comments
This is a link post for https://manifold.markets/tailcalled/will-ai-xrisk-seem-to-be-handled-se?r=dGFpbGNhbGxlZA
https://manifold.markets/tailcalled/will-ai-xrisk-seem-to-be-handled-se?r=dGFpbGNhbGxlZA
Lately it seems like a bunch of people have been taking big AI risks seriously, from OpenAI's Governance of superintelligence to DeepMind's Model evaluation for extreme risks.
We're not quite there yet in terms of my standards for safety, and even less so in terms of e.g. Eliezer Yudkowsky's. However, I wonder if this marks a turning point.
Obviously this is a very subjective question, so I am warning you ahead of time that it is going to resolve in opinionated ways. Trying to anchor the discussion, I expect the following to be necessary for a YES resolution:
- Major leading AI companies openly acknowledge that existential risk is a possibility, not just in a marginal sense (e.g. internal discussion of it by employees, rare cases where the leaders begrudgingly admit it) but also in a central sense (e.g. openly having research teams working on it, having long sections of documents for politicians discussing it).
- Something is done to handle unilateral actors, e.g. there is active progress made in creating an international organization which can prevent unilateral actors from creating unfriendly AI, or maybe somehow the only plausible creators all take AI xrisk seriously.
- Yann LeCun changes his mind to take AI xrisk seriously or no longer holds much sway about it at Facebook.
- The lessons of Worlds Where Iterative Design Fails [LW · GW] are taken seriously by the above systems.
Please ask me more questions in the comments to help cement the resolution criteria. If my opinion on the inherent danger of AI xrisk changes during the resolution period, I will try to respond based on the level of risk implied by my criteria, not based on my later evaluation of things.
However, if it turns out that there is a similarly powerful way of handling AI xrisk which is qualitatively different, which gets implemented in practice, I will also resolve this question to YES.
2 comments
comment by faul_sname · 2023-05-26T03:52:10.407Z · LW(p) · GW(p)
Would "we get strong evidence that we're not in one of the worlds where iterative design is guaranteed to fail, and it looks like the group's doing the iterative design are proceeding with sufficient caution" qualify as a YES?
↑ comment by tailcalled · 2023-05-26T07:06:36.179Z · LW(p) · GW(p)
No, per the rule "If my opinion on the inherent danger of AI xrisk changes during the resolution period, I will try to respond based on the level of risk implied by my criteria, not based on my later evaluation of things," but maybe in such a case I would change the title to reflect the relevant criteria.