Comments sorted by top scores.
comment by azsantosk · 2024-05-29T02:52:03.538Z · LW(p) · GW(p)
My three fundamental disagreements with MIRI, from my recollection of a ~1h conversation with Nate Soares [LW · GW] in 2023. Please let me know if you think any positions have been misrepresented.
MIRI thinks (A) that evolution is a good analogy for how alignment will fail by default in strong AIs, (B) that studying weak AGIs will not shed much light on how to align strong AIs, and (C) that strong narrow myopic optimizers will not be very useful for anything like alignment research.
Now my own positions:
(A) Evolution is not a good analogy for AGI.
- See Steven Byrnes' Against evolution as an analogy for how humans will create AGI [LW · GW].
(B) Alignment techniques for weak-but-agentic AGI are important.
Why:
- In multipolar competitive scenarios, self-improvement may happen first for entire civilizations or economies, rather than for individual minds or small clusters of minds.
- Techniques that work for weak agentic AGIs may help with aligning stronger minds. Reflection, ontological crises, and self-modification make alignment more difficult, but without strong local recursive self-improvement it may be possible to develop techniques for better preserving alignment through these episodes, provided the systems can be studied while still under control.
(C) Strong narrow myopic optimizers can be incredibly useful.
- A hypothetical system capable of generating fixed-length text that strongly maximizes a simple reward (e.g. the expected value of the next upvote) could be extremely helpful if the reward is based on very careful, objective evaluation. Careful judgment of adversarial "debate" setups between such systems may also generate great breakthroughs, including for alignment research. A minimal sketch of such a narrow, myopic optimizer follows.
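To make the "strong narrow myopic optimizer" idea concrete, here is a minimal best-of-N sketch: sample many fixed-length candidates and keep only the one with the highest immediate score. Everything here is a hypothetical stand-in, not anything MIRI or the comment author specified: `generate_candidate` stands in for a language model sampling a bounded completion, and `reward_model` stands in for a careful, objective evaluation such as an estimate of the next upvote's expected value.

```python
# Minimal sketch of a narrow, myopic best-of-N text optimizer.
# All components are hypothetical placeholders for illustration only.

import random
from typing import Callable, List


def generate_candidate(prompt: str, length: int) -> str:
    """Placeholder generator: in practice, sample `length` tokens from a language model."""
    words = ["alignment", "debate", "evidence", "argument", "proof", "critique"]
    return prompt + " " + " ".join(random.choice(words) for _ in range(length))


def reward_model(text: str) -> float:
    """Placeholder scorer: in practice, a careful, objective evaluation of the text."""
    return text.count("evidence") + 0.5 * text.count("proof")


def best_of_n(prompt: str,
              n: int,
              length: int,
              generate: Callable[[str, int], str],
              score: Callable[[str], float]) -> str:
    """Myopically pick the single highest-reward fixed-length candidate.

    The optimizer is "narrow" and "myopic" in that it only maximizes the
    immediate score of one bounded output; it keeps no persistent state
    and has no longer-horizon objective.
    """
    candidates: List[str] = [generate(prompt, length) for _ in range(n)]
    return max(candidates, key=score)


if __name__ == "__main__":
    print(best_of_n("Critique of this alignment proposal:", n=64, length=12,
                    generate=generate_candidate, score=reward_model))
```

An adversarial "debate" setup, as described above, could then be approximated by running two such optimizers against each other's outputs and scoring each round with the same careful evaluation.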
comment by azsantosk · 2023-10-18T17:12:15.472Z · LW(p) · GW(p)
Does AI governance need a "Federalist Papers" debate?
During the American Revolution, a federal army and government were needed to fight against the British. Many people were afraid that the powers granted to the government for that purpose would allow it to become tyrannical in the future.
If the Founding Fathers had decided to ignore these fears, the United States would not exist as it is today. Instead, they worked alongside the best and smartest anti-federalists to build a better institution with better mechanisms and limited powers, which allowed them to obtain the support they needed for the Constitution.
Where are the federalist vs. anti-federalist debates of today regarding AI regulation? Is anyone working on creating a new institution with better mechanisms to limit its power, thereby assuring those on the other side that it won't be used as a path to totalitarianism?