Memo on some neglected topics
post by Lukas Finnveden (Lanrian) · 2023-11-11T02:01:55.834Z · LW · GW · 2 comments
This is a link post for https://open.substack.com/pub/lukasfinnveden/p/memo-on-some-neglected-topics
comment by Roman Leventov · 2023-11-13T20:57:02.327Z · LW(p) · GW(p)
Roko would probably call "the most important century" work "building a stable equilibrium [LW · GW] to land an AGI/ASI on".
I broadly agree with you and Roko that this work is important, and that it would often make more sense for people to do this kind of work than to work on "narrowly-defined" technical AI safety.
One aspect of why this may be the case, which you didn't mention, is money: technical AI safety is probably bottlenecked on funding [LW · GW], while much of the "most important century/stable equilibrium" work is more amenable to conventional VC funding [LW(p) · GW(p)], and the funders don't even need to be EA/AI x-risk/"most important century"-pilled.
In a comment on Roko's post [LW(p) · GW(p)], I offered a classification of these "stable equilibrium" systems and the work that should be done. I reproduce it here, with extra directions that have occurred to me since:
- Digital trust infrastructure: decentralised identity, secure communication (see Layers 1 and 2 in the Trust Over IP Stack), proof-of-humanness, and proof-of-AI (e.g., a proof that a given artifact was created by a given agent, such as one provided by OpenAI; watermarking failed [LW · GW], so new robust solutions, perhaps based on zero-knowledge proofs, are needed; see the sketch at the end of this comment).
- Infrastructure for collective sensemaking and coordination: infrastructure for communicating beliefs and counterfactuals, making commitments, imposing constraints on agent behaviour, and monitoring compliance. We at the Gaia Consortium are working on this.
- Infrastructure and systems for collective epistemics: next-generation social networks (e.g., https://subconscious.network/), media, content authenticity, Jim Rutt's "info agents" (he advises "three different projects that are working on this").
- Related to the previous item (in particular, to content authenticity): systems for personal data sovereignty (I don't know of any good examples besides Inrupt), and, more generally, dataset verification/authenticity and dataset governance.
- Mostly solving the science/ethics of consciousness and suffering, along with much more effort in biology to understand whom (or whose existence, joy, or non-suffering) civilisation should value, to better inform the constraints and policies for economic agents (monitored and verified through the infrastructure from the second item above).
- Systems for political decision-making and collective ethical deliberation: see Collective Intelligence Project, Policy Synth, simulated deliberative democracy. These types of systems should also be used for governing all of the above layers.
- Accelerating enlightenment using AI teachers (Khanmigo, Quantum Leap) and other tools for individual epistemics (Ought), so that the people who participate in governance (the previous item) can do a better job.
The list above covers all the directions mentioned in the post, plus a few more important ones.
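To make the proof-of-AI idea from the first item concrete, here is a minimal sketch of the simplest (non-zero-knowledge) baseline: the provider signs a hash of each artifact it produces, and anyone holding the provider's public key can verify provenance. The `attest`/`verify` helpers and the Ed25519 key choice are illustrative assumptions, not an existing standard; a real zero-knowledge scheme would additionally prove "signed by some accredited provider" without revealing which one.

```python
# Minimal sketch: signature-based provenance attestation for AI-generated
# artifacts (an assumed baseline, NOT zero-knowledge). A provider signs the
# SHA-256 digest of each artifact; a verifier checks the signature against
# the provider's public key. Requires the third-party `cryptography` package.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def attest(private_key: Ed25519PrivateKey, artifact: bytes) -> bytes:
    """Provider-side: sign the SHA-256 digest of the artifact."""
    digest = hashlib.sha256(artifact).digest()
    return private_key.sign(digest)


def verify(public_key: Ed25519PublicKey, artifact: bytes, signature: bytes) -> bool:
    """Verifier-side: check the signature against the artifact's digest."""
    digest = hashlib.sha256(artifact).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Usage: a hypothetical provider attests an output, a client verifies it.
provider_key = Ed25519PrivateKey.generate()
artifact = b"model output bytes..."
signature = attest(provider_key, artifact)
assert verify(provider_key.public_key(), artifact, signature)
assert not verify(provider_key.public_key(), b"tampered output", signature)
```

The design choice here is that trust reduces to key distribution: the hard parts this sketch leaves open are exactly the ones the list points at, i.e., how providers' keys get accredited and published (decentralised identity) and how to attest provenance without revealing the provider (zero-knowledge proofs).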