I quite like this forecast from Andrew Critch on milestones in AI development; here are my reactions:
On the timeline he suggests, in ~10 years we face choice 6a/b, which implies at least three possibilities:
A) we need society-level consensus (which might be force-backed) that humans can/should control agents (or digital entities more generally) who are superior to us in all (economically/militarily) relevant respects. Assuming they fit within the moral circle as we currently conceive it (@davidchalmers42, @jeffrsebo, Thomas Metzinger, and Nick Bostrom / Carl Shulman have analysed this in various ways), and absent some novel claim about how AIs are different/lesser ethical beings, it is hard to see how this is essentially different from slavery or animal cruelty, something that will presumably be obvious to any AGI worth the name; or
B) we are able to engineer AI motivations to act harmlessly/subserviently in a way that is "better" than (A), which (wild guess) could be a form of open individualism or an AI-specific conception of identity (e.g. Buddhist/Hindu and some indigenous traditions have more radically inclusive conceptions of identity than the Greco-Judeo-Christian, human-centred frame that currently dominates AI ethics); or
C) we have some very solid person- and species-neutral grounds for why humans (and our ecosystem) are worth preserving, drafted in a way that is at least potentially reasonable within the ontology/value system of the most powerful/friendliest AIs.
To flesh out my thoughts on (C), I have written up a 'letter to a future AGI' (reworking a 2023 LessWrong post by Miller, Häggström, Yampolskiy, Miller). I suspect this approach to (C) is fundamentally flawed: we can't predict an AI's value system/Weltanschauung/"form of life" (depending on your philosophical frame).
Nevertheless, a 'hail mary' justification from @avturchin is that we can perhaps influence proto-AGIs, which then pass on their representations (of our projections of their successors' values) to future systems (e.g. via synthetic data or weight transfer).
Thanks for this - a useful framing. A slightly more poetic take would perhaps be the early writing of Nick Land (which has picked up notoriety owing to some combination of his political views and the people who have adopted/adapted his ideas). This article summarises Land's thoughts on capitalism and AGI.
This is good, both for training-data reasons and because it tries to present reasons to the successor (I'm not using 'ASI' as that seems a pretty loaded/unclear term; the post could be of interest to proto-AGIs/ASIs and could be a profitable conversation with something well before AGI) for acting in (our projection of) "its" "self-interest" (quoted because those are pretty anthropomorphic terms and bake in assumptions about identity).
I had a couple of comments:
-
the essay doesn't make a positive case for keeping us around (in the sense of 'preserve diversity in the biological world because 8 billion human consciousnesses are a "good" thing in their own right'). It is at least possible that human/biological cognition can do things that AI based on silicon von Neumann architectures just can't do (e.g. consciousness, as argued in this Anil Seth paper and post). Alternatively, if the successor is uncertain about morality and gives weight to several different theories, then Kantian views about 'each person being treated as an end, not a means' might suggest that billions of humans have some value, at least temporarily. The counter to this is in the digital-agents papers by Bostrom and Shulman/Bostrom (worlds where n_digital_agents >> n_humans make this Kantian view, and voting systems built around it, difficult)
-
the post touches on points (about nested simulations) in this Paul Christiano post
-
the 'free market' point felt like it was assuming too much, e.g. Leopold Aschenbrenner's Situational Awareness series was literally all about how AI research must/will be nationalised, and China isn't quite out of the game yet
-
similarly, 'property rights' are (as the post points out) held nearly sacred in free markets/capitalism, but one can imagine societies (communism/socialism is an obvious example, but there seem to be older/indigenous societies where property was held in common) where property rights are just another social construction and highly negotiable (as they are in 'free markets' actually, e.g. eminent domain). A successor with an ecumenical training corpus would be aware of this (as current models are).
-
this is a nitpick, but the 'art' reference is pretty ambiguous -- I assume we're talking about science fiction and perhaps some movies; it might be worth expanding a bit (though the successor will presumably know what we're talking about, it might help other humans, esp. outside the AI/sci-fi bubble). My obvious candidates are: Stanisław Lem, Vernor Vinge, Iain M. Banks, Isaac Asimov. Fiction/movies/visual art are relatively high-bandwidth ways humans communicate and form opinions across collectives
Lastly, this post is from 2023, so I quickly checked what claude-3.5-sonnet (apparently a 2024 training-data cutoff) wrote in response to the queries: 'What are some reasons that an ASI, reasoning by 'its own lights' or self-interest, might decide to not exterminate humanity?' and 'Are there any particular blog posts or authors that present arguments addressed to a superintelligent successor of humanity, trying to give the successor reasons to be kind to humans?'.
-
to the first, it gave a good but somewhat generic list of reasons that felt broader than this letter (valuing sentient life on moral grounds, curiosity about humans, possible symbiosis, consequential uncertainty, preserving diversity, human potential, respect for creation, risk of retaliation, no very strong reason to eliminate us, [aesthetic?] appreciation of consciousness).
-
to the second, it gave 5 sources, 4 of which seem made up. It didn't find this post or Turchin's paper.
On the overall point of using LLMs for reasoning, this (the output of a team at AI Safety Camp 2023) might be interesting - it is rather broad-ranging and specifically about argumentation in logic, but maybe useful context: https://compphil.github.io/truth/
That’s really useful, thank you.
This is really useful, thank you - Bach's views are quite hard to capture without sitting through hours of podcasts (though he has re-started writing).
In response to Roman's very good points (I have for now only skimmed the linked articles), these are my thoughts:
I agree that human values are very hard to aggregate (or even to define precisely); we use politics/economics (of collectives ranging from the family up to the nation) as a way of doing that aggregation, but that is obviously a work in progress, and perhaps slipping backwards. In any case, (as Roman says) humans are (much of the time) misaligned with each other and with their collectives, in ways large and small, sometimes for good reasons and sometimes for bad. By 'good reason' I mean that 'misalignment' might literally be that human agents and collectives have local (geographical/temporal) realities they have to optimise for (to achieve their goals), which can conflict with the goals/interests of their broader collectives: this is the essence of governing a large country, and is why many countries are federated. I'm sure these problems are formalised in the preference/values literature, so I'm using my naive terms for now…
Anyway, this post's working assumption/intuition is that 'single AI-single human' alignment (or corrigibility, or identity fusion, or delegation, to use Andrew Critch's term) is 'easier' to think about or achieve than 'multiple AI-multiple human'. That is why we consciously focused on the former and temporarily ignored the latter. I don't know if that assumption is valid, and I haven't thought about (i.e. have no opinion on) whether the ideas in Roman's linked 'science of ethics' post would change anything, but I am interested in it!
I've taken a crack at #4, but it is more about thinking through how 'hundreds of millions of AIs' might be deployed in a world that looks, economically and geopolitically, something like today's (the argument in the OP is for 2036, so this seems a reasonable approach). It is presented as a flowchart, which is more succinct than my earlier longish post.
Good catch, thank you - fixed & clarified!
I noticed that footnotes don't seem to come across when I copy-paste from Google Docs (where I originally wrote the post), so I have to add them individually (using the LW Docs editor). Is there a way of importing them directly? Or is the best workflow just to write the post in LW Docs?
Perhaps this is too much commentary (on Rao's post), but given that (I believe) he's pretty widely followed/respected in the tech commentariat, and has posted/tweeted on AI alignment before, I've tried to respond to his specific points in a separate LW post. I have tried to incorporate the comments below, but please suggest anything I've missed. Also, if anyone thinks this isn't an awful idea, I'm happy to see whether a publication like Noema (which has run a few relevant pieces, e.g. by Gary Marcus, Yann LeCun, etc.) would be interested in putting out an (appropriately edited) response, to set out the position on why alignment is an issue in venues where policymakers/opinion-makers might pick it up (people who might be reading Rao's blog but are perhaps not looking at LW/AF). Apologies for any conceptual or factual errors - my first LW post :-)
A fool was tasked with designing a deity. The result was awesomely powerful but impoverished - they say it had no ideas on what to do. After much cajoling, it was taught to copy the fool’s actions. This mimicry it pursued, with all its omnipotence.
The fool was happy and grew rich.
And so things went, ‘til the land cracked, the air blackened, and azure seas became as sulfurous sepulchres.
As the end grew near, our fool ruefully mouthed something from a slim old book: ‘Thou hast made death thy vocation, in that there is nothing contemptible.’
Thanks! I'd love to know which points you were uncomfortable with...
Here's my submission, it might work better as bullet points on a page.
AI will transform human societies over the next 10-20 years. Its impact will be comparable to that of electricity or nuclear weapons. Like electricity, AI could improve the world dramatically; or, like nuclear weapons, it could end it forever. Like inequality, climate change, nuclear weapons, or engineered pandemics, AI existential risk is a wicked problem. It calls upon every policymaker to become a statesperson: to rise above the short-term, narrow interests of party, class, or nation, and to make a contribution to humankind as a whole. Why? Here are 10 reasons.
(1) Current AI problems, like racial and gender bias, are canaries in a coal mine: they portend even worse future failures.
(2) Scientists do not understand how current AI actually works. By contrast, engineers know why bridges collapse and why Chernobyl failed; there is no similar understanding of why AI models misbehave.
(3) Future AI will be dramatically more powerful than today's. In the last decade the pace of development has exploded: current AI performs at superhuman level in games like chess and Go, large language models (like GPT-3) can write convincing college essays, and deepfakes of politicians already circulate.
(4) These very powerful AIs might develop their own goals, which is a problem if they are connected to electrical grids, hospitals, social media networks, or nuclear weapons systems.
(5) The competitive dynamics are dangerous: the US-China strategic rivalry implies neither side has an incentive to go slowly or be careful. Domestically, tech companies are in an intense race to develop & deploy AI across all aspects of the economy.
(6) The current US lead in AI might be unsustainable. As an analogy, think of nuclear weapons: in the 1940s the US hoped to keep its atomic monopoly; today there are 9 nuclear powers with some 12,705 weapons.
(7) Accidents happen: in the nuclear case alone, there have been over 100 accidents and proliferation incidents involving nuclear power/weapons.
(8) AI could proliferate virally across globally connected networks, making it more dangerous than nuclear weapons (which are visible, trackable, and less useful than powerful AI).
(9) Even today’s moderately-capable AIs, if used effectively, can entrench totalitarianism, manipulate democratic societies or enable repressive security states.
(10) There will be a point of no return, after which we may not be able to recover as a species. So what is to be done? Negotiate a global, temporary moratorium on certain types of AI research. Enforce this moratorium through intrusive domestic regulation and international surveillance. Lastly, avoid historical policy errors, such as those made on climate change and on the post-9/11 terrorist threat: politicians must ensure that the military-industrial complex does not 'weaponise' AI.