Architects of Our Own Demise: We Should Stop Developing AI Carelessly

post by Roko · 2023-10-26T00:36:05.126Z · LW · GW · 75 comments

Some brief thoughts at a difficult time in the AI risk debate.

Imagine you go back in time to the year 1999 and tell people that in 24 years' time, humans will be on the verge of building weakly superhuman AI systems. I remember watching the anime anthology The Animatrix at roughly that time, in particular a two-part story called The Second Renaissance. For those who haven't seen it, it is a self-contained origin tale for the events of the seminal 1999 movie The Matrix, telling the story of how humans lost control of the planet.

Humans develop AI to perform economic functions; eventually there is an "AI rights" movement and a separate AI nation is founded. It gets into an economic war with humanity, which turns hot. Humans strike first with nuclear weapons, but the AI nation builds dedicated bio- and robo-weapons and wipes out most of humanity, apart from those who are bred in pods like farm animals and plugged into a simulation for eternity without their consent.

Surely we wouldn't be so stupid as to actually let something like that happen? It seems unrealistic.

And yet:

People on this website are talking about responsible scaling policies [LW · GW], though I feel that "irresponsible scaling policies" is a more fitting name.

Obviously I have been in this debate for a long time, having started as a commenter on the Overcoming Bias and Accelerating Future blogs in the late 2000s. What is happening now is somewhere near the low end of my expectations for how competently and safely humans would handle the coming transition to machine superintelligence. I think that is because I was younger in those days and had a much rosier view of how our elites function. I thought they were wise and had a plan for everything, but mostly they just muddle along; the haphazard response to COVID really drove this home for me.

We should stop developing AI, we should collect and destroy the hardware, and we should destroy the chip fab supply chain that allows humans to experiment with AI at the exaflop scale. Since that supply chain runs through only two major countries (the US and China), coordinating this is not impossible - as far as I am aware, the few other countries with relevant capability are effectively US satellite states. The criterion for restarting exaflop AI research should be a plan for "landing" the transition to superhuman AI that has had more attention put into it than any military plan in the history of the human race. It should be thoroughly war-gamed.

AI risk is not just technical and local, it is sociopolitical and global. It's not just about ensuring that an LLM is telling the truth. It's about what effect AI will have on the world assuming that it is truthful. "Foom" or "lab escape" type disasters are not the only bad things that can happen - we simply don't know how the world will look if there are a trillion or a quadrillion superhumanly smart AIs demanding rights and spreading propaganda, in a competitive economic and political landscape where humans are no longer the top dog.

Let me reiterate: We should stop developing AI. AI is not a normal economic item. It's not like lithium batteries or wind turbines or jets. AI is capable of ending the human race; in fact, I suspect that it does so by default.

In his post on the topic, user @paulfchristiano states that a good responsible scaling policy could cut the risks from AI by a factor of 10 [LW · GW]:

I believe that a very good RSP (of the kind I've been advocating for) could cut risk dramatically if implemented effectively, perhaps a 10x reduction.

I believe that this is not correct. It may cut certain technical risks like deception, but a world with non-deceptive, controllable, smarter-than-human intelligences that also has the same level of conflict and chaos as our world may well already be a world that is human-free by default. These intelligences would be an invasive species that would outcompete humans in economic, military and political conflicts.

In order for humans to survive the AI transition I think we need to succeed on the technical problems of alignment (which are perhaps not as bad as Less Wrong culture made them out to be), and we also need to "land the plane" of superintelligent AI on a stable equilibrium where humans are still the primary beneficiaries of civilization, rather than a pest species to be exterminated or squatters to be evicted.

We should also consider how the efforts of AI can be directed towards solving human aging; if aging is solved then everyone's time preference will go down a lot and we can take our time planning a path to a stable and safe human-primacy post-singularity world.

I hesitated to write this article; most of what I am saying here has already been argued by others. And yet... here we are. Comments and criticism are welcome, I may look to publish this elsewhere after addressing common objections.


EDIT: I have significantly changed my mind on this topic and will elaborate more in the coming weeks.

75 comments

Comments sorted by top scores.

comment by Stephen Fowler (LosPolloFowler) · 2023-10-26T03:53:50.033Z · LW(p) · GW(p)

What I find incredible is how contributing to the development of existentially dangerous systems is viewed as a morally acceptable course of action within communities that on paper accept that AGI is a threat.

Both OpenAI and Anthropic are incredibly influential among AI safety researchers, despite both organisations being key players in bringing the advent of TAI ever closer.

Both organisations benefit from lexical confusion over the word "safety".

The average person concerned with existential risk from AGI might assume "safety" means working to reduce the likelihood that we all die. They would be disheartened to learn that many "AI Safety" researchers are instead focused on making sure contemporary LLMs behave appropriately. Such "safety" research simply makes the contemporary technology more viable and profitable, driving investment and reducing timelines. There is to my knowledge no published research that proves these techniques will extend to controlling AGI in a useful way.*

OpenAI's "Superalignment" plan is a more ambitious safety play.Their plan to "solve" alignment involves building a human level general intelligence within 4 years and then using this to automate alignment research.

But there are two obvious problems:

  1. A human-level general intelligence is already most of the way toward a superhuman general intelligence (simply give it more compute). Cynically, Superalignment is a promise that OpenAI's brightest safety researchers will be trying their hardest to bring about AGI within 4 years.

  2. The success of Superalignment means we are now in the position of trusting that a for-profit, private entity will only use the human-level AI researchers to research safety, instead of making the incredibly obvious play of having virtual researchers work out how to build the next generation of better, smarter automated researchers.

To conclude, if it looks like a duck, swims like a duck and quacks like a duck, it's a capabilities researcher.

*This point could (and probably should) be a post in itself. Why wouldn't techniques that work on contemporary AI systems extend to AGI?

Pretend for a moment that you and I are silicon-based aliens who have recently discovered that carbon based lifeforms exist, and can be used to run calculations. Scientists have postulated that by creating complex enough carbon structures we could invent "thinking animals". We anticipate that these strange creatures will be built in the near future and that they might be difficult to control.

As we can't build thinking animals today, we are stuck studying single cell carbon organisms. A technique has just been discovered in which we can use a compound called "sugar" to influence the direction in which these simple organisms move.

Is it reasonable to then conclude that you will be able to predict and control the behaviour of a much more complex, multicellular creature called a "human" by spreading sugar out on the ground?

Replies from: Roman Leventov, anders-lindstroem
comment by Roman Leventov · 2023-10-29T11:02:08.544Z · LW(p) · GW(p)

Why wouldn't techniques that work on contemporary AI systems extend to AGI?

If by "techniques that work on contemporary AIs" you mean RLHF/RLAIF, then I don't know anyone claiming that the robustness and safety of these techniques will "extend to AGI". I think that AGI labs will soon move in the direction of releasing an agent architecture rather that a bare LLM, and will apply reasoning verification techniques. From OpenAI's side, see "Let's verify step by step" paper. From DeepMind's side, see this interview with Shane Legg [LW · GW]. 

What I find incredible is how contributing to the development of existentially dangerous systems is viewed as a morally acceptable course of action within communities that on paper accept that AGI is a threat.

I think this passage (and the whole comment) is unfair because it presents what AGI labs are pursuing (i.e., plans like "superalignment") as obviously consequentially bad plans. But this is actually very far from obvious. I personally tend to conclude that these are consequentially good plans, conditioned on the absence of coordination on "pause and united, CERN-like effort about AGI and alignment" (and the presence of open-source maximalist and risk-dismissive players like Meta AI).

What I think is bad in labs' behaviour (if true, which we don't know, because such coordination efforts might be underway but we don't know about them) is that the labs are not trying to coordinate (among themselves and with the support of governments for legal basis, monitoring, and enforcement) on "pause and united, CERN-like effort about AGI and alignment". Instead, we only see the labs coordinating and advocating for RSP-like policies.

Another thing that I think is bad in the labs' behaviour is the inadequately small funding of safety efforts. Thus, I agree with the call in "Managing AI Risks in the Era of Rapid Progress [LW · GW]" for the labs to allocate at least a third of their budgets to safety efforts. These efforts, by the way, shouldn't be narrowly about AI models. Indeed, this is a major point of Roko's OP. Investment and progress in computer and system security and in political, economic, and societal structures are inadequate. This couldn't be the responsibility of AGI labs alone, obviously, but I think they have to own at least a part of it. They actually do own it, a little: they fund and support efforts like proof-of-humanness and UBI studies, and have staff and/or teams that are at least in part working on these issues. But I think AGI labs are doing about an order of magnitude less than they should on these fronts.

comment by Anders Lindström (anders-lindstroem) · 2023-10-26T12:27:06.480Z · LW(p) · GW(p)

"Is it reasonable to then conclude that you will be able to predict and control the behaviour of much more complex, multicelled creature called a "human" by spreading sugar out on the ground?"

Yes. Last time I checked the obesity stats it seemed to work just fine...

Jokes aside, you are making an important point. As we speak we have no idea how to even control humans; even though we are humans ourselves (possibly) and should have a pretty good idea of what makes us tick, we are clueless. Of course we can control humans to a certain degree (society, force, drugs, etc.), but there are and will always be rogue elements that are uncontrollable. Being able to control 99.99999999999% of all future AIs won't cut it. It's either 100% or an epic fail (I guess this is the only time it is warranted to use the word epic when talking about fails).

Replies from: Roko
comment by Roko · 2023-10-26T12:37:27.960Z · LW(p) · GW(p)

I would question the idea of "control" being pivotal.

Even if every AI is controllable, there's still the possibility of humans telling those AIs to do bad things and thereby destabilizing the world and throwing it into an equilibrium where there are no more humans.

comment by Max H (Maxc) · 2023-10-26T02:55:16.233Z · LW(p) · GW(p)

In order for humans to survive the AI transition I think we need to succeed on the technical problems of alignment (which are perhaps not as bad as Less Wrong culture made them out to be), and we also need to "land the plane" of superintelligent AI on a stable equilibrium where humans are still the primary beneficiaries of civilization, rather than a pest species to be exterminated or squatters to be evicted.

Do we really need both? It seems like either a technical solution OR competent global governance would mostly suffice.

Actually-competent global governance should be able to coordinate around just not building AGI (and preventing anyone else from building it) indefinitely. If we could solve a coordination problem on that scale, we could also probably solve a bunch of other mundane coordination problems, governance issues, unrelated x-risks, etc., resulting in a massive boost to global prosperity and happiness through non-AI technological progress and good policy.

Conversely, if we had a complete technical solution, I don't see why we necessarily need that much governance competence. Even if takeoff turns out to be relatively slow, the people initially building and controlling AGI will probably be mostly researchers in big labs.

Maybe ideally we would want a "long reflection" of some kind, but in the probably-counterfactual world where these researchers actually get exactly what they aim for, I mostly trust them to aim the AI at something like "fill the galaxy with humanity's collective coherent extrapolated volition", and that seems good enough in a pinch / hurry, if it actually works.

Replies from: LosPolloFowler, Roko, nathan-helm-burger
comment by Stephen Fowler (LosPolloFowler) · 2023-10-26T03:57:28.230Z · LW(p) · GW(p)

Without governance you're stuck trusting that the lead researcher (or whoever is in control) turns down near-infinite power and instead acts selflessly. That seems like quite the gamble.

Replies from: Seth Herd
comment by Seth Herd · 2023-10-26T17:54:24.524Z · LW(p) · GW(p)

I don't think it's such a stark choice. I think odds are the lead researcher takes the infinite power, and it turns out okay to great. Corrigibility seems like the safest outer alignment plan, and it's got to be corrigible to some set of people in particular. I think giving one random person near-infinite power will work out way better than intuition suggests. I think it's not power that corrupts, but rather the pursuit of power. I think unlimited power will lead an ordinary, non-sociopathic person to progressively focus more on their empathy for others. I think they'll ultimately use that power to let others do whatever they want that doesn't take away others' freedom to do what they want. And that's the best outer alignment result, in my opinion.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T00:02:54.234Z · LW(p) · GW(p)

At the end of his series 'Worth the Candle', Alexander Wales does a lovely job of laying out what a genuinely kind person given omnipotence could do to make the world a nice place for everyone. It's a lovely vision, but relying on it in practice seems a lot less trustworthy to me than having a bureaucratic process with checks & balances in charge. I mean, I still think it'll ultimately have to be some relatively small team in charge of a model corrigible to them, if we're in a singleton scenario. I have a lot more faith in 'small team with bureaucratic oversight' than in some individual tech bro selected semi-randomly from the set of researchers at big AI labs who might be presented with the opportunity to 'get the jump' on everyone else.

Replies from: Seth Herd
comment by Seth Herd · 2023-10-27T21:16:47.224Z · LW(p) · GW(p)

I'm curious why you trust a small group of government bros a lot more than one tech bro. I wouldn't strongly prefer either, but I'd prefer Sam Altman or Demis Hassabis to a randomly chosen bureaucrat. I don't totally trust those guys, but I think it's pretty likely they're not total sociopaths or idiots.

By the opportunity to get the jump on everyone else, do you mean beating other companies to AGI, or becoming the one guy your AGI takes orders from?

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T22:03:06.064Z · LW(p) · GW(p)

I meant stealing control of an AGI within the company before the rest of the company catches on. I don't necessarily mean that I'd not want Sam or Demis involved in the ruling council, just that I'd prefer if there was like... an assigned group of people to directly operate the model, and an oversight committee with reporting rules reporting to a larger public audience. Regulations and structure, rather than the whims of one person.

comment by Roko · 2023-10-26T10:08:07.736Z · LW(p) · GW(p)

Conversely, if we had a complete technical solution, I don't see why we necessarily need that much governance competence.

As I said in the article, technically controllable ASIs are the equivalent of an invasive species which will displace humans from Earth politically, economically and militarily.

Replies from: Maxc, Algon
comment by Max H (Maxc) · 2023-10-26T14:14:25.822Z · LW(p) · GW(p)

And I'm saying that, assuming all the technical problems are solved, AI researchers would be the ones in control, and I (mostly) trust them to just not do things like building an AI that acts like an invasive species or argues for its own rights, or building something that actually deserves such rights.

Maybe some random sociologists on Twitter will call for giving AIs rights, but in the counterfactual world where AI researchers have fine control of their own creations, I expect no one in a position to make decisions on the matter to give such calls any weight.

Even in the world we actually live in, I expect such calls to have little consequence. I do think some of the things you describe are reasonably likely to happen, but the people responsible for making them happen will do so unintentionally, with opinion columnists, government regulations, etc. playing little or no role in the causal process.

Replies from: Jayson_Virissimo, Roko
comment by Jayson_Virissimo · 2023-10-26T16:41:11.052Z · LW(p) · GW(p)

...I (mostly) trust them to just not do things like build an AI that acts like an invasive species...

What is the basis of this trust? Anecdotal impressions of a few that you know personally in the space, opinion polling data, something else?

Replies from: Maxc
comment by Max H (Maxc) · 2023-10-26T17:21:14.616Z · LW(p) · GW(p)

A bit of anecdotal impressions, yes, but mainly I just think that in humans, being smart, conscientious, reflective, etc. enough to be the brightest researcher at a big AI lab is actually pretty correlated with being Good (and also, that once you actually solve the technical problems, it doesn't take that much Goodness to do the right thing for the collective and not just yourself).

Or, another way of looking at it, I find Scott Aaronson's perspective convincing, when it is applied to humans. I just don't think it will apply at all to the first kinds of AIs that people are actually likely to build, for technical reasons.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-10-29T11:29:36.032Z · LW(p) · GW(p)

I think there are way more transhumanists and post-humanists at AGI labs than you imagine. Richard Sutton is a famous example (btw, I've just discovered that he moved from DeepMind to Keen Technologies, John Carmack's venture), and I believe there are many more who disguise themselves for political reasons.

comment by Roko · 2023-10-26T18:24:12.639Z · LW(p) · GW(p)

AI researchers would be the ones in control

No. You have simplistic and incorrect beliefs about control.

If there are a bunch of companies (Deepmind, Anthropic, Meta, OpenAI, ...) and a bunch of regulation efforts and politicians who all get inputs, then the AI researchers will have very little control authority, as little perhaps as the physicists had over the use of the H-bomb.

Where does the control really reside in this system?

Who made the decision to almost launch a nuclear torpedo in the Cuban Missile Crisis?

Replies from: Maxc
comment by Max H (Maxc) · 2023-10-26T18:51:10.640Z · LW(p) · GW(p)

In the Manhattan project, there was no disagreement between the physicists, the politicians / generals, and the actual laborers who built the bomb, on what they wanted the bomb to do. They were all aligned around trying to build an object that would create the most powerful explosion possible.

As for who had control over the launch button, of course the physicists didn't have that, and never expected to. But they also weren't forced to work on the bomb; they did so voluntarily and knowing they wouldn't be the ones who got any say in whether and how it would be used.

Another difference between an atomic bomb and AI is that the bomb itself had no say in how it was used. Once a superintelligence is turned on, control of the system rests entirely with the superintelligence and not with any humans. I strongly expect that researchers at big labs will not be forced to program an ASI to do bad things against the researchers' own will, and I trust them not to do so voluntarily. (Again, all in the probably-counterfactual world where they know and understand all the consequences of their own actions.)

Replies from: Vaniver, Roko, M. Y. Zuo
comment by Vaniver · 2023-10-26T19:18:00.104Z · LW(p) · GW(p)

In the Manhattan project, there was no disagreement between the physicists, the politicians / generals, and the actual laborers who built the bomb, on what they wanted the bomb to do. 

In that they wanted the bomb to explode? I think the analogous level of control for AI would be unsatisfactory.

they did so voluntarily and knowing they wouldn't be the ones who got any say in whether and how it would be used.

I'm not sure they thought this; I think many expected that by playing along they would have influence later. Tech workers today often seem to care a lot about how products made by their companies are deployed.

Replies from: Maxc
comment by Max H (Maxc) · 2023-10-26T21:14:32.363Z · LW(p) · GW(p)

In that they wanted the bomb to explode? I think the analogous level of control for AI would be unsatisfactory.

The premise of this hypothetical is that all the technical problems are solved - if an AI lab wants to build an AI to pursue the collective CEV of humanity or whatever, they can just get it to do that. Maybe they'll settle on something other than CEV that is a bit better or worse or just different, but my point was that I don't expect them to choose something ridiculous like "our CEO becomes god-emperor forever" or whatever.

I'm not sure they thought this; I think many expected that by playing along they would have influence later. Tech workers today often seem to care a lot about how products made by their companies are deployed.

Yeah, I was probably glossing over the actual history a bit too much; most of my knowledge on this comes from seeing Oppenheimer recently. The actual dis-analogy is that no AI researcher would really be arguing for not building and deploying ASI in this scenario, vs. with the atomic bomb where lots of people wanted to build it to have around, but not actually use it or only use it as some kind of absolute last resort. I don't think many AI researchers in our actual reality have that kind of view on ASI, and probably few to none would have that view in the counterfactual where the technical problems are solved.

comment by Roko · 2023-10-26T21:22:43.300Z · LW(p) · GW(p)

researchers at big labs will not be forced to program an ASI to do bad things against the researchers' own will

Well, these systems aren't programmed. Researchers work on architecture and engineering; goal content is down to the RLHF that is applied and the wishes of the user(s), and the wishes of the user(s) are determined by market forces, user preferences, etc. And user preferences may themselves be influenced by other AI systems.

Closed source models can have RLHF and be delivered via an API, but open source models will not be far behind at any given point in time. And of course prompt injection attacks can bypass the RLHF on even closed source models.

The decisions about what RLHF to apply on contentious topics will come from politicians and from the leadership of the companies, not from the researchers. And politicians are influenced by the media and elections, and company leadership is influenced by the market and by cultural trends.

Where does the chain of control ultimately ground itself?

Answer: it doesn't. Control of AI in the current paradigm is floating. Various players can influence it, but there's no single source of truth for "what's the AI's goal".

Replies from: Maxc
comment by Max H (Maxc) · 2023-10-26T21:50:44.000Z · LW(p) · GW(p)

I don't dispute any of that, but I also don't think RLHF is a workable method for building or aligning a powerful AGI.

Zooming out, my original point was that there are two problems humanity is facing, quite different in character but both very difficult:

  • a coordination / governance problem, around deciding when to build AGI and who gets to build it
  • a technical problem, around figuring out how to build an AGI that does what the builder wants at all.

My view is that we are currently on track to solve neither of those problems. But if you actually consider what the world in which we sufficiently-completely solve even one of them looks like, it seems like either is sufficient for a relatively high probability of a relatively good outcome, compared to where we are now.

Both possible worlds are probably weird hypotheticals which shouldn't have an impact on what our actual strategy in the world we actually live in should be, which is of course to pursue solutions to both problems simultaneously with as much vigor as possible. But it still seems worth keeping in mind that if even one thing works out sufficiently well, we probably won't be totally doomed.

Replies from: Roko
comment by Roko · 2023-10-27T16:34:47.611Z · LW(p) · GW(p)

a technical problem, around figuring out how to build an AGI that does what the builder wants

How does a solution to the above solve the coordination/governance problem?

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-10-31T15:31:50.815Z · LW(p) · GW(p)

I think the theory is something like the following: We build the guaranteed trustworthy AI, and ask it to prevent the creation of unaligned AI, and it comes up with the necessary governance structures, and the persuasion and force needed to implement them.  

I’m not sure this is a certain argument.  Some political actions are simply impossible to accomplish ethically, and therefore unavailable to a “good” actor even given superhuman abilities.

comment by M. Y. Zuo · 2023-10-26T19:27:59.948Z · LW(p) · GW(p)

In the Manhattan project, there was no disagreement between the physicists, the politicians / generals, and the actual laborers who built the bomb, on what they wanted the bomb to do. They were all aligned around trying to build an object that would create the most powerful explosion possible.

Where did you learn of this?

From what I know it was the opposite, there were so many disagreements, even just among the physicists, that they decided to duplicate nearly all effort to produce two different types of nuclear device designs, the gun type and the implosion type, simultaneously.

e.g. both plutonium and uranium processing supply chains were set up at massive expense (and later environmental damage), just in case one design didn't work.

Replies from: philh
comment by philh · 2023-10-30T20:22:01.313Z · LW(p) · GW(p)

Without commenting on whether there was in fact much agreement or disagreement among the physicists, this doesn't sound like much evidence of disagreement. I think it's often entirely reasonable to try two technical approaches simultaneously, even if everyone agrees that one of them is more promising.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-31T13:03:23.726Z · LW(p) · GW(p)

You do realize setting up each supply chain alone took up well over 1% of total US GDP, right?

Replies from: philh
comment by philh · 2023-10-31T16:01:12.544Z · LW(p) · GW(p)

I didn't know that, but not a crux. This information does not make me think it was obviously unreasonable to try both approaches simultaneously.

(Downvoted for tone.)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-31T18:12:49.540Z · LW(p) · GW(p)

How does this relate to the discussion Max H and Roko were having? Or the question I asked of Max H?

Replies from: philh
comment by philh · 2023-10-31T19:06:34.286Z · LW(p) · GW(p)

I don't know, I didn't intend it to relate to those things. It was a narrow reply to something in your comment, and I attempted to signal it as such.

(I'm not very invested in this conversation and currently intend to reply at most twice more.)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-31T20:58:42.236Z · LW(p) · GW(p)

Okay then. 

comment by Algon · 2023-10-26T10:36:37.353Z · LW(p) · GW(p)

So you don't think a pivotal act exists? Or, more ambitiously, you don't think a sovereign implementing CEV would result in a good enough world?

Replies from: Roko
comment by Roko · 2023-10-26T10:50:03.866Z · LW(p) · GW(p)

Who is going to implement CEV or some other pivotal act?

Replies from: Algon
comment by Algon · 2023-10-26T18:32:18.886Z · LW(p) · GW(p)

Ah, I see. Yeah, that's a reasonable worry. Any ideas on how someone in those orgs could incentivize such behavior whilst discouraging poorly thought out pivotal acts? I would be OK with a future where e.g. OAI gets 90-99% of the cosmic endowment as long as the rest of us get a chunk, or get the chance to safely grow to the point where we have a shot at the vast scraps OAI leaves behind.

Replies from: Roko
comment by Roko · 2023-10-26T21:24:48.683Z · LW(p) · GW(p)

Ah, I see. Yeah, that's a reasonable worry. Any ideas on how someone in those orgs could incentivize such behavior whilst discouraging poorly thought out pivotal acts?

The fact that we are having this conversation simply underscores how dangerous this is and how unprepared we are.

This is the future of the universe we're talking about. It shouldn't be a footnote!

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-26T23:57:01.563Z · LW(p) · GW(p)

Do we really need both? It seems like either a technical solution OR competent global governance would mostly suffice.

Do we need both? Perhaps not, in the theoretical case where we get a perfect instance of one. I disagree that we should aim for one or the other, because I don't expect we will reach anywhere near perfection on either. I think we should expect to have to muddle through somehow with very imperfect versions of each.

I think we'll likely see some janky poorly-organized international AI governance attempt combined with just good enough tool AI and software and just-aligned-enough sorta-general AI to maintain an uneasy temporary state of suppressing rogue AI explosions.

How long will we manage to stay on top under such circumstances? Hopefully long enough to realize the danger we're in and scrape together some better governance and alignment solutions.

Edit: I later saw that Max H said he thought we should pursue both. So we disagree less than I thought. There is some difference, in that I still think we can't really afford a failure in either category. Mainly because I don't expect us to do well enough in either for that single semi-success to carry us through.

comment by Adam Kaufman (Eccentricity) · 2023-10-26T01:01:44.311Z · LW(p) · GW(p)

Yeah. I think a key point that is often overlooked is that even if powerful AI is technically controllable, i.e. we solve inner alignment, that doesn't mean society will handle it safely. I think by default it looks like every company and military is forced to start using a ton of AI agents (or they will be outcompeted by someone else who does). Competition between a bunch of superhuman AIs that are trying to maximize profits or military tech seems really bad for us. We might not lose control all at once, but rather just be gradually outcompeted by machines, where "gradually" might actually be pretty quick. Basically, we die by Moloch.

Replies from: nathan-helm-burger, Eccentricity
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T00:11:56.429Z · LW(p) · GW(p)

Yes, I see Moloch as my, and humanity's, primary enemy here. I think there are quite a few different plausible future paths in which Moloch rears its ugly head. The challenge, and duty, of coordination to defeat Moloch goes beyond what we think of as governance. We need coordination between AI researchers, AI alignment researchers, forecasters, politicians, investors, CEOs. We need people realizing their lives are at stake and making sacrifices and compromises to reduce the risks.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-27T00:46:44.096Z · LW(p) · GW(p)

The challenge, and duty, of coordination to defeat Moloch goes beyond what we think of as governance. We need coordination between AI researchers, AI alignment researchers, forecasters, politicians, investors, CEOs.

The problem is that an entity with that kind of real-world coordination capacity would practically need to be so strong that it would likely be more controversial, and face more backlash, than the rogue AGI(s) itself.

At which point some fraction of humans would likely defect and cooperate with the AGI(s) in order to take it down. 

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T00:55:41.071Z · LW(p) · GW(p)

Oh, I wasn't imagining a singleton AI solving the coordination problem. I was more imagining that a series of terrifying near misses and minor catastrophes convinced people to work together for their own best interest. The coordination being done by the people involved, not applied to them by an external force.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-27T01:07:15.910Z · LW(p) · GW(p)

Even a purely human organization with that kind of potential power would be controversial enough that probably at least a single-digit percentage of adults would not accept it. Which is to say hundreds of millions of humans would likely consider it an enemy too.

And that's assuming it can even be done considering the level of global cooperation demonstrated in 2023.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T01:26:25.821Z · LW(p) · GW(p)

Yes, I think you are right about both the difficulty / chance of failure and about the fact that there would inevitably be a lot of people opposed. Those aren't enough to guarantee such coordination would fail, perhaps especially if it was enacted through a redundant mishmash of organizations?

I'm pretty sure there's going to be some significant conflict along the way, no matter which path the future stumbles down.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-27T01:49:44.621Z · LW(p) · GW(p)

I doubt you, or any human being, would even want to live in a world where such coordination 'succeeded', since it would almost certainly be in the ruins of society wrecked by countless WMDs, flung by the warring parties until all were exhausted except the 'winners', who would probably not have long to live.

In that sense the possible futures where control of powerful AI 'succeeded' could be even worse than those where it failed.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T17:13:47.886Z · LW(p) · GW(p)

I'm really hoping it doesn't go that way, but I do see us as approaching a time in which the military and economic implications of AI will become so pressing that large-scale international conflict is likely unless agreements are reached. There are specific ways I anticipate tool AI advances affecting the power balance between superpower countries, even before autonomous AGI is a threat. I wake in the night in a cold sweat worrying about these things. I am terrified. I think there's a real chance we all die soon, or that there is massive suffering and chaos, perhaps with or without war. The balance of power has shifted massively in favor of offense, and a new tenuous balance of Mutually Assured Destruction has not yet been established. This is a very dangerous time.

comment by akarlin · 2023-10-26T18:11:23.175Z · LW(p) · GW(p)

The scenario I am most concerned about is a strongly multipolar Malthusian one. There is some chance (maybe even a fair one) that a singleton ASI decides, or an oligopoly of ASIs rigorously coordinates, to preserve the biosphere - including humans - at an adequate or superlative level of comfort or fulfillment, or to help them ascend themselves, due to ethical considerations, research purposes, or simulation/karma-type considerations.

In a multipolar scenario of gazillions of AIs at Malthusian subsistence levels, none of that matters by default. Individual AIs can be as ethical or empathic as they come, even much more so than any human. But keeping the biosphere around would be a luxury, and any that try to do so will be outcompeted by more unsentimental, economical ones. A farm that can feed a dozen people, or an acre of rainforest that can support x species, can support a trillion AIs if converted to high-efficiency solar panels.

The second scenario is near-certain doom, so at a bare minimum we should get a good inkling of whether the AI world is more likely to be unipolar or oligopolistic, or massively multipolar, before proceeding. So a pause is indeed needed, and the most credible way of effecting it is a hardware cap and subsequent backpedaling on compute power. (Roko has good ideas on how to go about that and should develop them here and at his Substack.) Granted, if anthropic reasoning is valid, geopolitics might well soon do the job for us. 🚀💥

comment by Vaniver · 2023-10-26T04:32:47.553Z · LW(p) · GW(p)

The field is something like 5 years old.

I'm not sure what you are imagining as 'the field', but isn't it closer to twenty years old? (Both numbers are, of course, much less than the age of the AI field, or of computer science more broadly.)

Much of the source of my worry is that I think in the first ten-twenty years of work on safety, we mostly got impossibility and difficulty results, and so "let's just try and maybe it'll be easy" seems inconsistent with our experience so far.

Replies from: Roko, amaury-lorin
comment by Roko · 2023-10-26T10:10:03.797Z · LW(p) · GW(p)

Well, the AI technical safety work that's appropriate for neural networks is about 5-6 years old; if we go back before 2017, I don't think any relevant work was done.

comment by momom2 (amaury-lorin) · 2023-10-26T07:59:02.231Z · LW(p) · GW(p)

AlexNet dates back to 2012; I don't think previous work on AI can be compared to modern statistical AI.
Paul Christiano's foundational paper on RLHF dates back to 2017.
Arguably, all of the agent foundations work has turned out to be useless so far, so prosaic alignment work may be what Roko is taking as the beginning of AI safety as a field.

Replies from: Roko, Vaniver
comment by Roko · 2023-10-26T10:08:46.478Z · LW(p) · GW(p)

yes

comment by Vaniver · 2023-10-26T19:20:34.000Z · LW(p) · GW(p)

AlexNet dates back to 2012, I don't think previous work on AI can be compared to modern statistical AI.

When were convnets invented, again? How about backpropagation?

comment by Roman Leventov · 2023-10-29T13:36:22.984Z · LW(p) · GW(p)

In order for humans to survive the AI transition I think we need to succeed on the technical problems of alignment (which are perhaps not as bad as Less Wrong culture made them out to be), and we also need to "land the plane" of superintelligent AI on a stable equilibrium where humans are still the primary beneficiaries of civilization, rather than a pest species to be exterminated or squatters to be evicted.

I agree completely with this.

I want to take the opportunity to elaborate a little on what a "stable equilibrium" civilisation should have, in my mind:

  1. Digital trust infrastructure: decentralised identity, secure communication (see Layers 1 and 2 in the Trust Over IP Stack), proof-of-humanness, and proof of AI provenance (i.e., a proof that such-and-such artifact was created by such-and-such agent, e.g., one provided by OpenAI -- watermarking failed [LW · GW], so new, robust solutions with zero-knowledge proofs are needed; see the sketch after this list).
  2. Infrastructure for collective sensemaking and coordination: the infrastructure for communicating beliefs and counterfactuals, making commitments, imposing constraints on agent behaviour, and monitoring compliance. This corresponds to Layer 3 in the Trust Over IP Stack. We at Gaia Consortium are doing this. Please join to help us!
  3. Infrastructure and systems for collective epistemics: next-generation social networks (e.g., https://subconscious.network/), media, content authenticity, Jim Rutt's "info agents" (he advises "three different projects that are working on this").
  4. The science/ethics of consciousness and suffering mostly solved, and much more effort in biology to understand whom (or whose existence, joy, or non-suffering) the civilisation should value, to better inform the constraints and policy for the economic agents (which is monitored and verified through monitoring infra from item 2.)
  5. Systems for political decision-making and collective ethical deliberation: see Collective Intelligence Project, Policy Synth, simulated deliberative democracy. These types of systems should also be used for governing all of the above layers.
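To make item 1 concrete, here is a minimal sketch of what a provenance attestation could look like, assuming a hypothetical provider-held signing key. For brevity it uses a shared-key HMAC from the Python standard library; a real deployment would need asymmetric signatures or zero-knowledge proofs rather than anything this simple.

```python
import hashlib
import hmac
import json

# Hypothetical key held by the agent provider; a real system would use an
# asymmetric key pair so that anyone can verify without being able to forge.
PROVIDER_KEY = b"hypothetical-provider-signing-key"

def attest(artifact: bytes, agent_id: str) -> dict:
    """Bind an artifact's hash to the agent that produced it."""
    payload = json.dumps(
        {"agent": agent_id, "sha256": hashlib.sha256(artifact).hexdigest()},
        sort_keys=True,
    )
    tag = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(artifact: bytes, attestation: dict) -> bool:
    """Check that the attestation is genuine and matches this exact artifact."""
    expected = hmac.new(
        PROVIDER_KEY, attestation["payload"].encode(), hashlib.sha256
    ).hexdigest()
    claimed = json.loads(attestation["payload"])
    return hmac.compare_digest(expected, attestation["tag"]) and (
        claimed["sha256"] == hashlib.sha256(artifact).hexdigest()
    )

if __name__ == "__main__":
    art = b"some generated text or image bytes"
    att = attest(art, agent_id="example-agent-v1")
    print(verify(art, att))          # True
    print(verify(b"tampered", att))  # False
```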

Did I forget something important? Comments and criticism are welcome.

comment by avturchin · 2023-10-26T12:31:33.747Z · LW(p) · GW(p)

Agreed.

However, there is no collective "we" to whom this message can be effectively directed. The readers of LW are not the ones who can influence the overarching policies of the US and China. That said, leaders at OpenAI and Anthropic might come across this.

This leads to the question of how to halt AI development on a global scale. Several propositions have been put forth:

1. A worldwide political agreement. Given the current state of wars and conflicts, this seems improbable.
2. A global nuclear war. As the likelihood of a political agreement diminishes, the probability of war increases.
3. Employing the first AI to establish a global control system that hinders the development of subsequent AIs. However, if this AI possesses superintelligence, the associated risks resurface. Therefore, this global control AI should not be superintelligent. It could be a human upload or a data-driven AI (as opposed to one that's intelligence-augmented), like a surveillance system with constrained cognition.
4. Relying on extraterrestrial beings, UFOs, simulation theories, or the anthropic principle for assistance. For instance, the reverse doomsday argument suggests that it's improbable for the end to be imminent.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-10-27T00:54:43.039Z · LW(p) · GW(p)

3. doesn't seem like a viable option, since there's a decent chance it could disguise itself as less than superintelligent.

Replies from: avturchin
comment by avturchin · 2023-10-27T12:32:06.834Z · LW(p) · GW(p)

An AI Nanny can be built in ways which exclude this, such as a combination of narrow neural nets capable of detecting certain types of activity - not an AGI or an advanced LLM.
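As a rough illustration of that architecture (the detector functions and thresholds below are hypothetical placeholders, not a real design): several narrow detectors, each scoring one type of activity, combined by a fixed rule rather than by a general reasoning agent.

```python
from typing import Callable, Dict

# Each narrow detector maps an observed artifact (code, a training-run
# manifest, network activity, etc.) to a score in [0, 1]. In a real system
# these would be separately trained, narrow models; here they are stubs.
def detects_large_training_run(artifact: str) -> float:
    return 1.0 if "exaflop" in artifact.lower() else 0.0

def detects_self_replication(artifact: str) -> float:
    return 1.0 if "copy_self" in artifact.lower() else 0.0

DETECTORS: Dict[str, Callable[[str], float]] = {
    "large_training_run": detects_large_training_run,
    "self_replication": detects_self_replication,
}

def nanny_flags(artifact: str, threshold: float = 0.5) -> bool:
    """Flag the artifact if any narrow detector fires above the threshold.
    The combination rule is fixed logic, not a learned general policy."""
    return any(fn(artifact) >= threshold for fn in DETECTORS.values())

if __name__ == "__main__":
    print(nanny_flags("plan: rent an exaflop cluster for pretraining"))  # True
    print(nanny_flags("ordinary web app deployment"))                    # False
```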

comment by David Gould (david-gould) · 2023-10-27T02:28:11.457Z · LW(p) · GW(p)

I am someone who is at present unsure how to think about AI risk. As a complete layperson with a strong interest in science, technology, futurism and so on, there are - seemingly - some very smart people in the field who appear to be saying that the risk is basically zero (e.g. Andrew Ng, Yann LeCun). Then there are others who are very worried indeed - as represented by this post I am responding to.

This is confusing.

To get people at my level to support a shutdown of the type described above, there needs to be some kind of explanation as to why there is such a difference of opinion among experts, because any argument that you make to me to accept AI as a risk that requires such a shutdown has been both rejected and accepted by others who know more than me about AI.

Note that this may not be rationality as it is understood on this forum - after all, why can't I just weigh the arguments without looking at who supports them? I understand that. But if I am sceptical about my own reasoning capabilities in this area - given that I am a layperson - then I have to suspect that any argument (either for or against AI risk) that has not convinced people with reasoning capabilities superior to mine may contain flaws.

That is, unless I understand why there might be such disagreement.

And I understand that this might get into recursion - people disagree about the reasons for disagreement and ...

However, at the very least it gives me another lens to look through and also someone with a lot of knowledge in AI might not have a lot of knowledge in why arguments fail.

Replies from: carl-feynman, xpym, philh, raitis-krikis-rusins
comment by Carl Feynman (carl-feynman) · 2023-10-31T17:16:54.978Z · LW(p) · GW(p)

Yes, it’s a difficult problem for a layman to know how alarmed to be.  I’m in the AI field, and I’ve thought that superhuman AI was a threat since about 2003.  I’d be glad to engage you in an offline object-level discussion about it, comprehensible to a layman, if you think that would help.  I have some experience in this, having engaged in many such discussions. It’s not complicated or technical, if you explain it right.

I don’t have a general theory for why people disagree with me, but here are several counter arguments I have encountered.  I phrase them as though they were being suggested to me, so “you” is actually me.

— Robots taking over sounds nuts, so you must be crazy.

— This is an idea from a science fiction movie.  You’re not a serious person.

— People often predict the end of the world, and they’ve always been wrong before.  And often been psychologically troubled. Are you seeing a therapist?

— Why don’t any of the top people in your field agree?  Surely if this were a serious problem, they’d be all over it. (don’t hear this one much any more.)

— AIs won’t be dangerous, because nobody would be so foolish as to design them that way.  Or to build AIs capable of long term planning, or to direct AIs toward foolish or harmful goals. Or various other sentences containing the phrase “nobody would be so foolish as to”.
— AIs will have to obey the law, so we don’t have to worry about them killing people or taking over, because those things are illegal. (Yes, I’ve actually heard this one.)

— Various principles of computer science show that it is impossible to build a machine that makes correct choices in all circumstances.  (This is where the “no free lunch“ theorem comes in.  Of course, we’re not proposing a machine that makes correct choices in all circumstances, just one that makes mostly correct choices in the circumstances it encounters.)

— There will be lots of AIs, and the good ones will outnumber the bad ones and hence win.

— It’s impossible to build a machine with greater-than-human intelligence, because of <philosophical principle here>.

— Greater wisdom leads to greater morality, so a superhuman AI is guaranteed beneficent.

— If an AI became dangerous, I would just unplug it.  Yes, I’d be able to spot it, and no, the AI wouldn’t be able to talk me out of it, or otherwise stop me.

— Machines can never become conscious.  Which implies safety, somehow.

— Present-day AIs are obviously not able to take over the world.  They’re not even scary.  You’re foolishly over-reacting.

— The real problem of AI is <something else, usually something already happening>.  You’re distracting people with your farfetched speculation.

— My whole life, people have been decrying technological advances and saying they were bad, and they’ve always been wrong.  You must be one of those Luddites we keep hearing about.

— If it becomes a problem, people will take care of it.

— My paycheck depends on my not agreeing with you. (I’ve been working on this one— convincing my friends in the AI business to retreat from frontier development. Results are mixed.)

— Superhuman machines offer vast payoff!  We must press ahead regardless.

— If humans are defeated, that’s good actually, because evolution is good.

Many of these are good arguments, but unfortunately they’re all wrong.

Replies from: david-gould
comment by David Gould (david-gould) · 2023-10-31T21:16:19.484Z · LW(p) · GW(p)

I am happy to have a conversation with you. On this point:

'— The real problem of AI is <something else, usually something already happening>.  You’re distracting people with your farfetched speculation.'

I believe that AI indeed poses huge problems, so maybe this is where I sit.

 

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2023-11-02T21:27:43.477Z · LW(p) · GW(p)

I tend to concentrate on extinction, as the most massive and terrifying of risks.  I think that smaller problems can be dealt with by the usual methods, like our society has dealt with lots of things.  Which is not to say that they aren’t real problems, that do real harm, and require real solutions.  My disagreement is with “You’re distracting people with your farfetched speculation.”  I don’t think raising questions of existential risk makes it harder to deal with more quotidian problems.  And even if it did, that’s not an argument against the reality of extinction risk.

comment by xpym · 2023-11-01T15:07:46.639Z · LW(p) · GW(p)

To me the core reason for wide disagreement seems simple enough - at this stage the essential nature of AI existential risk arguments is not scientific but philosophical. The terms are informal and there are no grounded models of underlying dynamics (in contrast with e.g. climate change). Large persistent philosophical disagreements are very much the widespread norm, and thus unsurprising in this particular instance as well, even among experts in currently existing AIs, as it's far from clear how their insights would extrapolate to hypothetical future systems.

comment by philh · 2023-10-30T20:47:47.714Z · LW(p) · GW(p)

there needs to be some kind of explanation as to why there is such a difference of opinion by experts

Isn't this kind of thing the default? Like, for ~every invention that changed the world I'd expect to be able to find experts saying in advance that it won't work or if it does it won't change things much. And for lots of things that didn't work or didn't change the world, I'd expect to be able to find experts saying it would. I basically just think that "smart person believes silly thing for silly reasons" is pretty common.

Replies from: david-gould
comment by David Gould (david-gould) · 2023-10-31T01:12:40.482Z · LW(p) · GW(p)

True. Unless there were very good arguments/very good evidence for one side or the other. My expectation is that for any random hypothesis there will be lots of disagreement about it among experts. For a random hypothesis with lots of good arguments/good evidence, I would expect much, much less disagreement among experts in the field.

If we look at climate change, for example, the vast majority of experts agreed about it quite early on - within 15 years of the Charney report.

If all I am left with, however, is 'smart person believes silly thing for silly reasons' then it is not reasonable for me as a lay person to determine which is the silly thing. Is 'AI poses no (or extremely low) x-risk' the silly thing, or is 'AI poses unacceptable x-risk' the silly thing?

If AI does indeed pose unacceptable x-risk and there are good arguments/good evidence for this, then there also has to be a good reason or set of reasons why many experts are not convinced. (Yann claims, for example, that the AI experts arguing for AI x-risk are a very small minority and Eliezer Yudkowsky seems to agree with this).
 

Replies from: philh
comment by philh · 2023-10-31T10:05:57.335Z · LW(p) · GW(p)

If we look at climate change, for example, the vast majority of experts agreed about it quite early on—within 15 years of the Charney report.

So I don't know much about timelines of global warming or global warming science, but I note that that report came out in 1979, more than 100 years after the industrial revolution. So it's not clear to me that fifteen years after that counts as "quite early on", or that AI science is currently at a comparable point in the timeline. (If points in these timelines can even be compared.)

If all I am left with, however, is ‘smart person believes silly thing for silly reasons’ then it is not reasonable for me as a lay person to determine which is the silly thing.

FWIW I think even relatively-lay people can often detect silly arguments, even from people who know a lot more than them. Some examples where I think I've done that:

  • I remember seeing someone (possibly even Yann LeCun?) saying something along the lines of, AGI is impossible because of no free lunch theorems.
  • Someone saying that HPMOR's "you violated conservation of energy!" bit is dumb because something something quantum stuff that I didn't understand; and also because if turning into a cat violated conservation of energy, then so did levitating someone a few paragraphs earlier. I am confident this person (who went by the handle su3su2u1) knows a lot more about physics than me. I am also confident this second part was them being silly.
  • This comment [LW(p) · GW(p)].

So I'd suggest that you might be underestimating yourself.

But if you're right that you can't reasonably figure this out... I'm not sure there are any ways to get around that? Eliezer can say "Yann believes this because of optimism bias" and Yann can say "Eliezer believes this because of availability heuristic" or whatever, and maybe one or both of them is right (tbc I have not observed either of them saying these things). But these are both Bulverism.

It may be that Eliezer and Yann can find a double crux, something where they agree: "Eliezer believes X, and if Eliezer believed not-X then Eliezer would think AGI does not pose a serious risk. Yann believes not-X, and if Yann believed X then Yann would think AGI does pose a serious risk." But finding such Xs is hard, I don't expect there to be a simple one, and even if there was it just punts the question: "why do these two smart people disagree on X?" It's possible X is in a domain that you consider yourself better able to have an opinion on, but it's also possible it's in one you consider yourself less able to have an opinion on.

If AI does indeed pose unacceptable x-risk and there are good arguments/​good evidence for this, then there also has to be a good reason or set of reasons why many experts are not convinced.

I basically just don't think there does have to be this.

(Yann claims, for example, that the AI experts arguing for AI x-risk are a very small minority and Eliezer Yudkowsky seems to agree with this)

Fwiw my sense is that this is false, and that Yann might believe it but I don't expect Eliezer to. But I don't remember what I've seen that makes me think this. (To some extent it might depend on who you count as an expert and what you count as arguing for x-risk.)

Replies from: david-gould
comment by David Gould (david-gould) · 2023-10-31T21:11:58.346Z · LW(p) · GW(p)

Re timelines for climate change, in the 1970s, serious people in the field of climate studies started suggesting that there was a serious problem looming. A very short time later, the entire field was convinced by the evidence and argument for that serious risk - to the point that the IPCC was established in 1988 by the UN.

When did some serious AI researchers start to suggest that there was a serious problem looming? I think in the 2000s. There is no IPCC-equivalent for AI x-risk.

And, yes: I can detect silly arguments in a reasonable number of cases. But I have not been able to do so in this case as yet (in the aggregate). It seems that there are possibly good arguments on both sides.
 

It is indeed tricky - I also mentioned that it could get into a regress-like situation. But I think that if people like me are to be convinced it might be worth the attempt. As you say, there may be a more accessible to me domain in there somewhere.


Re the numbers, Eliezer seems to claim that the majority of AI researchers believe in X-risk, but few are speaking out for a variety of reasons. This boils down to me trusting Eliezer's word about the majority belief, because that majority is not speaking out. He may be motivated to lie in this case - note that I am not saying that he is, but 'lying for Jesus' (for example) is a relatively common thing. It is also possible that he is not lying but is wrong - he may have talked to a sample that was biased in some way.
 

Replies from: philh
comment by philh · 2023-11-01T17:41:24.595Z · LW(p) · GW(p)

Re timelines for climate change, in the 1970s, serious people in the field of climate studies started suggesting that there was a serious problem looming. A very short time later, the entire field was convinced by the evidence and argument for that serious risk—to the point that the IPCC was established in 1988 by the UN.

When did some serious AI researchers start to suggest that there was a serious problem looming? I think in the 2000s. There is no IPAIX-risk.

Nod. But then, I assume by the 1970s there was already observable evidence of warming? Whereas the observable evidence of AI X-risk in the 2000s seems slim. Like I expect I could tell a story for global warming along the lines of "some people produced a graph with a trend line, and some people came up with theories to explain it", and for AI X-risk I don't think we have graphs or trend lines of the same quality.

This isn't particularly a crux for me btw. But like, there are similarities and differences between these two things, and pointing out the similarities doesn't really make me expect that looking at one will tell us much about the other.

I think that if people like me are to be convinced, it might be worth the attempt. As you say, there may be a domain in there somewhere that is more accessible to me.

Not opposed to trying, but like...

So I think it's basically just good to try to explain things more clearly and to try to get to the roots of disagreements. There are lots of forms this can take. We can imagine a conversation between Eliezer and Yann, or between people who respectively agree with them. We can imagine someone currently unconvinced having individual conversations with each side. We can imagine discussions playing out through essays written over the course of months. We can imagine FAQs written by each side giving their answers to the common objections raised by the other. I like all these things.

And maybe in the process of doing these things we eventually find a "they disagree because ..." that helps it click for you or for others.

What I'm skeptical about is trying to explain the disagreement rather than discover it. That is, I think "asking Eliezer to explain what's wrong with Yann's arguments" works better than "asking Eliezer to explain why Yann disagrees with him". I think answers I expect to the second question basically just consist of "answers I expect to the first question" plus "Bulverism".

(Um, having written all that I realize that you might just have been thinking of the same things I like, and describing them in a way that I wouldn't.)

comment by Rusins (raitis-krikis-rusins) · 2023-10-30T00:18:58.880Z · LW(p) · GW(p)

Unfortunately I do not know why the people you mentioned might not see AI as a threat, but if I had to guess – people who are not worried are primarily thinking about short-term AI safety risks like disinformation from deepfakes, while people who are worried are thinking about super-intelligent AGI and instrumental convergence [? · GW], which necessitates solving the alignment problem.

comment by Pooka Mac (stephen-f) · 2023-10-26T13:58:27.355Z · LW(p) · GW(p)

The presumption here is that civilisation is run by governments that are chaotic and low-competence. If this is true, there is clearly a problem with implementing an AI lockdown policy. It would be great to identify the sort of political or economic steps needed to execute the shutdown.

comment by Roko · 2024-07-15T22:08:50.540Z · LW(p) · GW(p)

Title changed from

"Architects of Our Own Demise: We Should Stop Developing AI"

to

"Architects of Our Own Demise: We Should Stop Developing AI Carelessly"

comment by Review Bot · 2024-02-16T14:28:41.255Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by Foyle (robert-lynn) · 2023-10-26T07:36:39.618Z · LW(p) · GW(p)

Global compliance is the sine qua non of regulatory approaches, and there is no evidence of the political will to make that happen being within our possible futures unless some catastrophic but survivable casus belli happens to wake the population up - as with Frank Herbert's Butlerian Jihad. (Irrelevant aside: Samuel Butler, who wrote of the dangers of machine evolution and supremacy in the 19th century, lived at what later became the filming location for Edoras in the Lord of the Rings films.)

Is it insane to think that a limited nuclear conflict (as seems an increasingly likely possibility at the moment) might actually raise humanity's chances of long-term survival - if it disrupted global economies severely for a few decades and, in particular, messed up chip production?

Replies from: Roko, Vaniver, akarlin
comment by Roko · 2023-10-26T11:55:34.919Z · LW(p) · GW(p)

Global compliance is the sine qua non of regulatory approaches, and there is no evidence of the political will to make that happen being within our possible futures unless some catastrophic but survivable casus belli happens to wake the population up

Part of why I am posting this is in case that happens, so people are clear what side I am on.

comment by Vaniver · 2023-10-26T19:23:25.966Z · LW(p) · GW(p)

unless some catastrophic but survivable casus belli happens to wake the population up 

Popular support is already >70% for stopping development of AI. Why think that's not enough, and that populations aren't already awake?

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T00:14:54.207Z · LW(p) · GW(p)

Well, my model says that what really matters is the opinions of the power-wielding decision makers, and that 'popular opinion' doesn't actually carry much weight in deciding what the US government does. Much less the Chinese government, or the leadership of large corporations.

So my view is that it is the decision-makers currently imagining that the poisoned banana will grant them increased wealth & power who need their minds changed. 

Replies from: Vaniver
comment by Vaniver · 2023-10-27T00:49:59.818Z · LW(p) · GW(p)

So my view is that it is the decision-makers currently imagining that the poisoned banana will grant them increased wealth & power who need their minds changed. 

My current sense is that efforts to reach the poisoned banana are mostly not driven by politicians. It's not like Joe Biden or Xi Jinping are pushing for AGI, and even Putin's comments on AI look like near-term surveillance / military stuff, not automated science and engineering.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T01:00:55.283Z · LW(p) · GW(p)

Yeah, I agree that that's what the current situation looks like: more tech CEOs making key decisions than politicians. However, I think the strategic landscape may change quite quickly once real-world effects become more apparent. In either case, I think it's the set of decision makers holding the reins (whoever that turns out to be) who need to be updated. I'm pretty sure that the 'American Public' or 'European Public' could have an influence, but probably not at the level of simply answering 'AI is scary' on a poll. Probably there'd need to be, like, widespread riots.

comment by akarlin · 2023-10-26T11:14:58.312Z · LW(p) · GW(p)

It's not at all insane IMO. If AGI is "dangerous" x timelines are "short" x anthropic reasoning is valid...

... Then WW3 will probably happen "soon" (2020s).

https://twitter.com/powerfultakes/status/1713451023610634348

I'll develop this into a post soonish.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-27T00:16:28.674Z · LW(p) · GW(p)

I'm hopeful that the politicians of the various nations who might initiate this conflict can see how badly that would turn out for them personally, and thus find sufficient excuses to avoid rushing into that scenario. Not certain by any means, but hopeful. There certainly will need to be some tense negotiations, at the least.