Why is violence against AI labs a taboo?
post by ArisC · 2023-05-26T08:00:59.314Z · LW · GW · 11 comments
This is a question post.
Contents
Answers: jimrandomh (59), lmaowell (20), Jayson_Virissimo (13), David Hornbein (10), shminux (9), Dagon (4), Christopher King (3), Roko (2), Gerald Monroe (2), Rika (1), Chinese Room (-2)
11 comments
People like Eliezer are annoyed when people suggest their rhetoric hints at violence against AI labs and researchers. But even if Eliezer & co don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it a taboo?
(By violence I don't necessarily mean physical violence - I mean more general efforts to disrupt AI progress, including e.g. with coordinated cyber attacks.)
Answers
I think there is some value in exploring the philosophical foundations of ethics, and LessWrong culture is often up for that sort of thing. But, it's worth saying explicitly: the taboo against violence is correct, and has strong arguments for it from a wide variety of angles. People who think their case is an exception are nearly always wrong, and nearly always make things worse.
(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)
↑ comment by ArisC · 2023-05-27T06:39:00.059Z · LW(p) · GW(p)
Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence.)
↑ comment by M. Y. Zuo · 2023-05-27T00:44:15.971Z · LW(p) · GW(p)
(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)
Can you clarify this? I think most people would agree that lawsuits do count as explicitly sanctioned violence beyond some low threshold, especially in a loser-pays jurisdiction. As in, that's the intended purpose of the idea: to let the victor rely on the state's monopoly on violence instead of their private means.
↑ comment by Timothy Underwood (timothy-underwood-1) · 2023-05-29T11:41:33.548Z · LW(p) · GW(p)
You don't become generally viewed by society as a defector when you file a lawsuit. Private violence defines you in that way, and thus marks you as an enemy of ethical cooperators, which is unlikely to be a good long term strategy.
↑ comment by M. Y. Zuo · 2023-05-29T18:22:46.509Z · LW(p) · GW(p)
Someone, or some group, whose moral/ethical/social positions are seen as objectionable by a substantial fraction of society can nonetheless win lawsuits and rely on the state's violent means for enforcement of the awards.
e.g. a major oil company winning a lawsuit against activists, forcing environmental degradation of some degree.
or vice versa,
e.g. nudist lifestyle and porn activists winning lawsuits against widely supported restrictions on virtual child porn, forcing a huge expansion in the effective grey area of child porn.
The losing side being punished may even be the more efficient, effective, engaged, etc., 'ethical cooperators' in relative terms, and yet it nonetheless receives the violence without any noticeable change in public sentiment regarding the judiciary.
It's not that violence against AI labs is a taboo... it's that violence is a taboo.
↑ comment by the gears to ascension (lahwran) · 2023-05-26T12:02:01.807Z · LW(p) · GW(p)
This is a commonly cited failure of deontology and in particular classical liberalism. Whether physical violence is morally justified, whether it's justified by local law, whether it's justified by international rules of war, whether it's effective, and whether it's a mechanistically understandable response from victims of harm, from a behaviorist perspective, are all different questions. I typically answer that most violence is ineffective, and yet that the motivations can be mechanistically understood as arising from locally reasonable mechanisms of thought; most violence is illegal, but most legal systems commit large amounts of physical violence to which any form of retaliation at all is legally unavailable; and most legal systems are implemented by people who disobey their own laws regularly. Does this morally justify violence? I abstain; violence is always a tragedy even if morally justified and effective. But it morally compels immense effort to build a healthier network of locally empowered personal control of personal outcomes, which is absolutely contrary to how current orgs are designed, and orgs should fear mass violence from morally ambiguous but mechanistically understandable mass retaliation should mass unemployment result in mass death. Altman has even said something similar recently, though less directly!
I doubt violence is effective now, but we shouldn't encourage people to tie their hands behind their backs either, as strategic ambiguity on the part of the world's population is a critical component of the game theoretic pressure on labs to prevent harms themselves.
↑ comment by ArisC · 2023-05-26T09:44:04.927Z · LW(p) · GW(p)
So, you would have advocated against war with Nazi Germany?
↑ comment by lmaowell (lowell-weisbord) · 2023-05-26T09:47:53.687Z · LW(p) · GW(p)
I'm sorry if my point wasn't made clearly. Things are taboo because of social customs & contexts; my point wasn't meant to be normative — just to point out that the taboo isn't against violence against AI labs, it's against violence more broadly.
↑ comment by ArisC · 2023-05-26T10:59:14.423Z · LW(p) · GW(p)
Yes but what I'm saying is that this isn't true - few people are absolute pacifists. So violence in general isn't taboo - I doubt most people object to things like laws (which ultimately rely on the threat of violence).
So why is it that violence in this specific context is taboo?
↑ comment by Waldvogel · 2023-05-26T19:47:39.341Z · LW(p) · GW(p)
Because it's illegal.
↑ comment by ArisC · 2023-05-26T21:50:48.898Z · LW(p) · GW(p)
This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?
↑ comment by Waldvogel · 2023-05-26T22:54:06.618Z · LW(p) · GW(p)
You asked why this sort of violence is taboo, not whether we should break that taboo or not. I'm merely answering your question ("Why is violence in this specific context taboo?"). The answer is because it's illegal. Everyone understands, either implicitly or explicitly, that the state has a monopoly on violence. Therefore all extralegal violence is taboo. This is a separate issue from whether that violence is moral, just, necessary, etc.
↑ comment by M. Y. Zuo · 2023-05-27T01:12:29.193Z · LW(p) · GW(p)
Everyone understands, either implicitly or explicitly, that the state has a monopoly on violence.
Not true.
For example, many organizations in Mexico do not recognize that the Mexican state has a monopoly on violence. And they actively bring violence upon those who try to claim it on behalf of the state, sometimes successfully.
Consider the following rhetorical question:
Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it a taboo?
Do we expect the answer to this to be any different for vegans than for AI-risk worriers?
↑ comment by ArisC · 2023-05-26T19:04:33.015Z · LW(p) · GW(p)
Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)
↑ comment by simon · 2023-05-26T19:47:51.652Z · LW(p) · GW(p)
If you hypothetically have a situation where it's 100% clear that the human race will go extinct unless a violent act is committed, and it seems likely that the violent act would prevent human extinction, then, in that hypothetical case, that would be a strong consideration in favour of committing the violent act.
In reality though, this clarity is extremely unlikely, and unilateral actions are likely to have negative side effects. Moreover, even if you think you have such clarity, it's likely that you are mistaken, and the negative side effects still apply no matter how well justified you personally thought your actions were, if others don't agree.
↑ comment by ArisC · 2023-05-26T21:52:53.114Z · LW(p) · GW(p)
OK, so then AI doomers admit it's likely they're mistaken?
(Re side effects, no matter how negative they are, they're better than the alternative; and it doesn't even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)
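To spell out the expected-value arithmetic being invoked here, as a minimal sketch (V and C are illustrative placeholders, not figures from anyone's argument):

$$\Delta \mathbb{E}[\text{value}] = q \cdot V - C$$

where $q$ is the probability that the action actually prevents extinction, $V$ is the value of averting extinction, and $C$ is the cost of the side effects. If $V$ is treated as effectively unbounded relative to $C$, the difference is positive for any $q > 0$; the disagreement in the replies is over whether $q$ is really positive rather than negative, and whether $C$ is really separable from the extinction risk itself.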
↑ comment by Raemon · 2023-05-27T02:04:49.696Z · LW(p) · GW(p)
You're assuming "the violence might or might not stop extinction, but then there will be some side-effects (that are unrelated to extinction)". But, my concrete belief is that most acts of violence you could try to commit would probably make extinction more likely, not less, because a) they wouldn't work, and b) they destroy the trust and coordination mechanisms necessary for the world to actually deal with the problem.
To spell out a concrete example: someone tries bombing an AI lab. Maybe they succeed, maybe they don't. Either way, they didn't actually stop the development of AI because other labs will still continue the work. But now, when people are considering who to listen to about AI safety, the "AI risk is high" people get lumped in with crazy terrorists and sidelined.
↑ comment by ArisC · 2023-05-27T05:21:44.152Z · LW(p) · GW(p)
But when you say extinction will be more likely, you must believe that the probability of extinction is not 1.
↑ comment by the gears to ascension (lahwran) · 2023-05-27T06:49:49.532Z · LW(p) · GW(p)
Well... Yeah? Would any of us care to build knowledge that improves our odds if our odds were immovably terrible?
↑ comment by ArisC · 2023-05-27T06:59:23.681Z · LW(p) · GW(p)
I don't know! I've certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn't stop progress, but would slow it down - and if you think there is a chance alignment will be solved, the more time you buy the better.
↑ comment by Timothy Underwood (timothy-underwood-1) · 2023-05-29T11:44:56.311Z · LW(p) · GW(p)
If you think P(doom) is 1, you probably don't believe that terrorist bombing of anything will do enough damage to be useful. That is probably one of EY's cruxes on violence.
↑ comment by simon · 2023-05-27T01:42:54.099Z · LW(p) · GW(p)
I am not an extreme doomer, but part of that is that I expect that people will face things more realistically over time - something that violence, introducing partisanship and division, would set back considerably. But even for an actual doomer, the "make things better through violence" option is not an especially real option.
You may have a fantasy of choosing between these options:
- doom
- heroically struggle against the doom through glorious violence
But you are actually choosing between:
- a dynamic that's likely by default to lead to doom at some indefinite time in the future by some pathway we can't predict the details of until it's too late
- make the situation even messier through violence, stirring up negative attitudes towards your cause, especially among AI researchers but also among the public, making it harder to achieve any collective solution later, sealing the fate of humanity even more thoroughly
Let me put it this way. To the extent that you have p(doom) = 1 - epsilon, where is epsilon coming from? If it's coming from "terrorist attacks successfully stop capability research" then I guess violence might make sense from that perspective but I would question your sanity. If relatively more of that epsilon is coming from things like "international agreements to stop AI capabilities" or "AI companies start taking x-risk more seriously", which I would think would be more realistic, then don't ruin the chances of that through violence.
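Written as a rough decomposition (illustrative only, treating the survival pathways as roughly disjoint):

$$\epsilon = P(\text{survival}) \approx P(\text{treaties work}) + P(\text{labs take x-risk seriously}) + P(\text{attacks stop capabilities}) + \ldots$$

The claim is that most of the epsilon lives in the first two terms, and that unilateral violence shrinks them by more than it adds to the third, so even at p(doom) = 1 - epsilon the expected effect of violence on survival is negative.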
↑ comment by ArisC · 2023-05-27T06:44:20.612Z · LW(p) · GW(p)
Except that violence doesn't have to stop the AI labs, it just has to slow them down: if you think that international agreements yada yada have a chance of success, and given this takes time, then things like cyber attacks that disrupt AI research can help, no?
↑ comment by simon · 2023-05-27T15:02:01.857Z · LW(p) · GW(p)
I think you are overestimating the efficacy and underestimating the side effects of such things. How much do you expect a cyber attack to slow things down? Maybe a week if it's very successful? Meanwhile it still stirs up opposition and division, and puts diplomatic efforts back years.
As the gears to ascension notes, non-injurious acts of aggression share many game-theoretic properties with physical violence. I would express the key issue here as legitimacy; if you don't have legitimacy, acting unilaterally puts you in conflict with the rest of humanity and doesn't get you legitimacy, but once you do have legitimacy you don't need to act unilaterally: you can get a ritual done that causes words to be written on a piece of paper, whereupon people with badges and guns will come to shut down labs that do things forbidden by those words. Cool huh? But if someone just goes ahead and takes illegitimate unilateral action, or appears to be too willing to do so, that puts them into a conflict position where they and people associated with them won't get to do the legitimate thing.
↑ comment by the gears to ascension (lahwran) · 2023-05-27T07:00:01.227Z · LW(p) · GW(p)
Everyone has been replying as though you mean physical violence; non-injurious acts of aggression don't qualify as violence unambiguously, but share many game theoretic properties. If classical liberal coordination can be achieved even temporarily it's likely to be much more effective at preventing doom.
↑ comment by the gears to ascension (lahwran) · 2023-05-27T00:08:48.952Z · LW(p) · GW(p)
Even in a crowd of AI doomers, no one person speaks for AI doomers. But plenty think it likely they're mistaken somehow. I personally just think the big labs aren't disproportionately likely to be the cause of an extinction-strength AI, so violence is overdeterminedly off the table as an effective strategy, before even considering whether it's justified, legal, or understandable. The only way we solve this is by constructing the better world.
↑ comment by ArisC · 2023-05-27T06:42:31.788Z · LW(p) · GW(p)
If it's true AI labs aren't likely to be the cause of extinction, why is everyone upset at the arms race they've begun?
You can't have it both ways: either the progress these labs are making is scary - in which case anything that disrupts them (and hence slows them down even if it doesn't stop them) is good - or they're on the wrong track, in which case we're all fine.
↑ comment by the gears to ascension (lahwran) · 2023-05-27T06:56:10.961Z · LW(p) · GW(p)
I refer back to the first sentence of the message you're replying to. I'm not having it both ways; you're confusing different people's opinions. My view is that the only thing remarkable about labs is that they get to this slightly sooner by having bigger computers; even killing everyone at every big lab wouldn't undo how much compute there is in the world, so it at most buys a year, at an intense cost to rule morality and to knowledge of how to stop disaster. If you disagree with an argument someone else made, lay it out, please. I probably simply never agreed with the other person's doom model anyway.
You can imagine an argument that goes "Violence against AI labs is justified in spite of the direct harm it does, because it would prevent progress towards AGI." I have only ever heard people say that someone else's views imply this argument, and never heard anyone actually advance it sincerely; nevertheless the hypothetical argument is at least coherent.
Yudkowsky's position is that the argument above is incorrect because he denies the premise that using violence in this way would actually prevent progress towards AGI. See e.g. here and the following dialogue. (I assume he also believes in the normal reasons why clever one-time exceptions to the taboo against violence are unpersuasive.)
↑ comment by ArisC · 2023-05-27T06:48:35.394Z · LW(p) · GW(p)
Well, it's clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI - in which case every day they're not working on it, because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated, is a day that progress is not being made - or you think they're not making progress anyway, in which case why are you worried?
↑ comment by Steven Byrnes (steve2152) · 2023-05-27T09:51:29.056Z · LW(p) · GW(p)
I strongly disagree with "clearly not true" because there are indirect effects too. It is often the case that indirect effects of violence are much more impactful than direct effects, e.g. compare 9/11 with the resulting wars in Afghanistan & Iraq.
I addressed a general question like that in https://www.lesswrong.com/posts/p2Qq4WWQnEokgjimy/respect-chesterton-schelling-fences [LW · GW]
Basically, guardrails exist for a reason, and you are generally not smart enough to predict the consequences of removing them. This applies to most suggestions of the form "why don't we just <do some violent thing> to make the world better". There are narrow exceptions where breaking a guardrail has actual rather than imaginary benefits, but finding them requires a lot of careful analysis and modeling.
↑ comment by ArisC · 2023-05-26T19:05:22.174Z · LW(p) · GW(p)
Isn't the prevention of the human race one of those exceptions?
↑ comment by Shmi (shminux) · 2023-05-27T00:37:42.084Z · LW(p) · GW(p)
You don't know enough to accurately decide whether there is a high risk of extinction. You don't know enough to accurately decide whether a specific measure you advocate would increase or decrease it. Use epistemic modesty to guide your actions. Being sure of something you cannot derive from first principles, as opposed to merely parroting select other people's arguments, is a good sign that you are not qualified.
One classic example is the environmentalist movement accelerating anthropogenic global climate change by being anti-nuclear energy. If you think you are smarter now about AI dangers than they were back then about climate, it is a red flag.
↑ comment by ArisC · 2023-05-27T06:46:30.626Z · LW(p) · GW(p)
But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.
↑ comment by Shmi (shminux) · 2023-05-27T08:36:33.742Z · LW(p) · GW(p)
Eliezer discussed it multiple times, quite recently on Twitter and on various podcasts. Other people did, too.
↑ comment by the gears to ascension (lahwran) · 2023-05-27T07:04:27.998Z · LW(p) · GW(p)
I think you accidentally humanity
↑ comment by Waldvogel · 2023-05-26T19:40:50.891Z · LW(p) · GW(p)
If you have perfect foresight and you know that action X is the only thing that will prevent the human race from going extinct, then maybe action X is justified. But none of those conditions apply.
↑ comment by ArisC · 2023-05-27T06:45:33.169Z · LW(p) · GW(p)
That's not true - we don't make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing - it's a matter of expected value.
Umm, because individual non-government-sanctioned violence is horrific, and generally results in severe punishment which prevents longer-term action. Oh, wait, that's why it's not used, not why it's taboo to even discuss.
It's taboo for discussion because serious planning for violence is a direct crime (conspiracy) itself. Don't do that. Open advocacy of violence also signals that, by your rules, it's OK for others to target you for violence if they disagree strongly enough. I don't recommend that, either (especially if you think your opponents are better at violence than you are).
↑ comment by ArisC · 2023-05-27T06:40:30.297Z · LW(p) · GW(p)
Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?
↑ comment by the gears to ascension (lahwran) · 2023-05-27T06:44:07.288Z · LW(p) · GW(p)
Do you think this comparison is a good specific exemplar for the AI case, such that you'd suggest they should have the same answer, or do you bring it up simply to check calibration? I do agree that it's a valid calibration to check, but I'm curious whether you're claiming capabilities research is horrific to the same order of magnitude.
↑ comment by ArisC · 2023-05-27T06:57:51.331Z · LW(p) · GW(p)
I am bringing it up for calibration. As to whether it's the same magnitude of horrific: in some ways, it's higher magnitude, no? Even Nazis weren't going to cause human extinction - of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...
↑ comment by the gears to ascension (lahwran) · 2023-05-27T07:11:01.708Z · LW(p) · GW(p)
No, but intentional malice is much harder to dissuade nonviolently.
Because it's anti-social (in most cases; things like law enforcement are usually fine), and the only good timelines (by any metric) are pro-social.
Consider if it became like the Troubles in Ireland. Do you think alignment gets solved in this environment? No. What you get is people creating AI war machines. And they don't bother with alignment because they are trying to get an advantage over the enemy, not benefit everyone. Everyone is incentivised to push capabilities as far as they can, except past the singularity threshold. And there's not even a disincentive for going past it; you're just neutral on it. So the dangerous bit isn't even that the AIs are war machines, it's that they are unaligned.
It's a general principle that anti-social acts tend to harm utility overall due to second-order effects that wash out the short-sighted first-order effects. Alignment is an explicitly pro-social endeavor!
I think violence helps unaligned AI more than it helps aligned AI.
If the research all goes underground it will slow it down but it will also make it basically guaranteed that there's a competitive, uncoordinated transition to superintelligence.
When Eliezer proposes "turn all the GPUs to Rubik's cubes", this pivotal act I think IS outright violence. Nanotechnology doesn't work that way (something something local forces dominate). What DOES work is having nearly unlimited drones because they were manufactured by robots that made themselves exponentially, making ASI-equipped parties have more industrial resources than the entire world's capacity right now.
Whoever has "nearly unlimited drones" is a State, and is committing State Sponsored Violence which is OK. (By the international law of "whatcha gonna do about it")
So the winners of an AI race with their "aligned" allied superintelligence actually manufactured enough automated weapons to destroy everyone else's AI labs and to place the surviving human population under arrest.
That's how an AI war actually ends. If this is how it goes (and remember, this is a future humans "won"), this is what happens.
The amount of violence before the outcome depends on the relative resources of the warring sides.
ASI singleton case: nobody has to be killed; billions of drones using advanced technology attack everywhere on the planet at once. Decision makers are bloodlessly placed under arrest, guards are tranquilized, the drones have perfect aim so guns are shot out of hands and engines on military machines are hit with small shaped charges. The only violence where humans die is in the assaults on nuclear weapons facilities, since math.
Some nukes may be fired on the territory of the nation hosting the ASI; this kills a few million tops, "depending on the breaks".
Two warring parties case, where one party's ASI or industrial resources are significantly weaker: nuclear war and a prolonged, endless series of battles between drones. Millions or billions of humans killed as collateral damage, battlefields littered with nuclear blast craters and destroyed hardware. A "minor inconvenience" for the winning side: since they have exponentially built robotics, the cleanup is rapid.
Free-for-all case, where everyone gets ASI and it's not actually all that strong in utility terms: outcomes range from a world of international treaties similar to now and a stable equilibrium, to a world war that consumes the earth in which most humans don't survive. Again, it's a minor inconvenience for the winners. No digital data is lost, and exponentially replicated robotics mean the only long-term cost is a few years to clean up.
↑ comment by the gears to ascension (lahwran) · 2023-05-27T07:01:47.689Z · LW(p) · GW(p)
I'd suggest reading DeepMind's recent inter-org paper on model evaluation for extreme risks. What you describe as the success case I agree is necessary for success, but without sufficient alignment of each person's personal ASI to actually guarantee it will in fact defend against malicious and aggressive misuse of AI by others, you're just describing filling the world with loose gunpowder.
If someone thinks that violence against AI labs is bad, then they will make it a taboo because they think it is bad, and they don't want violent ideas to spread.
There are a lot of interesting discussions to be had on why one believes this category of violence to be bad, and you can argue against these perspectives in a fairly neutral-sounding, non-stressful way, quite easily, if you know how to phrase yourself well.
A lot of (although not all) people are fairly open to this.
If someone thinks that violence against AI labs is good, then they probably really wouldn't want you talking about it on a publicly accessible, fairly well-known website. It's a very bad strategy from most pro-violence perspectives.
I'm going to quite strongly suggest, regardless of anyone's perspectives on this topic, that you probably shouldn't discuss it here - there are very few angles from which this could be imagined to be a good thing for any rationalism-associated person/movement. Or at least that you put a lot of thought into how you talk about it. Optics are a real and valuable thing, as annoying as that is.
Even certain styles of discussing anti-violence can come across as optically weird if you phrase yourself in certain ways.
Perhaps they prefer not to be held responsible when it happens.
11 comments
Comments sorted by top scores.
comment by Mitchell_Porter · 2023-05-27T06:22:43.833Z · LW(p) · GW(p)
I try to adhere to the principle that "there are no stupid questions", but this question, if not necessarily stupid, is definitely annoying.
Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?
The world is full of things which are terrible, or which someone believes to be terrible. If someone, whether through action or inaction, is enabling a process that you think might kill you or cripple you or otherwise harm you, or people you care about - et cetera - then yes, violence naturally comes to mind.
But there are obvious reasons to be cautious about it, and to be cautious about talking about it. If you do it, you may end up dead or in jail. Despite your emotions, your reason may tell you that a single act of violence won't actually make any difference. You may be afraid of unleashing something that goes in a completely different direction - violence, once unleashed, has a way of doing that.
On top of that, if you're a civilized person, you don't ever want to resort to violence in the first place.
... OK, with that off my chest: if I do try to empathize with the spirit in which this question might have been asked, I imagine it as a young man's question, someone for whom the world is still their oyster, and someone who, while not an aggressive thug, is governed more by their private ethical code and their private sense of what is right and wrong, than by fear of the law or fear of social judgment or fear of unintended consequences. Willing to consider anything, and trusting their own discernment.
And then they stumble into this interesting milieu where people are really worked up about something. And the questioner, while remaining agnostic about the topic, is willing to think about it. But they notice that in all the discussion about this supposedly world-threatening matter, no one is talking about just killing the people who are the root of the problem, or blowing up their data centers, or whatever. And so the questioner says, hey guys, if this thing is really such a great danger, why aren't you brainstorming how to carry out these kind of direct actions too?
I've already provided a few reasons why one might not go down that path. But the other side of the coin is, if there are people on that path, they won't be talking about it in public. We'll just wake up one day, and the "unthinkable" will have happened, the same way that we all woke up one day and Russia had invaded Ukraine, or the ex PM of Japan had been assassinated.
↑ comment by ArisC · 2023-05-27T06:55:09.774Z · LW(p) · GW(p)
Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?
They're not the same. None of these are extinction events; if preventing the extinction of the human race doesn't legitimise violence, what does? (And if you say "nothing", does that mean you don't believe in the enforcement of laws?)
Basically, I can't see a coherent argument against violence that's not predicated either on a God, or on humanity's quest for 'truth' or ideal ethics; and the latter is obviously cut short if humans go extinct, so it wouldn't ban violence to prevent this outcome.
↑ comment by Mitchell_Porter · 2023-05-27T07:59:53.039Z · LW(p) · GW(p)
OK, well, if people want to discuss sabotage and other illegal or violent methods of slowing the advance of AI, they now know to contact you.
↑ comment by Richard_Kennaway · 2023-05-27T13:35:32.745Z · LW(p) · GW(p)
As does law enforcement.
comment by Vladimir_Nesov · 2023-05-27T00:57:00.166Z · LW(p) · GW(p)
You write:
it does seem like violence is the logical conclusion of their worldview
It's not expected to be effective, as has been repeatedly pointed out, so it's not a valid conclusion. Only state-backed law/treaty enforcement has the staying power to coerce history. The question of why [LW(p) · GW(p)] it's taboo [LW(p) · GW(p)] is separate, but before that there is an issue with the premise.
comment by the gears to ascension (lahwran) · 2023-05-26T11:59:08.755Z · LW(p) · GW(p)
This site and community generally operate on classical liberal principles, in particular a heavy focus on norms about individual acts which are forbidden. Whether that's good is up for debate; folks here are very consequentialist within some constraints. There are also consequentialist arguments for nonviolence I've heard; in particular, check out Critch's recent post.
https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more [LW · GW]
comment by green_leaf · 2023-05-27T10:22:04.370Z · LW(p) · GW(p)
Because surviving worlds don't look like someone cyberattacking AI labs until AI alignment has been solved, they look like someone solving AI alignment in time before the world has been destroyed.
comment by benjaminikuta · 2023-06-29T22:41:48.844Z · LW(p) · GW(p)
Related, but I've talked with multiple rats who, after some convincing, basically admitted, "Yeah, assuming it would actually work, I suppose I actually would push the nuclear button, but I would never admit it, because saying so would have various negative effects."