Comments

Comment by mhampton on [deleted post] 2024-10-30T21:22:24.492Z

This is a comprehensive, nuanced, and well-written post. A few questions:

How likely do you think it is that, under a Harris administration, AI labs will successfully lobby Democrats to kill safety-oriented policies, as happened with SB 1047 at the state level? Even if Harris is on net better than Trump, this could greatly reduce the expected value of her presidency from an x-risk perspective.

Related to the above, is it fair to say that under either party there will need to be advocacy/lobbying for safety-focused policies on AI? If so, how do you make tradeoffs between this and the election? E.g., if someone has $x to donate, what percentage should they give to the election vs. other AI safety causes?

How much of your assessment of the difference in AI risk between Harris and Trump is due to the concrete AI policies you expect each of them to push, and how much to differences in competence and respect for democracy?

I can't find much information about the Movement Labs quiz and how it helps Harris win. Could you elaborate, privately if needed? If the quiz is simply matching voters with the candidate who best matches their values, is it because it will be distributed to voters who lean Democrat, or does its effectiveness come through a different path?

Comment by mhampton on [deleted post] 2024-10-26T16:14:34.022Z

Your reasoning makes sense with regard to how a more authoritarian government would make it more likely that we avoid x-risk, but how do you weigh that against the possibility, which the post author has alluded to, that an intent-aligned AGI (one willing to accept harmful commands) would be more likely to create s-risks in the hands of an authoritarian state?

Also, what do you make of the author's comment below?

  • In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear).

Comment by mhampton on Apocalypse insurance, and the hardline libertarian take on AI risk · 2024-06-07T20:25:20.436Z · LW · GW

Great post. I agree with almost all of this. What I am uncertain about is the idea that AI existential risk is a rights violation under the strictest understanding of libertarianism.

As another commenter has suggested, we can't claim that any externality creates rights to stop or punish a given behavior, or libertarianism turns into safetyism.[1] If we take the Non-Aggression Principle (NAP) as a common standard for a hardline libertarian view of which harms give you a right to restitution or retaliation, it seems that x-risk does not fit this definition.

1. The clearest evidence seems to be the following passage from Murray Rothbard:

“It is important to insist [...] that the threat of aggression be palpable, immediate, and direct; in short, that it be embodied in the initiation of an overt act. [...] Once we bring in "threats" to person and property that are vague and future--i.e., are not overt and immediate--then all manner of tyranny becomes excusable.” (The Ethics of Liberty, p. 78)

X-risk by its very nature falls into the category of "vague and future."

2. To take your specific example of flying planes over someone's house: Walter Block, a follower of Rothbard, has argued that this exact risk is not a violation of the non-aggression principle. He also states that risks from nuclear power are "legitimate under libertarian law" (p. 295).[2] If we consider AI analogous to these two risks, it seems Block would not agree that there is a right to seek compensation for x-risk.

3. Matt Zwolinski criticized the NAP for having an "all-or-nothing attitude toward risk," as it does not indicate what level of risk constitutes aggression. Another libertarian writer responded that a risk that constitutes a direct "threat" is aggression (e.g., pointing a pistol at someone, even if the victim is never shot), but that risks of accidental damage are not aggression unless these risks are imposed with threats of violence:

"If you don’t wish to assume the risk of driving, then don’t drive. And if you don’t want to run the risk of an airplane crashing into your house, then move to a safer location. (You don’t own the airspace used by planes, after all.)" 

This implies to me that Zwolinski's criticism is accurate with regard to accidents, which would rule out x-risk as a NAP violation.
 

Conclusion

This shows that at least some libertarians' understanding of rights does not include x-risk as a violation. I consider this to be a point against their theory of rights, not an argument against pursuing AI safety. The most basic moral instinct suggests that creating a significant risk of destroying all of humanity and its light-cone is a violation of the rights of each member of humanity.[3] 

While I think that excluding AI x-risk (and other risks and accidental harms) from its definition of proscribable harms makes the NAP too narrow, the question remains where to draw the line: which externalities or risks give victims a right to payment, and which do not? I'm curious where you draw it.

It is possible that I am misunderstanding something about libertarianism or x-risk, in which case the interpretation I have drawn here may not hold.

Anyway, thanks for articulating this proposal. 
 

  1. ^

    See also this argument by Alexander Volokh:

    "Some people’s happiness depends on whether they live in a drug-​free world, how income is distributed, or whether the Grand Canyon is developed. Given such moral or ideological tastes, any human activity can generate externalities […] Free expression, for instance, will inevitably offend some, but such offense generally does not justify regulation in the libertarian framework for any of several reasons: because there exists a natural right of free expression, because offense cannot be accurately measured and is easy to falsify, because private bargaining may be more effective inasmuch as such regulation may make government dangerously powerful, and because such regulation may improperly encourage future feelings of offense among citizens."

  2. ^

Block argues that it would be wrong for individuals to own nuclear weapons, but he does not make clear why that case differs meaningfully from nuclear power.

  3. ^

     And any extraterrestrials in our light-cone, if they have rights. But that's a whole other post. 

Comment by mhampton on St. Louis ACX Meetups Everywhere Spring 2024 · 2024-04-14T15:58:55.388Z · LW · GW

Hi, I'm interested in attending but am a bit unclear about the date and time based on how it is listed. Is the spring ACX meetup starting at 2:30 p.m. on May 11?