Posts

To the average human, controlled AI is just as lethal as 'misaligned' AI 2024-03-14T14:52:43.570Z
Winners-take-how-much? 2023-05-29T21:56:45.505Z

Comments

Comment by YonatanK (jonathan-kallay) on Neutrality · 2024-11-18T01:52:18.487Z · LW · GW

There are two kinds of beliefs: those that can be affirmed individually (true independently of what others do) and those that depend on others acting as if they believe the same thing. The latter are, in other words, agreements. One should be careful not to conflate the two.

What you describe as "neutrality" seems to me a particular way of framing institutional forbearance and similar terms of cooperation in the face of the possibility of unrestrained competition and mutual destruction. When agreements collapse, it is not because these terms were unworkable (except in the trivial sense that, well, they weren't invulnerable to gaming and so on) but because cooperation between humans can always break down.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-18T16:26:28.884Z · LW · GW

@AnthonyC I may be mistaken, but I took @M. Y. Zuo to be offering a reductio ad absurdum response to your comment about not being indifferent between the two ways of dying. The 'which is a worse way to die' debate doesn't respond to what I wrote. I said:

With respect to the survival prospects for the average human, this [whether or not the dying occurs by AGI] seems to me to be a minor detail.

I did not say that no one should care about the difference. 

But the two risks are not in competition; they are complementary. If your concern about misalignment is based on caring about the continuation of the human species, and you don't actually care how many humans other humans would kill in a successful alignment(-as-defined-here) scenario, a credible humans-kill-most-humans risk is still really helpful to your cause: you can ally yourself with the many rational humans who don't want to be killed either way, and work with them to prevent both outcomes by killing AI in its cradle.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-15T03:33:49.144Z · LW · GW

You have a later response to some clarifying comments from me, so this may be moot, but I want to call out that my emphasis is on the behavior of human agents who are empowered by automation that may fall well short of AGI. A "pivotal act" is a very germane idea, but rather than the pivotal act of the first AGI eliminating would-be AGI competitors, this act is carried out by humans taking out their human rivals.

It is pivotal because once the target population size has been achieved, competition ends, and further development of the AI technology can be halted as unnecessarily risky.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-15T03:10:20.053Z · LW · GW

If an unaligned AI by itself can do near-world-ending damage, an identically powerful AI that is instead alignable to a specific person can do the same damage.

If you mean that as the simplified version of my claim, I don't agree that it is equivalent.

Your starting point, with a powerful AI that can do damage by itself, is wrong. My starting point is groups of people whom we would not currently consider to be sources of risk, who become very dangerous as novel weaponry and changes in the relations of economic production unlock both the means and the motive to kill very large numbers of people.

And (as I've tried to clarify in my other responses) the comparison of this scenario to misaligned-AI cases is not the point; the point is the threat from both sides of the alignment question.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-14T21:56:57.046Z · LW · GW

Thanks, title changed.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-14T21:49:22.684Z · LW · GW

I agree, and I attempted to emphasize the winner-take-all aspect of AI in my original post.

The intended emphasis isn't on which of the two outcomes is preferable, or how to comparatively allocate resources to prevent them. It's on the fact that there is no difference between alignment and misalignment with respect to the survival expectations of the average person.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-14T21:24:03.758Z · LW · GW

The title was intended as an ironic allusion to a slogan used by the National Rifle Association in the U.S. to dismiss calls for tighter restrictions on gun ownership. I expected the allusion to be easily recognizable, but I see now that it was probably a mistake.

Comment by YonatanK (jonathan-kallay) on To the average human, controlled AI is just as lethal as 'misaligned' AI · 2024-03-14T21:16:43.334Z · LW · GW

An argument for danger of human-directed misuse doesn't work as an argument against dangers of AI-directed agentic activity.

 

I agree. But I was not trying to argue against the dangers of AI-directed agentic activity. The thesis is not that "alignment risk" is overblown, nor is the comparison of the risks the point; it's that those risks accumulate such that the technology is guaranteed to be lethal for the average person. This is significant because the risk of misalignment is typically accepted on the assumption that the rewards will be broadly shared. "You or your children are likely to be killed by this technology, whether it works as designed or not" is a very different story from "there is a chance this will go badly for everyone, but if it doesn't it will be really great for everyone."
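
To make the accumulation explicit (illustrative numbers only, not estimates of anything):

P(average person survives) = q × (1 − p_misaligned) + (1 − q) × (1 − p_controlled)

where q is the probability that alignment fails, p_misaligned is the chance the average person is killed in that branch, and p_controlled is the chance they are killed at the direction of whoever controls an aligned AI. If both p's are high (say 0.95 and 0.9), survival comes out at 10% or less no matter what value q takes -- which is why arguing over q looks to me like a minor detail for the average person.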

Comment by YonatanK (jonathan-kallay) on How can the world handle the HAMAS situation? · 2023-12-31T23:05:59.592Z · LW · GW

I'm surprised by the lack of follow-up to this post and the accompanying thread, which took place in the immediate aftermath of the October 7th massacre. A lot has happened since then -- new data against which the original thinking could be evaluated. Time has also provided an opportunity to self-educate about the conflict, which a few people admitted to not knowing much about. Given the human misery that has only worsened since the OP started asking questions, I would think that a follow-up would be a worthy exercise. @Annapurna ?

Comment by YonatanK (jonathan-kallay) on Constellations are Younger than Continents · 2023-12-31T22:48:02.606Z · LW · GW

Ever since first hearing the music from the Disney movie "Encanto" I've been sneering at the lyrics "stars don't shine they burn/ and constellations shift" because, no, of course constellations don't shift. I never really stopped to think about it. Caught in my epistemic arrogance again!

Comment by YonatanK (jonathan-kallay) on How can the world handle the HAMAS situation? · 2023-10-14T21:39:52.838Z · LW · GW

Oops, Samuel beat me to the punch by 2 minutes.

Comment by YonatanK (jonathan-kallay) on How can the world handle the HAMAS situation? · 2023-10-14T21:39:14.732Z · LW · GW

You've already noted that it doesn't really matter, but I thought I'd help fill in the blanks.

The current global regime of sovereign nation-states that we take for granted is the product of the 20th century. It's not like an existing sovereign nation-state belonging to the Palestinians was carved up by external powers and arbitrarily handed to Jews. Rather, the disintegration of empires created opportunities for local nationalist movements to arise, creating new countries based on varying and competing unifying or dividing factors such as language, tribal associations, and sect. Palestinians and Zionist Jews both had nationalist aspirations during this period, and for various reasons the Zionists came out on top.

The idea that "the Palestinians were there first" is not particularly meaningful or accurate, especially given the historical fact of Judea and Israel as the birthplace of Judaism and the continuous presence of Jewish communities in the region, despite the many events contributing to the creation of a Jewish diaspora.

Comment by YonatanK (jonathan-kallay) on How can the world handle the HAMAS situation? · 2023-10-14T20:55:36.574Z · LW · GW

This response is not unreasonable, but the description of "WW2-style solution" seems ignorant of the fact that Israel did occupy Gaza for decades, and had something very similar to a "puppet government" there, in the form of the Fatah party in control of the Palestinian Authority in the West Bank. Israel unilaterally withdrew in 2005, and Hamas violently took over in 2007.

The rest of it operates under the hypothesis that Hamas is opposed to the objective interests of the Palestinians of Gaza. This ends up being tautological if objective self-interest is defined as 'not being killed during Israeli retaliation.' But this is a very narrow definition. There is a common saying that 'it is better to die on one's feet than to live on one's knees.' One need not drag in religious extremism or poor education as explanatory factors for an average Gazan viewing their own death under Israeli bombardment as an acceptable alternative to living with the indignity of the perpetual Israeli blockade of Gaza, to say nothing of the evisceration of dreams of Palestinian sovereignty.

Note that the preceding was not an endorsement of last week's attack; I'm just calling out the weaknesses in the depiction of Gazans as nothing but uneducated bomb fodder for the Hamas regime.

Comment by YonatanK (jonathan-kallay) on Winners-take-how-much? · 2023-05-30T22:14:16.958Z · LW · GW

A helpful counterpoint.

Comment by YonatanK (jonathan-kallay) on Minimum Viable Exterminator · 2023-05-29T22:02:46.357Z · LW · GW

Why would the human beings have to be suicidal, if they can also have the AI provide them with a vaccine?

Comment by YonatanK (jonathan-kallay) on Open & Welcome Thread - May 2023 · 2023-05-25T15:58:15.075Z · LW · GW

Thank you. If I understand your explanation correctly, you are saying that there are alignment solutions that are rooted in more general avoidance of harm to currently living humans. If these turn out to be the only feasible solutions to the not-killing-all-humans problem, then they will produce not-killing-most-humans as a side-effect. Nuke analogy: if we cannot build/test a bomb without igniting the whole atmosphere, we'll pass on bombs altogether and stick to peaceful nuclear energy generation.

It seems clear that such limiting approaches would be avoided by rational actors under winner-take-all dynamics, so long as other approaches remain that have not yet been falsified.

Follow-up question: does the "any meaningfully useful AI is also potentially lethal to its operator" assertion hold under the significantly different usefulness requirements of a much smaller human population? I'm imagining a limited AI that can only just "get the (hard) job done" of killing most people under the direction of its operators, and then support a "good enough" future for the remaining population -- which isn't the hard part, because the Earth itself is pretty good at supporting small human populations.

Comment by YonatanK (jonathan-kallay) on Open & Welcome Thread - May 2023 · 2023-05-21T21:28:48.839Z · LW · GW

Hi all, writing here to introduce myself and to test the waters for a longer post.

I am an experienced software developer by trade. I have an undergraduate degree in mathematics and a graduate degree in a field of applied moral and political philosophy. I am a Rawlsian by temperament but an Olsonian by observation of how the world seems to actually work. My knowledge of real-world AI is in the neighborhood of the layman's. I have "Learning Tensorflow.js" on my nightstand, which I promised my spouse not to start into until after the garage has been tidied.

Now for what brings me here: I have a strong suspicion that the AI Risk community is under-reporting the highly asymmetric risks from AI in favor of symmetric ones. This is not to deny that a misaligned AI that kills everyone is a scary problem that people should be worrying about. But it seems to me that the distribution of the "reward" in the risk-reward trade-off that gives rise to the misalignment problem in the first place needs more elucidation. If, for most people, the difference between AI developers "getting it right" and "getting it wrong" is being killed at the prompting of other people rather than at the self-direction of an AI, then the likelihood of the latter vs. the former is rather academic, is it not?

To use the "AI is like nukes" analogy: if misalignment is analogous to the risk of the whole atmosphere igniting in the first fission bomb test (in terms of a really bad outcome for everyone -- yes, I know the probabilities aren't equivalent), then successful alignment is the analogue of a large number of people getting a working-exactly-as-intended bomb dropped on their city, which, one would think, on July 15th 1945 would have been predicted as the likely follow-up to a successful Trinity test.

The disutility of a large human population in a world where human labor is becoming fungible with automation seems to me to be the fly in the ointment for anyone hoping to enjoy the benefits of all that automation.

That's the gist of it, but I can write a longer post. Apologies if I missed a discussion in which this concern was already falsified.