
Comments sorted by top scores.

comment by weverka · 2023-02-05T14:30:40.740Z · LW(p) · GW(p)

The ML engineer is developing an automation technology for coding and is aware of AI risks. The engineer's polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this.

Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead's realization that you don't need humans to cut rubylith film to form each transistor.

You haven't made an argument that this project will accelerate the scenario you describe. Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to all improvements in computing technology. You will get more traction if you can show more specifically how this project is "bad for the world".

Replies from: WilliamKiely
comment by WilliamKiely · 2023-02-06T03:55:56.216Z · LW(p) · GW(p)

Thanks for the response and for the concern. To be clear, the purpose of this post was to explore how much a typical, small AI project would affect AI timelines and AI risk in expectation. It was not intended as a response to the ML engineer, and as such I did not send it or any of its contents to him, nor comment on the quoted thread. I understand how inappropriate it would be to reply to the engineer's polite acknowledgment of the concerns with my long analysis of how many additional people will die in expectation due to the project accelerating AI timelines.

I also refrained from linking to the quoted thread, again because this post is not a contribution to that discussion. The thread merely inspired me to take a quantitative look at what the expected impacts of a typical ML project actually are. I included the details of the project for context in case others wanted to take them into account when forecasting the impact.

I also included Jim and Raymond's comments because this post takes their claims as givens. I understand the ML engineer may have been skeptical of their claims, and that elaborating on why the project is expected to accelerate AI timelines (and therefore increase AI risk) would be necessary to persuade them that the project is bad for the world, but again, that aim is outside the scope of this post.

I've edited the heading after "The trigger for this post" from "My response" to "My thoughts on whether small ML projects significantly affect AI timelines" to make clear that the contents are not intended as a response to the ML engineer, but rather are just my thoughts about the claim made by the ML engineer. I assume that heading is what led you to interpret this post as a response to the ML engineer, but if there's anything else that led you to interpret it that way, I'd appreciate you letting me know so I can improve it for others who might read it. Thanks again for reading and offering your thoughts.

comment by weverka · 2023-02-06T14:13:33.742Z · LW(p) · GW(p)

Why didn't you also compute the expectation this project contributes towards human flourishing?

If you only count the negative contributions, you will find that the expectation value of everything is negative. 

Replies from: WilliamKiely
comment by WilliamKiely · 2023-02-07T18:32:32.196Z · LW(p) · GW(p)

The main benefits of the project are presumably known to the engineer engaging in it. What I wanted to look at more closely was the harm of the project (specifically the harm arising from how the project accelerates AI timelines), which the engineer was skeptical was significant, to determine whether it was large enough to call into question whether engaging in the project was good for the world.

Given my finding that a 400-hour ML project (I stipulated the project takes 0.2 years of FTE work) would, via its effects on shortening AI timelines, shorten the lives of existing people by a total of around 17 years in expectation, it seems like this harm is not only non-trivial, but likely dominates the expected value of engaging in the project. This works out to shortening people's lives by around 370 hours for every hour worked on the project.
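
For concreteness, here's a quick sketch of that arithmetic (the conversion factors of roughly 2,000 work-hours per FTE-year and 8,766 hours per calendar year are assumptions on my part, not figures from the post):

```python
# Back-of-envelope check of the figures above. The conversion factors are
# my own assumptions (~2,000 work-hours per FTE-year, ~8,766 hours per
# calendar year), not numbers stated elsewhere in the post.

project_fte_years = 0.2
project_hours = project_fte_years * 2000               # ~400 hours of work

expected_life_years_lost = 17                          # expected total across existing people
expected_life_hours_lost = expected_life_years_lost * 8766

hours_lost_per_hour_worked = expected_life_hours_lost / project_hours
print(round(hours_lost_per_hour_worked))               # ~373, i.e. roughly 370
```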

If someone thinks the known benefits of working on the project are being drastically underestimated as well, I'd be interested in seeing an analysis of the expected value of those benefits, and in particular am curious which benefits that person thinks are surprisingly huge. Given the lack of a safety angle to the project, I don't see what other benefit (or harm) would come close in magnitude to the harm caused via accelerating AI timelines and increasing extinction risk, but of course I would love to hear if you have any idea.

Replies from: weverka
comment by weverka · 2023-02-08T13:59:48.820Z · LW(p) · GW(p)

You said nothing about positive contributions.  When you throw away the positives, everything is negative.  

comment by WilliamKiely · 2023-02-24T19:27:10.965Z · LW(p) · GW(p)

I just thought of a flaw in my analysis, which is that if it's intractable to make AI alignment more or less likely (and intractable to make the development of transformative AI more or less safe), then accelerating AI timelines actually seems good, because the benefits to people post-AGI if it goes well (utopian civilization for longer) seem to outweigh the harms to people pre-AGI if it goes badly (everyone on Earth dies sooner). Will think about this more.
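
As a rough sketch of that comparison (all numbers below are placeholders I made up for illustration, not estimates):

```python
# Minimal sketch of the trade-off described above, with made-up numbers.
# Holding the probability of a good outcome fixed (the "intractability"
# assumption), accelerating timelines is good in expectation iff the
# good-case benefit outweighs the bad-case harm.

p_good = 0.5                 # assumed fixed probability that AGI goes well
accel_years = 0.001          # years by which the project accelerates AGI (illustrative)

utopia_value_per_year = 100  # assumed value of one extra year of post-AGI flourishing
pre_agi_value_per_year = 1   # assumed value of one ordinary pre-AGI year of life

expected_value_of_acceleration = accel_years * (
    p_good * utopia_value_per_year            # good case: utopia arrives sooner, lasts longer
    - (1 - p_good) * pre_agi_value_per_year   # bad case: existing people die sooner
)

print(expected_value_of_acceleration > 0)     # True whenever the good-case term dominates
```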