It's (not) how you use it

post by Eleni Angelou (ea-1) · 2022-09-07T17:15:52.493Z · LW · GW · 1 comments


Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/LwhzE3scZTqxERtNn/it-s-not-how-you-use-it [EA · GW] 

 

The phrase "technology isn't bad in itself, it's just how you use it" is commonplace and contains some truth. But I think it's a mistake to jump straight to judging how technological products are used without thinking about their design. Sure, it's intuitive to suppose that the choices humans make about how they interact with a technology play a decisive role in what purpose it ends up serving. My argument is that these choices have to be made earlier, during the design and production of a technology; they're not choices humans find themselves making once they've acquired it. At that point, it's usually too late.

In History & Philosophy of Science (HPS), this approach falls broadly into the camp of Marxist theories of the history of technology, in the sense that a technological product has a "purpose", an "end", and can carry intrinsic risks. For this type of theorizing, these risks primarily concern the inscription of social norms and regularities that change the dynamics within society. Translated into the EA framework, they might be existential or suffering risks, and could cost us the continuation of our species. It is, as a result, careless and irresponsible to create technologies without clarity on what they'll be good for and how they could lead to catastrophic scenarios.

In the book Human Compatible, Stuart Russell shows how this irresponsibility applies to the development of ML. The analogy is simple: it's like preparing a mission to another planet without considering in advance how your crew is going to survive once they're there. If you expect them to deal with whatever risks and problems the new planet's environment poses for humans only after landing, then you're not taking seriously the inherent dangers of your project, and quite frankly, the project itself. In other words, this is not about using, say, a spaceship carelessly; it's about missing crucial parts of the agenda and setup of your mission.

Obviously, the same argument applies to our current situation: what we have been observing is fast AI progress and, most likely, not enough time, care, and deliberation to ensure AI safety, despite the efforts of the safety research community. And to my point: it's not that AI will be harmful only if we use it in a harmful way. The technology carries inherent dangers that we need to take precautions against and build into the design before the product becomes available. For example, training models with machine learning comes with its own uncertainties, which begin early in the process; they are, in a way, inherent in the technology. It would be unfair to suddenly start playing a game of blameworthiness once an advanced product is out and someone uses it in ways that increase risk.

Just to be clear, I'm not saying human agents shouldn't be careful with the various products of technology. My argument is that our carefulness, attention, and sensitivity shouldn't only strike us as important once a very difficult-to-understand, difficult-to-predict product is already out there.

It may look like I have simply described the need to solve the alignment problem once again. But that's only part of my intention. What I want to emphasize is that we need to reconceptualize the way we think about technology. Narratives about technologies have historically been just as dangerous as the technologies themselves. The AI safety community has an impressively clear narrative, mostly due to the rationality schema that supports it. But my concern is that for many scholars and for the public, clarity tends to come only in hindsight, as with the Manhattan Project and the atomic bomb.

So, remember: the "how-you-use-it" bit starts very early on, in the design of a technology. Technologies can be intrinsically dangerous in a non-Luddite sense, especially when they're developed under many parameters of uncertainty.

1 comment


comment by Noosphere89 (sharmake-farah) · 2022-09-07T17:38:07.923Z · LW(p) · GW(p)

From my perspective, the meme of "technology isn't bad in itself, it's just how you use it" was mostly true so long as only non-agentic technology was being discovered. Unfortunately for us, the EV of that meme has now flipped sign and become extremely negative due to AGI.

One bad decision can overwhelm all good decisions, and one good decision can overwhelm all bad decisions.