Against Conflating Expertise: Distinguishing AI Development from AI Implication Analysis

post by Ratios · 2023-05-31T09:50:50.157Z · LW · GW · 4 comments


When discussing AI, we often associate technological proficiency with an understanding of AI's broader implications. This notion is as misguided as believing that a master printing press mechanic from Gutenberg's era would have been best equipped to predict the impacts of the printing press.

The AI development field, represented by experts like Geoffrey Hinton (a proponent of safety measures and potential regulation) or Yann LeCun (an advocate of AI acceleration with fewer restrictions), primarily concerns itself with technical prowess and the creation of AI algorithms. In contrast, understanding the expansive implications of AI requires a comprehensive, multidisciplinary approach encompassing areas like warfare, game theory, sociology, philosophy, ethics, economics, and law, among others.

In this context, the debate between Yann LeCun and historian-author Yuval Noah Harari offers a glimpse into these diverging standpoints. While there is an understandable tendency to lend more credence to LeCun's perspective because of his technical accolades, that deference may be a misstep when the question is AI's broader implications.

Harari, by contrast, brings a more holistic perspective shaped by his wide-ranging study of human history, sociology, and philosophy. His expertise in analyzing historical shifts and societal change may leave him better prepared to explore AI's implications, precisely because understanding implications and building AI systems require different skill sets.

Those who can fully grasp AI's overarching impacts might not be technical experts but rather polymaths.

This post is a response to an observed bias in discussions surrounding AI: the undue weight given to technical experts' perspectives on AI's broad implications (as in the recent AI risk letter), while the views of non-technical experts are downplayed or ignored.

4 comments


comment by Shmi (shminux) · 2023-05-31T18:33:27.779Z · LW(p) · GW(p)

Other than the printing press, do you have other members of the reference class you are constructing, where outside holistic experts are better at predicting the consequences of a new invention than the inventors themselves?

Replies from: Ratios
comment by Ratios · 2023-06-01T19:29:30.130Z · LW(p) · GW(p)

Every other important technological breakthrough. The Internet and nuclear weapons are specific examples if you want any.

Replies from: shminux
comment by Shmi (shminux) · 2023-06-01T19:34:59.323Z · LW(p) · GW(p)

Are you saying that outside experts were better at understanding potential consequences in these cases? I have trouble believing it.

Replies from: Ratios
comment by Ratios · 2023-06-02T13:10:21.536Z · LW(p) · GW(p)

Why though? How does understanding the physics that makes nukes work help someone understand their implications? Game theory seems a much better background than physics for predicting the future in this case. For example, the idea of mutually assured destruction as a civilizing force was first proposed by Wilkie Collins, an English novelist and playwright.
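
To make the game-theory point concrete, here is a minimal sketch of the deterrence logic behind mutually assured destruction, modeled as a simple 2x2 game. The payoff numbers and the assumption of guaranteed second-strike retaliation are illustrative choices for the sketch, not anything drawn from the historical record:

```python
# A minimal sketch of mutually assured destruction as a 2x2 game.
# Payoff numbers are illustrative assumptions, not data.

# Each side chooses "strike" or "hold". With assured second-strike
# retaliation, any first strike leads to mutual devastation, so both
# sides holding is the stable outcome.

payoffs = {
    # (A's move, B's move): (A's payoff, B's payoff)
    ("hold",   "hold"):   (0, 0),      # status quo
    ("strike", "hold"):   (-10, -10),  # retaliation assumed: both devastated
    ("hold",   "strike"): (-10, -10),
    ("strike", "strike"): (-10, -10),
}

def best_response(player, other_move):
    """Return the move that maximizes `player`'s payoff given the other's move."""
    moves = ("hold", "strike")
    if player == "A":
        return max(moves, key=lambda m: payoffs[(m, other_move)][0])
    return max(moves, key=lambda m: payoffs[(other_move, m)][1])

# Neither side gains by striking first, so ("hold", "hold") is an equilibrium.
assert best_response("A", "hold") == "hold"
assert best_response("B", "hold") == "hold"
print("With assured retaliation, holding is a best response for both sides.")
```

The physics only sets the payoff of mutual destruction; the prediction that such weapons would mostly not be used comes from this kind of strategic reasoning, which is the comment's point.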