The problem with the media presentation of “believing in AI”

post by Roman Leventov · 2022-09-14


In my impression, over the last several years of media coverage of AI progress, journalists have often labelled as “believers” those pundits (interviewed or quoted in the articles) who think that AI can in principle take over all intellectual work from humans, or that AI is “intelligent” in some sense: for example, that AI really “understands language”, or that AI is sentient. This framing shows up in questions such as “Do you believe AI can understand human language?”, or in flashy headlines such as “A Google engineer believes an AI has become sentient” (a little more on this at the end of the post).

I think this wording may create in the minds of laypeople an impression that these pundits “believe in AI” in some quasi-religious way. Laypeople may also conflate the “AI believers” with adherents of actual churches of AI (I expect these churches to become better known in the future, as AI progress continues).

To prevent this unfair impression from solidifying, I think expert “AI believers” should adopt the following strategy: whenever a journalist asks whether they “believe” AI can be sentient (understand language, understand human emotion, make moral judgements, etc.), instead of simply answering “Yes” or “No”, they should note that the answer depends on one’s functional theory of consciousness (or language understanding, or morality, etc.). Under almost all such theories (except those which explicitly assume uncomputability, such as Penrose’s theory of consciousness), the answer is automatically “Yes, AI can hit this mark in principle”, so the question should be recast as a timeline prediction.

Then, if the format permits, the expert may note that the side more appropriately called “believers” in such arguments is the one claiming that AI will never be able to do X, because such claims explicitly or implicitly rest on an unfalsifiable theory of X which essentially says “X is something that humans can do and AIs can’t”.

Back to the coverage of the Blake Lemoine story: in this case, it was indeed appropriate to write that he “believed” the AI was sentient, because he didn’t ground his claim in any functional theory of sentience. But I’m afraid that journalists will paint future, more principled statements with the same brush.
