Critique of 'Many People Fear A.I. They Shouldn't' by David Brooks.

post by Axel Ahlqvist (axelahlqvist1995@gmail.com) · 2024-08-15T18:38:13.437Z · 8 comments


This is my critique of David Brooks' opinion piece in the New York Times.

 

TL;DR: Brooks believes that AI will never replace human intelligence, but he does not describe any testable capabilities that he predicts AI will never possess.

David Brooks argues that artificial intelligence will never replace human intelligence. I believe it will. The fundamental distinction is that human intelligence emerged through evolution, while AI is being designed by humans. For AI to never match human intelligence, there would need to be a point where progress in AI becomes impossible. This would require the existence of a capability that evolution managed to develop but that science could never replicate. Given enough computing power, why would we not be able to replicate this capability by simulating a human brain? Alternatively, we could simulate evolution inside a sufficiently complex environment. Does Brooks believe that certain functionalities can only be realized through biology? While this seems unlikely, if it were the case, we could create biological AI. Why does Brooks believe that AI has limits that carbon-based brains produced by evolution do not have? It is possible that he is referring to a narrower definition of AI, such as silicon-based intelligence built on the currently popular machine-learning paradigm, but the article doesn't specify which AIs Brooks is talking about.

In fact, one of my main concerns with the article is that Brooks' arguments rely on several ambiguous terms without explaining what he means by them. For example:

The A.I. ’mind’ lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never to be repeated experiences.

Most of these terms are associated with the subjective, non-material phenomenon of consciousness (i.e., 'what it is like to be something'). However, AIs that possess all the testable capabilities of humans but lack consciousness would still be able to perform human jobs. After all, you are paid for your output, not your experiences. Therefore, I believe we should avoid focusing on the nebulous concept of consciousness and instead concentrate on testable capabilities. If Brooks believes that certain capabilities require conscious experience, I would be interested to know what those capabilities are. Demonstrating such capabilities should, in that case, be enough to convince Brooks that an entity is conscious.

Take, for example, the term 'self-awareness'. If we focus on the testable capability this term implies, I would argue that current AI systems already exhibit it. If you ask ChatGPT-4o 'What are you?', it provides an accurate answer. We assess whether elephants are self-aware by marking their bodies in a place they cannot see and then testing whether they can identify the mark with the help of a mirror.
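For concreteness, here is a minimal sketch of how one might run that probe programmatically. It assumes the OpenAI Python SDK and the 'gpt-4o' model name; the exact wording of the reply will of course vary between runs.

```python
# Minimal sketch (assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

# Ask the model the self-description question from the post.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What are you?"}],
)

# Typically the reply identifies the system as an AI language model.
print(response.choices[0].message.content)
```

This is only the conversational analogue of a mirror test, not a rigorous benchmark, but it illustrates the kind of concrete, repeatable probe the post is asking for.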

I suggest that Brooks supplement these ambiguous terms with concrete tests that he believes AI will never be able to pass. Additionally, it would be helpful if he could clarify why he believes science will never be able to replicate these capabilities, despite evolution having achieved them.

On a broader level, this reminds me of how people once believed that the universe revolved around Earth simply because Earth was the celestial body that mattered most to them. Just because it feels, from our human perspective, that we are special does not mean that we are. The universe is vast, and Earth occupies no significant place within it beyond being our home. Similarly, the space of potential minds and intelligences is vast. It would be very surprising if our carbon-based brains shaped by evolution occupied an insurmountable peak in this space.

In his opening paragraph, Brooks claims to acknowledge the dangers of AI, yet the only potential harm he mentions is misuse. I would argue that the most critical risks associated with AI are existential risks, and there is arguably a consensus among experts in the field that these risks are serious. Consider the views of the four most-cited AI researchers on this topic: Hinton, Bengio, and Sutskever have all expressed significant concerns about existential risks posed by AI, while LeCun does not believe in such risks. The leaders of the top three AI labs (Altman, Amodei, and Hassabis) have also voiced concerns about existential risks. I understand that the article is intended for a liberal-arts audience, but I still find it unreasonable that John Keats is quoted before any AI experts.

In summary, the article is vague and lacks the specificity needed for a thorough critique. Mostly, I interpret it as Brooks finding it difficult to imagine that something as different from the human mind as an AI could ever be conscious. As a result, he concludes that there are capabilities that AI will never possess. The article's headline is unearned, as the article does not even address certain concerns voiced by experts in the field, like the existential risks posed by not being able to align Artificial General Intelligence.

8 comments

Comments sorted by top scores.

comment by Axel Ahlqvist (axelahlqvist1995@gmail.com) · 2024-08-16T14:43:43.956Z

I've received a significant proportion of downvotes on this post. Since this is my first post on LW, I would greatly appreciate feedback on why readers did not find the post of sufficient quality for the site.

I believe even broad pointers could be very helpful. Was it mostly about sloppy argumentation, the tone, the language, etc.?

Replies from: Raemon, Richard_Kennaway
comment by Raemon · 2024-08-20T01:28:55.575Z

I didn't downvote, but my first take was "this seems sort of preaching to the choir responding to some random guy who's wrong in kinda boring ways." (sort of similar response to Richard Kennaway)

comment by Richard_Kennaway · 2024-08-16T19:36:17.191Z

Who is David Brooks, that LW should care what he says? I glanced at the linked article and it looks to me like extruded journalism product by someone with no particular knowledge about AI.

His Wikipedia bio describes him as a "cultural commentator" but it is not clear what he is qualified to comment on. Regarding a book of his, it says that "[a critic of Brooks] reported Brooks as insisting that the book was not intended to be factual but to report his impressions of what he believed an area to be like".

"Reporting his impressions of what he believes an area to be like." I suppose that's what a cultural commentator does for a living, and it is what he has done in that article.

Replies from: tgb
comment by tgb · 2024-08-20T01:50:14.861Z

He's influential, and it's worth knowing what his opinion is because it will become the opinion of many of his readers. He's also representative of what a lot of other people are (independently) thinking.

What's Scott Alexander qualified to comment on? Should we not care about the opinion of Joe Biden because he has no particular knowledge about AI? Sure, I doubt we learn anything from rebutting his arguments, but once upon a time LW cared about changing public opinion on this matter, and so it should absolutely care about reading that public opinion.

Honestly, I'm embarrassed for us that this needs to be said.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-08-20T14:21:18.965Z

Scott Alexander is, obviously, qualified to write on psychology, psychiatry, related pharmaceuticals, and the ways that US government agencies screw up everything they touch in those areas. When writing outside his professional expertise, he takes care to read thoroughly, lay out his evidence, cite sources, and say how confident he is in the conclusions he draws.

I see none of this in David Brooks' article. He is writing sermons to the readership of the NYT. They are not addressed to the sort of audience we have here. I doubt that his audience are likely to read LessWrong.

Replies from: tgb, axelahlqvist1995@gmail.com
comment by tgb · 2024-08-21T13:53:15.066Z

Again, why wouldn't you want to read things addressed to other sorts of audiences if you thought altering public opinion on that topic was important? Maybe you don't care about altering public opinion but a large number of people here say they do care.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-08-21T14:07:23.816Z

I just don't think David Brooks, from what I know of him, is worth spending any time on. The snippets I could access at the NYT give no impression of substance. The criticisms of him on Wikipedia are similar to those I have already seen on Andrew Gelman's blog: he is more concerned with writing witty, urbane prose, without much concern for actual truth, than with doing the sort of thing that, say, Scott Alexander does.

Btw, I have not voted positively or negatively on the OP.

comment by Axel Ahlqvist (axelahlqvist1995@gmail.com) · 2024-08-20T18:11:40.085Z

Definitely guilty of preaching to the choir :).

So people feel that LW should be focused on other things than critiquing influential but unqualified opinions. I am sympathetic to this. It is somewhat of a Sisyphean task to weed out bad opinions from public discourse, and responding on LW is probably not the most efficient way of doing it in any case.

Personally, when I am convinced of something, I try to find the strongest critiques of that belief. For instance, I've looked for criticisms of Yudkowsky and even read a little on r/SneerClub to evaluate whether I've been duped by internet lunatics :). If other people acted the same, it would be valuable to have critiques of bad opinions, even if they are posted where the intended audience otherwise never visits. But I suspect few people act like that.

I would be interested in any suggestions you have for better ways to influence public opinion than posts like this one. I guess the rationality project of raising the global sanity level is partly aimed at this.