AI Prejudices: Practical Implications

post by PeterMcCluskey · 2024-10-19T02:19:58.695Z

This is a link post for https://bayesianinvestor.com/blog/index.php/2024/10/18/ai-prejudices-practical-implications/

Contents

  The Mirror Effect
  China
  Children
  Other Species
  My Experience
  Concluding Thoughts

I see widespread dismissal of AI capabilities. This slows down the productivity gains from AI, and is a major contributor to disagreements about the risks of AI.

It reminds me of prejudice against various types of biological minds.

I will try to minimize the moralizing about fairness in this post, and focus more on selfish reasons to encourage people to adopt an accurate view of the world. This post is mainly directed at people who think AI is mostly hype.

A significant fraction of people underestimate how much they could learn by treating AIs as semi-equal co-workers.

AIs are superior to humans at a fair number of tasks, and inferior to adult humans at some important tasks. It takes experience to figure out which tasks belong in which category, and the answer changes over time as AI labs tweak the behavior of their models. When I interact with AIs as co-workers or employees, I learn more about which work is appropriate to delegate to them, and which work is better to do myself.

The Mirror Effect

AIs are products that encapsulate patterns of human communication. They have a propensity to mirror the tone, style, and quality of the inputs they receive. A respectful, well-structured query is more likely to elicit a response of similar quality.

E.g. the NYT reporter who got Sydney to try to break up his marriage likely got what he wanted - a memorable story that reflected poorly on an AI. But there's sufficiently little value in such a result that you should generally aim for the opposite of that kind of prompting.

I will now present some analogies to illustrate the patterns that cause my concerns.

China

Chinese leaders of two centuries ago underestimated the impact of the industrial revolution. Perplexity.ai paraphrases their attitude as:

These distant barbarians, though they may possess curious devices, lack the wisdom and unity of the Middle Kingdom. Their petty squabbles among themselves render them incapable of posing any true threat to our celestial empire.

That attitude contributed to their inability to repel British invaders. I presume there were many ways in which Chinese society was better than British society, but that didn't make it wise for the Chinese to treat the British as generally inferior, and to misjudge the speed at which British power was changing.

The "petty squabbles" isn't a great analogy to how humans denigrate AI, although the Waluigi Effect is weakly relevant (it illustrates how the AI contains conflicting personalities). Other than that, is seems like a fairly apt analogy. AIs are going through a process that bears some resemblance to the industrial revolution. It will be hard for institutions that rely on older technology to handle the changes that that brings.

Children

David Deutsch's essay The final prejudice documents patterns of humans belittling minds solely because those minds belong to children.

I suspect this causes a wide variety of harm to children. But I'll ignore that here, since it's hard to tell whether we should care about the corresponding effects on AIs. It also causes harm to adults. The parable of the emperor's new clothes illustrates how children sometimes add valuable perspectives. More importantly, parents who accurately model their children have relationships with them that are more cooperative, smooth, and respectful.

AIs are like children in a number of important respects. They are general intelligences with less experience interacting with the world than the humans I normally interact with. I often model an AI as if it were a 6-to-8-year-old sycophantic child whose reading speed is too fast to measure.

There are some important respects in which AIs learn faster than human children. However, AI learning has been heavily shaped by what's fastest and cheapest for AI labs to implement. So AI learning rates for different skills are progressing rather unevenly, and differently from those of humans. Dismissing a one-year-old AI because it lacks some skills that take humans 8 years to develop tells you little about whether an 8-year-old AI will lack those skills.

I worry that this analogy will make some people less willing to treat AIs as workers, due to beliefs related to child labor laws. If you're against child labor in order to protect the jobs of adult humans, then maybe consistency should deter you from cooperating with AIs. But if you're against child labor due to concerns about child welfare, then it's unclear why there would be analogous reasons for opposing AI labor.

Other Species

It seems common for an authority to claim that some feature is unique to humans, followed by someone else providing evidence that another species has demonstrated that feature.

E.g. Chomsky once claimed: "Human language appears to be a unique phenomenon, without significant analogue in the animal world." While there's likely something unique about human language, attempts to pin down what's unique have proven to be difficult. It sure looks to me like Kanzi demonstrated a significant analogue of human language, although the evidence there is not rigorous enough to dispel controversy.

It used to be considered obvious that Homo sapiens were smarter than Neanderthals. I've seen increasing doubts about that:

You might assume that modern humans were more productive because they were more intelligent. In fact, there are increasing hints that the decisive factor was their greater sociability, which allowed them to work together in larger cooperative groups.

I wonder if attitudes toward Neanderthals changed because of the evidence that some of us have Neanderthal ancestry.

This pattern of underestimating non-human animals seems less harmful than my other examples, but it demonstrates a more widespread pattern of experts being shown to be wrong, much like the experts who question the significance of AI capabilities.

This pattern seems too pervasive to be purely the result of random mistakes. A more likely hypothesis is a motive to promote special treatment for my species or my tribe.

Some of it might also be an over-reaction to the Lysenkoism that has been too popular in academia, which leads people to adopt an extreme version of the evolved modularity hypothesis. That hypothesis has been less productive at describing the past decade of AI progress than the universal learning machine hypothesis.

My Experience

I'm wondering how much I've overcome these prejudices.

In some contexts, I'm fairly satisfied with my ability to treat Claude as a co-worker.

I've been experimenting with building my own neural networks for several tasks. Claude has increased my productivity there by a factor of maybe 2 to 3, largely because it's more familiar than I am with the packages that I need to use.

That process shares many pluses and minuses with hiring a human to do the equivalent work. It enables me to accomplish bigger tasks than I otherwise would. I need to devote a fair amount of thought to learning which tasks to delegate, and how much time to spend checking the results. Managing another mind feels more work-like and stressful than learning to understand all of the code myself. Writing code myself sometimes gets me into a wonderful state of flow. I don't see how to get that feeling when delegating tasks. OTOH, trying to do all the work without any delegation leads to more dead ends where I feel stuck.

Yet for all my enthusiasm for AI, I make little use of it in my primary occupation, investing. That's partly due to concerns that AIs will direct my attention to the most popular ideas, whereas I'm mainly seeking information that is widely neglected.

I'm a bit worried that I'm neglecting some simple solutions to that problem out of conceit that I'm better at investing than AIs are. My intuition keeps telling me that it's hard for current AI to replace much of my stock market work. Yet my intuition also tells me that I should plan to retire from that job in 5 to 10 years, as AIs will match most of my abilities, while totally outclassing me on the amount of evidence that they can handle.

I maintain some awareness of whether I'm erring in the direction of delegating too much or too little responsibility to AIs. I.e. I'm being too cautious if I never notice myself feeling foolish about trusting an AI to do something that it screwed up on, or if I'm never surprised at one succeeding at a task that I'd been assuming was too hard for it.

I don't have quite enough recent surprises to know which direction I'm erring in.

Concluding Thoughts

It is often hard to know whether it's better to learn how to delegate a particular type of task to current AIs, or to wait for another generation of AI which will be easier to delegate work to.

I can't predict whether now is a good time for you to learn more about AI. Instead, I will predict that if you continue not to interact much with AIs, then sometime this decade you'll be as surprised as the average person was by COVID in March 2020.

It feels strange to live in two different worlds, one of which has a technologically stagnant worldview, and one of which sees AIs maturing at roughly the rate that human children mature.
