Estimating Returns to Intelligence vs Numbers, Strength and Looks

post by TruePath · 2023-12-31T10:03:38.780Z · LW · GW · 1 comment

This is a question post.


A key assumption in most x-risk arguments for AI is that the ability of an agent to exert control over the world increases rapidly with intelligence. After all, AI safety would be easy if all it required was ensuring that people remain far more numerous and physically capable than the AI or even ensuring that the total computational power available to AI agents is small compared to that available to humanity.

What these arguments require is that a single highly (but not infinitely) intelligent agent will be able to overwhelm the advantages humans might retain in terms of numbers, looks and computational power, either by manipulating people to do its bidding or by hacking other systems. However, I've yet to see any attempt to quantify the relationship between intelligence and control that these arguments assume.

It occurs to me that we have information about these relationships that can inform such assumptions. For instance, to estimate the returns to intelligence in hacking, we could look at how the number of exploits discovered by security researchers varies with their intelligence.
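For concreteness, here is a minimal sketch of what such an estimate could look like, using entirely made-up data; the log-linear functional form and every number below are assumptions for illustration, not empirical claims:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: an IQ-like score for each of 200 researchers.
iq = rng.normal(115, 15, size=200)

# Assume, purely for illustration, that exploit discovery is Poisson
# with a log-rate linear in IQ (i.e. multiplicative returns per point).
true_slope = 0.04
exploits = rng.poisson(np.exp(-3.5 + true_slope * iq))

# Crude estimate: least-squares fit of log(count + 1) on IQ, standing
# in for a proper Poisson regression.
slope, intercept = np.polyfit(iq, np.log1p(exploits), 1)
print(f"Each extra IQ point multiplies expected exploit count by "
      f"~{np.exp(slope):.3f}")
```

With real data on researchers' scores and exploit counts, the fitted slope would be the "returns to intelligence" these arguments need to quantify.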

To estimate the returns to intelligence in manipulation, we could look at the distribution of intelligence among highly effective politicians and media personalities and compare it to the distribution of other traits like height or looks. Or, if we assume that evolution largely selects for the ability to influence others, we could look at the distribution of these traits in the general population.
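One way to make that comparison concrete is to express how far effective politicians sit above the population mean on each trait, in standard-deviation units, so intelligence, height and looks become directly comparable. A minimal sketch with placeholder numbers (not real measurements):

```python
# Placeholder population parameters and hypothetical politician means.
population = {            # trait: (population mean, population SD)
    "IQ":     (100.0, 15.0),
    "height": (175.0, 7.0),    # cm
    "looks":  (5.0, 1.5),      # arbitrary 1-10 rating scale
}
politician_mean = {       # assumed means among effective politicians
    "IQ":     115.0,
    "height": 180.0,
    "looks":  6.5,
}

for trait, (mu, sd) in population.items():
    z = (politician_mean[trait] - mu) / sd
    print(f"{trait}: ~{z:+.2f} SD above the population mean")
```

The trait with the largest standardized gap is, on this crude view, the one success selects for most strongly.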

I realize that doing this would probably require a number of substantial assumptions, but I'm curious whether anyone has tried. And yes, I realize this entirely ignores the issue of defining intelligence beyond human capability (though if the notion has any validity, we could probably use something like the rate at which previously unproven theorems, weighted by importance, can be proved).

Answers

answer by RogerDearnaley · 2023-12-31T10:36:33.233Z · LW(p) · GW(p)

Not that relevant, but an observation specifically on politicians: having met several national-level politicians in person and heard them speak, they are uniformly far more charismatic in person than I had expected from having only seen them on TV. That old adage about the camera adding fifteen pounds: it also significantly reduces charisma. Even politicians you thought were gray and dull from seeing them on TV are actually very charismatic in person. So my impression is that one of the primary requirements for being an effective politician at a national level is being extremely charismatic, way more charismatic than you think they are. Statistically, offhand, that suggests they're mostly not also amazingly smart (probably mostly just somewhat smart), since the odds of lightning striking twice are low. (This wouldn't necessarily be true if intelligence and charisma were strongly correlated, but I've met enough STEM professors to know that definitely isn't true.)
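As a rough numerical gloss on the lightning-striking-twice point, here is a sketch (the correlation values rho are assumptions for illustration, not estimates): if charisma and intelligence are bivariate normal with correlation rho, the chance that someone selected for top-1% charisma is also in the top 1% of intelligence stays small unless the correlation is strong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
cutoff = 2.326  # 99th percentile of a standard normal

for rho in (0.0, 0.2, 0.5):
    # Unit-variance traits with assumed correlation rho.
    cov = [[1.0, rho], [rho, 1.0]]
    charisma, iq = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    # Among the top 1% in charisma, how many are also top 1% in IQ?
    p = (iq[charisma > cutoff] > cutoff).mean()
    print(f"rho={rho}: P(top-1% IQ | top-1% charisma) ~ {p:.3f}")
```

At rho = 0 the conditional probability is just 1%, and even moderate correlations leave it well below a coin flip, which is the sense in which selecting hard on charisma mostly fails to also select geniuses.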

comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-12-31T16:14:30.746Z · LW(p) · GW(p)

I am distantly related to a powerful political family, and am apparently somewhat charismatic in person, in a way that to me just feels like basic empathy and social skills. If there's a way to turn that into more productivity for software development or alignment research, let me know.

Replies from: localdeity
comment by localdeity · 2023-12-31T23:23:31.650Z · LW(p) · GW(p)

It could make you better at: managing a team, advocating for a certain project, mediating discussions and conflicts, keeping meetings productive, giving advice to individuals about their social or socially mediated problems, etc.  I don't think it would directly enhance your productivity as a researcher, but it could let you act as a force multiplier for others.

Replies from: sheikh-abdur-raheem-ali
comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2024-01-01T00:39:46.100Z · LW(p) · GW(p)

Thanks, that matches my experience. At the end of the day, everyone's got to make the most of the hand they've been dealt. If my gift is meant for the benefit of others, then I'm grateful for that, and I'll use it as best I can.

Replies from: roger-d-1
comment by RogerDearnaley (roger-d-1) · 2024-01-01T03:03:25.225Z · LW(p) · GW(p)

In general, in software companies, the people most likely to fit that profile are product/project managers, a role that requires empathy with users, imagination, and the communication and social skills to coordinate teams. It's not quite as necessary in alignment work specifically.

1 comment


comment by lc · 2024-01-01T00:49:18.818Z · LW(p) · GW(p)

> After all, AI safety would be easy if all it required was ensuring that people remain far more numerous and physically capable than the AI or even ensuring that the total computational power available to AI agents is small compared to that available to humanity.

Why?