How Does Cognitive Performance Translate to Real World Capability?
post by DragonGod · 2022-06-07T17:39:40.858Z · LW · GW
This is a question post.
How does cognitive/intellectual performance (e.g. as measured by predictive accuracy) translate to real-world capability? Do linear increases in cognitive performance result in linear increases in capability? I don't know, but I did think of a way we could maybe investigate that:
Can you describe a model for how moving across the following credences (in an arbitrary proposition):
- 90%
- 99%
- 99.9%
- 99.99%
- 99.999%
- 99.9999%
could be exploited to offer linear (monetary) returns across each step? It's then a question of how many real-world scenarios look exploitable like that.
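To make the question concrete, here's a toy model (my own sketch, not part of the original question): suppose a counterparty prices "yes" contracts on a binary event at their fixed 90% credence, and you can buy at that price. Your expected return per dollar depends on your own credence p, and each extra "nine" of credence adds roughly 10x less than the last:

```python
# Toy model: a counterparty sells "yes" contracts priced at their credence q.
# Buying $1 of contracts at price q pays 1/q dollars if the event happens,
# so the expected return per dollar staked is p/q - 1 when your credence is p.
q = 0.90  # counterparty's (assumed fixed) credence
credences = [0.9, 0.99, 0.999, 0.9999, 0.99999, 0.999999]

edges = [p / q - 1 for p in credences]
gains = [b - a for a, b in zip(edges, edges[1:])]

for p, e in zip(credences, edges):
    print(f"credence {p:<9} expected return per $1: {e:.6f}")

# Each successive credence step adds ~10x less expected return than the last:
print(gains)  # ≈ [0.1, 0.01, 0.001, 0.0001, 1e-05]
```

Under these assumptions the returns per step are exponentially diminishing, which is the default outcome the question asks whether real-world scenarios can escape.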
I would be very interested in answers to this. This could significantly change my views on how much real-world capability you can buy with increasing cognitive performance.
Of course, predictive accuracy is not the only measure of cognitive performance (and in particular, prediction-enhanced finance is not the only way that a superhuman AI could leverage its greater intelligence in the real world).
This could be thought of as a starter to investigate the issue of translating cognitive performance into capability in the real world. Predictive power is a good starting point because it's a simple and straightforward measure. It's just a proxy, but as a first attempt, I think it's fine enough.
Monetary returns seem like a pretty robust measure of real-world capability.
Answers
I think the hypothetical %-based approach you mention isn't a good one. You do indeed get exponentially diminishing returns from improvement on any one question, but there are other questions of higher "difficulty" that you tend to start improving on once you saturate your current level of difficulty.
Within psychometrics, this is formally studied under the name "item response theory". There, sigmoidal curves are fitted to responses to surveys or tests, and as one item's probability curve flattens out, another curve tends to pick up steam. See e.g. this example (via):
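The "curves picking up in sequence" pattern can be sketched with the two-parameter logistic (2PL) item response model; the item difficulties below are made up for illustration:

```python
import math

def p_correct(theta, difficulty, discrimination=1.0):
    """2PL item response model: probability of a correct response
    given latent ability theta."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Two hypothetical items: one easy (difficulty 0), one hard (difficulty 5).
for theta in [0, 1, 2, 3, 4, 5]:
    easy = p_correct(theta, difficulty=0.0)
    hard = p_correct(theta, difficulty=5.0)
    print(f"ability {theta}: easy item {easy:.3f}, hard item {hard:.3f}")

# As the easy item's curve flattens out near 1, the hard item's curve is
# just starting its steep rise - marginal gains shift to harder questions.
```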
↑ comment by DragonGod · 2022-06-08T06:01:22.776Z · LW(p) · GW(p)
Exponentially diminishing returns was what I found in the concrete examples I thought of (e.g. offering insurance policies, betting on events, etc.).
It seems to me that an AI that linearly increased its predictive accuracy on a particular topic would see exponentially diminishing returns.
The question is if this return on investment of predictive accuracy generalises.
Suppose instead that the agent in question is well calibrated, and that for any binary question it can assign 90% accuracy to it or its inverse. Now suppose that accuracy is raised to 99%, then 99.9%, then 99.99%, then 99.999%, and so on. Is there a strategy that allows the agent to make consistent linear returns across each step? And how many scenarios are there where such a strategy is available (vs. just making exponentially diminishing returns, as seems to be the default)?
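The diminishing returns show up even under optimal bet sizing. As a sketch (my own toy model, assuming repeated even-money bets sized by the Kelly criterion): the compounding log-growth rate of a bettor who is right with probability p is capped at log 2, and each extra "nine" of accuracy closes only a shrinking fraction of the remaining gap to that cap:

```python
import math

def kelly_growth(p):
    """Expected log-growth per even-money bet for a Kelly bettor whose
    predictions are correct with probability p (p > 0.5).
    The optimal fraction of bankroll staked is f = 2p - 1, giving
    growth p*log(1+f) + (1-p)*log(1-f)."""
    return p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))

for p in [0.9, 0.99, 0.999, 0.9999]:
    g = kelly_growth(p)
    gap = math.log(2) - g
    print(f"accuracy {p}: growth rate {g:.4f}, gap to log(2) cap {gap:.4f}")
```

Under these assumptions the growth rate saturates toward log 2, so each step of the accuracy ladder buys less and less additional growth.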
Your answer seems interesting as a response to how intelligence manifests among humans in the real world (I upvoted), but it leaves aside my toy model for thinking about how much capability an AI could purchase with increasing predictive accuracy (one measure of intelligence).
Maybe I could make a new question and clarify more strongly what exactly I'm trying to investigate and think about.
Replies from: tailcalled
↑ comment by tailcalled · 2022-06-08T07:26:58.328Z · LW(p) · GW(p)
My point is that as predictive accuracy for one question goes from 99.9% to 99.99%, predictive accuracy for another question might be going from 0.1% to 10%. So one shouldn't focus on the 99.9% questions increasing (well, sometimes one should, e.g. for self-driving cars where extremely high reliability is important), but instead on whether there is a big supply of other questions with an accuracy close to 0 that one has space for improving on.
Replies from: JBlack
↑ comment by JBlack · 2022-06-09T06:05:19.620Z · LW(p) · GW(p)
For some things (like survival across repeated trials), 99.99% is indeed immensely better than 99%. There are quite a few types of scenario where intelligence does make that sort of difference. Also, 0.1% vs 10% can also make a huge difference, e.g. in the odds of successfully creating something very much more valuable than usual.
Similar odds ratios also get you from 1% to 99%, which I think is extremely valuable in almost all situations where one outcome is substantially better than the other.
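As a rough sketch of the odds-ratio point (the multipliers here are arbitrary, chosen for illustration): applying the same fixed odds-ratio boost moves a probability very differently depending on where it starts, and two ~100x boosts take you from 1% to beyond 99%:

```python
def boost_odds(p, odds_ratio):
    """Multiply the odds p/(1-p) by odds_ratio and convert back to
    a probability."""
    odds = p / (1 - p) * odds_ratio
    return odds / (1 + odds)

# The same 100x improvement in odds at different starting points:
for p in [0.001, 0.01, 0.5, 0.99]:
    print(f"{p} -> {boost_odds(p, 100):.4f}")

# Two successive 100x boosts applied to 1%:
print(boost_odds(0.01, 100 * 100))
```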
Replies from: DragonGod
↑ comment by DragonGod · 2022-06-09T12:09:27.981Z · LW(p) · GW(p)
The inquiry is about sustained linear returns to increases in predictive accuracy.
It's not enough to show that a jump from 99% predictive accuracy to 99.99% is good.
You have to show that a jump from 99% accuracy to 99.99% is as good as a jump from 99.99% to 99.9999% accuracy.
You aren't properly engaging with the inquiry I posited.
Replies from: JBlack
↑ comment by JBlack · 2022-06-10T02:02:28.485Z · LW(p) · GW(p)
The original inquiry was about returns on cognitive performance. One source of return is the sort of thing you're talking about here: moving from 99.9999% to 99.999999% accuracy.
A different, but overall much more valuable source of return is increasing the scope of things for which you move from 1% to 99%. It's much more valuable because for any practically possible level of cognitive capability, there are a lot more things at the low end of predictability than at the high end.
If you want to restrict the discussion only to the first class though, that's fine.