The metaphor/myth of general intelligence
post by Stuart_Armstrong · 2014-08-18T16:04:48.245Z · LW · GW · Legacy · 49 comments
Thanks to Kaj for making me think along these lines.
It's agreed on this list that general intelligences - those that are capable of displaying high cognitive performance across a whole range of domains - are those that we need to be worrying about. This is rational: the most worrying AIs are those with truly general intelligences, and so those should be the focus of our worries and work.
But I'm wondering if we're overestimating the probability of general intelligences, and whether we shouldn't adjust against this.
First of all, the concept of general intelligence is a simple one - perhaps too simple. It's an intelligence that is generally "good" at everything, so we can collapse its various abilities across many domains into "it's intelligent", and leave it at that. It's significant to note that since the very beginning of the field, AI people have been thinking in terms of general intelligences.
And their expectations have been constantly frustrated. We've made great progress in narrow areas, and very little on general intelligences. Chess was mastered without "understanding"; Jeopardy! champions were beaten without general intelligence; cars can navigate our cluttered roads while being able to do little else. If we started with a prior in 1956 about the feasibility of general intelligence, then we should be adjusting that prior downwards.
But what do I mean by "feasibility of general intelligence"? There are several things this could mean, not least the ease with which such an intelligence could be constructed. But I'd prefer to look at another assumption: the idea that a general intelligence will really be formidable in multiple domains, and that one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise.
First of all, humans are very far from being general intelligences. We can solve a lot of problems when they are presented in particular, easy-to-understand formats that allow good human-style learning. But if we picked a random complicated Turing machine from the space of such machines, we'd probably be pretty hopeless at predicting its behaviour. We would probably score very low on the scale of intelligence used to construct the AIXI. "General intelligence", g, is a misnomer - it designates the fact that the various human intelligences are correlated, not that humans are generally intelligent across all domains.
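(For reference, the scale I have in mind here is roughly Legg and Hutter's universal intelligence measure, which scores an agent π by its expected reward across all computable environments, weighted by simplicity:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},$$

where E is the class of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected total reward the agent earns in μ. Humans do fine in the simple, familiar environments, but the measure ranges over all the weird ones too.)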
Humans with computers, and humans in societies and organisations, are certainly closer to general intelligences than individual humans. But institutions have their own blind spots and weaknesses, as does the human-computer combination. Now, there are various reasons advanced for why this is the case - game theory and incentives in the case of institutions, interface problems and misunderstandings in the case of the human-computer combination. But what if these reasons, and the other ones we can come up with, were mere symptoms of a more universal problem: that generalising intelligence is actually very hard?
There are no-free-lunch theorems showing that no computable intelligence can perform well in all environments. As far as they go, these theorems are uninteresting, as we don't need intelligences that perform well in all environments, just in almost all/most. But what if a more general restrictive theorem were true? What if it was very hard to produce an intelligence that was of high performance across many domains? What if the performance of a generalist was pitifully inadequate compared with that of a specialist? What if every computable version of AIXI was actually doomed to poor performance?
There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists (this is my standard mental image/argument for AI risk), you could construct an entity that was very good at programming specific sub-programs, or you could approximate AIXI. But we are making some assumptions here - namely, that we can network together very different intelligences (the problems with human-computer interfaces hint at some of the difficulties), and that a general programming ability can even exist (for a start, it might require a general understanding of problems that is akin to general intelligence in the first place). And we haven't had great success building effective AIXI approximations so far (which should reduce, possibly slightly, our belief that effective general intelligences are possible).
Now, I remain convinced that general intelligence is possible, and that it's worthy of the most worry. But I think it's worth inspecting the concept more closely, and at least being open to the possibility that general intelligence might be a lot harder than we imagine.
EDIT: Model/example of what a lack of general intelligence could look like.
Imagine there are three types of intelligence - social, spatial and scientific, all on a 0-100 scale. For any combination of the three intelligences - e.g. (0,42,98) - there is an effort level E (how hard is that intelligence to build, in terms of time, resources, man-hours, etc...) and a power level P (how powerful is that intelligence compared to others, on a single convenient scale of comparison).
Wei Dai's evolutionary comment implies that any being of very low intelligence on one of the scales would be overpowered by a being of more general intelligence. So let's set power as simply the product of all three intelligences.
This seems to imply that general intelligences are more powerful, as it basically bakes in diminishing returns - but we haven't included effort yet. Imagine that the following three intelligences require equal effort: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is definitely the one you need to build.
But is it plausible that those could be of equal difficulty? It could be, if we assume that high social intelligence isn't so difficult, but is specialised - i.e. you can increase the spatial intelligence of a social intelligence, but that messes up the delicate balance in its social brain. Or maybe recursive self-improvement happens more easily in narrow domains. Further assume that intelligences of different types cannot be easily networked together (e.g. combining (100,5,5) and (5,100,5) in the same brain gives an overall performance of (21,21,5)). This doesn't seem impossible.
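A quick numerical check of this toy model (a Python sketch; the names are purely illustrative, and the networking rule is just the assumption made above, not anything derived):

```python
# Toy model: power P = the product of the three intelligence scores
# (social, spatial, scientific), each on a 0-100 scale.

def power(social, spatial, scientific):
    return social * spatial * scientific

# Three intelligences stipulated to require equal effort:
candidates = {
    "generalist (10,10,10)": (10, 10, 10),
    "mixed (20,20,5)": (20, 20, 5),
    "specialist (100,5,5)": (100, 5, 5),
}

for name, scores in candidates.items():
    print(name, "-> power", power(*scores))
# generalist -> 1000, mixed -> 2000, specialist -> 2500:
# at equal effort, the narrow specialist wins.

# Networking penalty from the example: combining (100,5,5) and (5,100,5)
# in one brain only gives (21,21,5).
print("networked specialists -> power", power(21, 21, 5))  # 2205 < 2500
```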
So let's caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.
49 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2014-08-18T17:37:17.079Z · LW(p) · GW(p)
I suspect that the closest thing to a "general intelligence module" might be a "skill acquisition module" - i.e. not something that would be generally intelligent by itself, but rather something that could generate more specialized modules optimized for some specific domain.
E.g. humans are capable of acquiring a wide variety of very different and specialized skills with sufficient practice and instruction, but we're probably constrained by whether the domain can be easily mapped into one of our native architectures. My hunch is that if you could give a baseline human access to a few more domains that they could natively think in, and also enable them to map new concepts into those domains (and vice versa), they could easily come across as superintelligent, by coming up with modes of thought that were completely unfamiliar to us and applying them to problems that weren't easily handled with normal modes of thought. (On the other hand, they would have to come up with those mappings by themselves, since there would be nobody around to give them instructions communicated in terms of those domains.)
comment by Toby_Ord · 2014-08-19T16:11:55.315Z · LW(p) · GW(p)
This is a really nice and useful article. I particularly like the list of problems AI experts assumed would be AI-complete, but turned out not to be.
I'd add that if we are trying to reach the conclusion that "we should be more worried about non-general intelligences than we currently are", then you don't need it to be true that general intelligences are really difficult. It would be enough that "there is a reasonable chance we will encounter a dangerous non-general one before a dangerous general one". I'd be inclined to believe that even without any of the theorising about possibility.
I think one reason for the focus on 'general' in the AI Safety community is that it is a stand-in for the observation that we are not worried about path planners or chess programs or self-driving cars etc. One way to say this is that these are specialised systems, not general ones. But you rightly point out that it doesn't follow that we should only be worried about completely general systems.
comment by cousin_it · 2014-08-18T16:54:22.133Z · LW(p) · GW(p)
Most of your post is well reasoned, but I disagree with the opening paragraph:
It's agreed on this list that general intelligences - those that are capable of displaying high cognitive performance across a whole range of domains - are those that we need to be worrying about. This is rational: the most worrying AIs are those with truly general intelligences, and so those should be the focus of our worries and work.
A dangerous intelligence doesn't have to be smart in all domains, it can just be smart in a single dangerous domain. For example, I'd say that a group of uploaded minds running at 1000x speed is an existential risk.
↑ comment by Stuart_Armstrong · 2014-08-19T12:03:33.359Z · LW(p) · GW(p)
it can just be smart in a single dangerous domain.
Possibly. If that domain can overwhelm other areas. But it's still the general intelligences - those capable of using their weather-prediction modules to be socially seductive instead - that have the most potential to go wrong.
There are some ways of taking over human society that are much easier than others (though we might not know which are easiest at the moment). A narrow intelligence gets to try one thing, and that has to work, while a general intelligence can search through many different approaches.
↑ comment by cousin_it · 2014-08-19T15:34:44.320Z · LW(p) · GW(p)
Yeah, I agree that a truly general intelligence would be the most powerful thing, if it could exist. But that doesn't mean it's the main thing to worry about, because non-general intelligences can be powerful enough to kill everyone, and higher degrees of power probably don't matter as much.
For example, fast uploads aren't general by your definition, because they're only good at the same things that humans are good at, but that's enough to be dangerous. And even a narrow tool AI can be dangerous, if the domain is something like designing weapons or viruses or nanotech. Sure, a tool AI is only dangerous in the wrong hands, but it will fall into wrong hands eventually, if something like FAI doesn't happen first.
↑ comment by Stuart_Armstrong · 2014-08-20T10:48:59.647Z · LW(p) · GW(p)
We seem to have drifted into agreement.
↑ comment by TheAncientGeek · 2014-08-18T20:53:07.363Z · LW(p) · GW(p)
I'm not worrying about 1000x chicken minds. Do you think that the only thing stopping an average person from taking over the world is the fact that they don't live to 75000?
↑ comment by TheMajor · 2014-08-18T22:23:29.550Z · LW(p) · GW(p)
First of all, a mind running at 1000x the speed is quite different from a person living to 75000. Imagine boxing (with gloves and all) with somebody while you could move at twice the speed - this is quite different from boxing with somebody while you have twice his stamina (in the latter case you will win after a long and even fight in which your opponent gets tired; in the former case I'd be surprised if your opponent managed to land a hit).
Secondly: If I have 75000 years to waste I might as well take over the world at some point. Seems like a good return on investment. And really, how hard can it be? Maybe 300 years, tops?
↑ comment by NancyLebovitz · 2014-08-19T09:10:40.007Z · LW(p) · GW(p)
Sounds like a good premise for a humorous science fiction story-- someone tries that, and discovers the world has such a wide range of behavior that every effort to take it over has unmanageable side effects. Unmanageable but harmless side effects, since this is a humorous story.
comment by Wei Dai (Wei_Dai) · 2014-08-19T19:39:11.119Z · LW(p) · GW(p)
This seems to imply that general intelligences are more powerful, as it basically bakes in diminishing returns - but we haven't included effort yet. Imagine that the following three intelligences require equal effort: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is definitely the one you need to build.
The (100,5,5) AI seems kind of like a Hitler-AI, very good at manipulating people and taking power over human societies, but stupid about what to do once it takes over. We can imagine lots of narrow intelligences that are better at destruction than helping us reach a positive Singularity (or any kind of Singularity). We already know that FAI is harder than AGI, and if such narrow intelligences are easier than AGI, then we're even more screwed.
So let's caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.
I want to point out that a purely narrow intelligence (without even a bare minimum amount of general intelligence, i.e., a Tool-AI), becomes this type of intelligence if you combine it with a human. This is why I don't think Tool-AIs are safe.
So I would summarize my current position as this: General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.
↑ comment by cousin_it · 2014-08-20T10:59:41.041Z · LW(p) · GW(p)
Mostly agreed, but why do you think a positive singularity requires a general intelligence? Why can't we achieve a positive singularity by using intelligence amplification, uploading and/or narrow AIs in some clever way? For example, if we can have a narrow AI that kills all humans, why can't we have a narrow AI that stops all competing AIs?
↑ comment by Wei Dai (Wei_Dai) · 2014-08-20T18:05:40.303Z · LW(p) · GW(p)
I meant general intelligence in an abstract sense, not necessarily AGI. So intelligence amplification would be covered unless it's just amplifying a narrow area, for example making someone very good at manipulating people without increasing his or her philosophical and other abilities. (This makes me realize that IA can be quite dangerous if it's easier to amplify narrow domains.) I think uploading is just an intermediary step to either intelligence amplification or FAI, since it's hard to see how trillions of unenhanced human uploads all running very fast would be desirable or safe in the long run.
For example, if we can have a narrow AI that kills all humans, why can't we have a narrow AI that stops all competing AIs?
It's hard to imagine a narrow AI that can stop all competing AIs, but can't be used in other dangerous ways, like as a generic cyberweapon that destroys all the technological infrastructure of another country or the whole world. I don't know how the group that first develops such an AI would be able to keep it out of the hands of governments or terrorists. Once such an AI comes into existence, I think there will be a big arms race to develop countermeasures and even stronger "attack AIs". Not a very good situation unless we're just trying to buy a little bit of time until FAI or IA is developed.
↑ comment by Stuart_Armstrong · 2014-08-20T10:43:36.761Z · LW(p) · GW(p)
General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.
Add a "may be" to the first sentence and I'm with you.
comment by Kaj_Sotala · 2014-08-18T17:25:23.523Z · LW(p) · GW(p)
Related paper: How Intelligible is Intelligence?
Abstract: If human-level AI is eventually created, it may have unprecedented positive or negative consequences for humanity. It is therefore worth constructing the best possible forecasts of policy-relevant aspects of AI development trajectories—even though, at this early stage, the unknowns must remain very large.
We propose that one factor that can usefully constrain models of AI development is the “intelligibility of intelligence”—the extent to which efficient algorithms for general intelligence follow from simple general principles, as in physics, as opposed to being necessarily a conglomerate of special case solutions. Specifically, we argue that estimates of the “intelligibility of intelligence” bear on:
Whether human-level AI will come about through a conceptual breakthrough, rather than through either the gradual addition of hardware, or the gradual accumulation of special-case hacks;
Whether the invention of human-level AI will, therefore, come without much warning;
Whether, if AI progress comes through neuroscience, neuroscientific knowledge will enable brain-inspired human-level intelligences (as researchers “see why the brain works”) before it enables whole brain emulation;
Whether superhuman AIs, once created, will have a large advantage over humans in designing still more powerful AI algorithms;
Whether AI intelligence growth may therefore be rather sudden past the human level; and
Whether it may be humanly possible to understand intelligence well enough, and to design powerful AI architectures that are sufficiently transparent, to create demonstrably safe AIs far above the human level.
The intelligibility of intelligence thus provides a means of constraining long-term AI forecasting by suggesting relationships between several unknowns in AI development trajectories. Also, we can improve our estimates of the intelligibility of intelligence, e.g. by examining the evolution of animal intelligence, and the track record of AI research to date.
comment by Pentashagon · 2014-08-20T05:48:40.907Z · LW(p) · GW(p)
If there is only specialized intelligence, then what would one call an intelligence that specializes in creating other specialized intelligences? Such an intelligence might be even more dangerous than a general intelligence or some other specialized intelligence if, for instance, it's really good at making lots of different X-maximizers (each of which is more efficient than a general intelligence) and terrible at deciding which Xs it should choose. Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.
↑ comment by Stuart_Armstrong · 2014-08-20T10:42:26.652Z · LW(p) · GW(p)
Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.
That is very unclear, and people's politics seems a good predictor of their opinions in "competing intelligences" scenarios, meaning that nobody really has a clue.
↑ comment by Pentashagon · 2014-08-21T02:59:33.389Z · LW(p) · GW(p)
My intuition is that a single narrowly focused specialized intelligence might have enough flaws to be tricked or outmaneuvered by humanity. For example, if an agent wanted to maximize production of paperclips but was average or poor at optimizing mining, exploration, and research, it could be cornered and destroyed before it discovered nanotechnology or space travel and spread out of control to asteroids and other planets. Multiple competing intelligences would explore more avenues of optimization, making coordination against them much more difficult and likely interfering with many separate aspects of any coordinated human plan.
comment by Wei Dai (Wei_Dai) · 2014-08-18T18:50:03.068Z · LW(p) · GW(p)
But I'd prefer to look at another assumption: the idea that a general intelligence will really be formidable in multiple domains, and that one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise.
This assumption explains why evolution hasn't already filled all the economic and ecological niches with species of specialized intelligence - niches that humans are able to specialize into. If the assumption is false, then there would have to be another explanation for this fact. Are there any proposals on the table?
↑ comment by Stuart_Armstrong · 2014-08-18T20:18:27.933Z · LW(p) · GW(p)
Evolution hasn't produced a satisfactory general intelligence, so I'm not sure how much can be deduced from it. And the Red Queen race theory posits that human intelligence developed more from internal competition anyway. And most animal intelligences are extremely specialised.
Hum... I feel the analogy with evolution isn't that informative.
↑ comment by Wei Dai (Wei_Dai) · 2014-08-19T00:33:20.910Z · LW(p) · GW(p)
Evolution hasn't produced a satisfactory general intelligence
I don't understand this. It seems to me that evolution has produced as satisfactory a general intelligence as it would be reasonable to expect. The only thing you cite in the OP as an example of humans not being satisfactory general intelligences is "if we picked a random complicated Turing machine from the space of such machines, we'd probably be pretty hopeless at predicting its behaviour." But given limited computing power nothing can possibly predict the behaviour of random Turing machines.
On the other hand, humans are able to specialize into hundreds or thousands of domains like chemical engineering and programming, before evolution was able to produce specialized intelligence for doing those things. How do you explain this, if "one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise" is false?
↑ comment by Stuart_Armstrong · 2014-08-19T13:03:50.827Z · LW(p) · GW(p)
I'm not convinced that evolution is that good a metaphor for this. But since there are so few good metaphors or ideas anyway, let's go with it.
Evolution hasn't produced a satisfactory general intelligence in the AIXI form. As far as I can tell, all the non-anthropomorphic measures of intelligence rank humans as not particularly high. So humans are poor general intelligences in any objective sense we can measure it.
Your point is that humans are extremely successful intelligences, which is valid. It seems that we can certainly get great performance out of some general intelligence ability. I see that as "a minimum of understanding and planning go a long way". And note that it took human society a long time to raise us to the level of power we have now; the additive nature of human intelligence (building on the past) was key there.
Addressed the more general point in the model added to the top post.
↑ comment by Wei Dai (Wei_Dai) · 2014-08-19T18:54:02.403Z · LW(p) · GW(p)
So humans are poor general intelligences in any objective sense we can measure it.
This may be a logical consequence of "a minimum of understanding and planning go a long way". As evolution slowly increases the intelligence of some species, at some point a threshold is crossed and a technological explosion happens. If "a minimum of understanding and planning go a long way" then this happens pretty early, when that species can still be considered a poor general intelligence on an absolute scale. This is one of the reasons why Eliezer thinks that superhuman general intelligence may not be that hard to achieve, if I understand correctly.
Addressed the more general point in the model added to the top post.
The added part is interesting. I'll try to respond separately.
↑ comment by Stuart_Armstrong · 2014-08-20T10:45:43.242Z · LW(p) · GW(p)
This is one of the reasons why Eliezer thinks that superhuman general intelligence may not be that hard to achieve, if I understand correctly.
That needs a somewhat stronger result, "a minimum increment of understanding and planning go a long way further". And that's partially what I'm wondering about here.
↑ comment by Wei Dai (Wei_Dai) · 2014-08-20T18:35:52.657Z · LW(p) · GW(p)
That needs a somewhat stronger result, "a minimum increment of understanding and planning go a long way further". And that's partially what I'm wondering about here.
The example of humans up to von Neumann shows there aren't strongly diminishing returns to general intelligence across a fairly broad range. It would be surprising if diminishing returns set in right above von Neumann's level, and if that's true I think there would have to be some explanation for it.
↑ comment by Stuart_Armstrong · 2014-08-21T12:56:51.529Z · LW(p) · GW(p)
Humans are known to have correlations between their different types of intelligence (the supposed "g"). But this seems not to be a genuine general intelligence (e.g. a mathematician using maths to successfully model human relations), but a correlation of specialised submodules. That correlation need not exist for AIs.
↑ comment by TheAncientGeek · 2014-08-20T21:33:39.392Z · LW(p) · GW(p)
vN maybe shows there is no hard limit, but statistically there seem to be quite a lot of crazy chess grandmasters, crazy mathematicians, crazy composers, etc.
comment by Cyan · 2014-08-18T18:43:58.174Z · LW(p) · GW(p)
What if it was very hard to produce an intelligence that was of high performance across many domains?... There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists...
In fact, we already know the minimax optimal algorithm for combining "expert" predictions (here "expert" denotes an online sequence prediction algorithm of any variety); it's the weighted majority algorithm.
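(For concreteness, here's a minimal sketch of the deterministic version in Python - purely illustrative, with hypothetical names; the randomized variant is what gets the sharper minimax-style bounds:)

```python
def weighted_majority(expert_predictions, outcomes, beta=0.5):
    """Combine binary expert predictions by weighted vote.

    expert_predictions[t][i] is expert i's 0/1 prediction in round t;
    outcomes[t] is the true bit. Experts that err get their weight
    multiplied by beta, so the master's mistakes stay within a constant
    factor of (best expert's mistakes + log of the number of experts).
    """
    weights = [1.0] * len(expert_predictions[0])
    mistakes = 0
    for preds, truth in zip(expert_predictions, outcomes):
        vote_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        if (1 if vote_1 >= vote_0 else 0) != truth:
            mistakes += 1
        # Penalise every expert that got this round wrong.
        weights = [w * beta if p != truth else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights
```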
↑ comment by Stuart_Armstrong · 2014-08-19T12:09:02.052Z · LW(p) · GW(p)
The challenge is not to combine different algorithms in the same area, but to combine algorithms from different areas. A social bot and a stock market predictor - how should they interface best? And how would you automate the construction of interfaces?
↑ comment by Cyan · 2014-08-19T20:35:33.146Z · LW(p) · GW(p)
Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean, in principle there's only one "area".)
↑ comment by Stuart_Armstrong · 2014-08-20T10:40:55.172Z · LW(p) · GW(p)
And is thinking in terms of that principle leading us astray in practice? After all, humans don't learn social interaction by reducing it to bit sequence prediction...
↑ comment by Cyan · 2014-08-20T14:33:30.277Z · LW(p) · GW(p)
No, we totally do... in principle.
↑ comment by [deleted] · 2014-08-20T03:30:49.213Z · LW(p) · GW(p)
Automatic construction of general interfaces would be tricky, to say the least. It would surely have to depend on why agentA needs to interface with agentB in the first place - for general data transfer (location, status, random data) it would be fine, but unless both agents had an understanding of each other's internal models/goals/thought processes, it seems unlikely that they would benefit from a transfer except at an aggregate level.
↑ comment by roystgnr · 2014-08-19T15:48:04.605Z · LW(p) · GW(p)
The theorem in Cyan's link assumes that the output of each predictor is a single prediction. If it were instead a probability distribution function over predictions, can we again find an optimal algorithm? If so then it would seem like the only remaining trick would be to get specialized algorithms to output higher uncertainty predictions when facing questions further from their "area".
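(My guess, under a log-loss scoring assumption: the natural generalisation would be the Bayes-mixture / exponentially-weighted forecaster, something like

$$\xi(x_{t+1} \mid x_{1:t}) = \sum_i w_i(t)\, P_i(x_{t+1} \mid x_{1:t}), \qquad w_i(t) \propto w_i(0)\, P_i(x_{1:t}),$$

whose cumulative log loss exceeds any expert i's by at most ln(1/w_i(0)). But I may be missing subtleties.)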
↑ comment by Stuart_Armstrong · 2014-08-20T10:37:11.747Z · LW(p) · GW(p)
Say you need to plan an expedition, like Columbus. How much time should you spend schmoozing with royalty to get more money, how much time inspecting the ships, how much time testing the crew, etc... and how do these all interact? The narrow predictors would answer domain-specific questions, but you need to be able to meld and balance the info in some way.
comment by David_Gerard · 2014-08-18T16:59:43.454Z · LW(p) · GW(p)
I note also KnaveOfAllTrades' recent post about the analogous concept of a "sports quotient".
↑ comment by gwern · 2014-08-19T01:06:02.352Z · LW(p) · GW(p)
I think a sports quotient is a bad counterexample, because it's pretty obvious there is a sports quotient: take someone who weighs 500 pounds, and another person who weighs 150; who do you think is going to win most of the time if you have them play tennis, basketball, sprinting, cross-country running, archery, soccer...? Similarly, if someone has a gimp leg, they're going to perform badly at pretty much any sport, from table tennis (gotta stand and move around) to boxing.
↑ comment by NancyLebovitz · 2014-08-19T21:32:26.552Z · LW(p) · GW(p)
Wide range of body types for Olympic athletes
A sports quotient isn't a totally crazy idea, but I think it makes more sense as a "can play a number of sports reasonably well" measurement rather than as a measure of the likelihood of achieving excellence at any sport.
I recommend The Sports Gene for an overview of the physical qualities needed for excellent performance. It used to be believed that the ideal athlete was someone with a classic intermediate build, but the more modern approach is to look for athletes whose bodies are at an optimum for particular sports.
↑ comment by gwern · 2014-08-20T18:08:26.794Z · LW(p) · GW(p)
Wide range of body types for Olympic athletes...I recommend The Sports Gene for an overview of the physical qualities needed for excellent performance.
They have a wide range of body types... for elite world-class one-out-of-millions competition, where even the tiniest differences like favorable genetic mutations make a difference. This in no way disproves the idea of an SQ, any more than grades in a math graduate course outpredicting an IQ test for who will win a Fields Medal would disprove the idea of an IQ test.
A sports quotient isn't a totally crazy idea, but I think it makes more sense as "can play a number of sports reasonably well" measurement rather than measuring the likelihood of achieving excellence at any sport.
Generally speaking, it's usually possible to devise a test for a specific field which outperforms an IQ test, adds predictive value above and beyond IQ. What's interesting about IQ is how general it is, how early in life it starts being useful, and how most good field-specific tests will subsume or partially measure IQ as well.
↑ comment by NancyLebovitz · 2014-08-21T00:15:21.855Z · LW(p) · GW(p)
The other surprising thing about IQ is how early it was invented.
↑ comment by NancyLebovitz · 2014-08-19T09:13:20.706Z · LW(p) · GW(p)
The 500 pound person would win at wrestling. I think they'd win at boxing. I'm not sure about archery.
On the other hand, the gimp leg would be a handicap for every mainstream sport I can think of.
↑ comment by gwern · 2014-08-19T17:23:38.040Z · LW(p) · GW(p)
The 500 pound person would win at wrestling.
Maybe. I don't know much about wrestling. If it were like judo, I'd guess that the extra weight just makes him fall that much harder.
I'm not sure about archery.
They wouldn't win at archery, definitely not. One of the challenges in archery is that it's tiring to lift the bow and keep it steady enough for high accuracy - even pretty fit people who first try their hand at archery will find it kills their arm and backs after 20 or 30 minutes of shooting. A fat person would have this problem in spades, as their arms get tired almost immediately and their accuracy goes to hell.
I think they'd win at boxing.
Same thing with boxing. Let the fattie tire themselves out and you can hit them with impunity. I've seen this happen in other striking martial arts.
↑ comment by NancyLebovitz · 2014-08-19T19:27:33.371Z · LW(p) · GW(p)
Both judo and wrestling have weight classes.
Thanks for the information about archery.
↑ comment by Nornagest · 2014-08-19T23:05:37.112Z · LW(p) · GW(p)
At a given weight, jujitsu and judo, and I'd guess wrestling as well, somewhat favor stocky people of short-to-medium height: short because it's generally helpful to have a low center of gravity, and stocky because it makes you more powerful for a given height and more resistant to certain techniques. But they're also extremely physically tiring. A few extra pounds might be helpful for one of several reasons, but I'd expect a 500-pound wrestler of normal height to lose more in fatigue than they gain in mass; and indeed, we don't see many competitors at 300 pounds or heavier, despite the fact that heavyweight wrestling starts at 200 and has no upper bound.
(Source: am tall, lanky jujitsuka. I haven't studied wrestling, but I've sparred with a few wrestlers.)
comment by Agathodaimon · 2014-08-29T05:24:56.984Z · LW(p) · GW(p)
Brains are like cars. Some are trucks: made for heavy hauling, but slow. Some are sedans: economical, fuel efficient, but not very exciting. Some are Ferraris: sleek, fast, and sexy, but they burn through resources like a mother. I'm sure you can come up with more of your own analogies.
comment by owencb · 2014-08-19T16:16:17.994Z · LW(p) · GW(p)
First of all, humans are very far from being general intelligences. But if we picked a random complicated Turing machine from the space of such machines, we'd probably be pretty hopeless at predicting its behaviour. We would probably score very low on the scale of intelligence used to construct the AIXI.
I wonder about this. It sounds plausible, but getting reasonable scores also seems plausible -- perhaps even more plausible to me if you allow a human with a computer. It is probably quite sensitive to permitted thinking time. (I'm assuming that the 'scale of intelligence' you talk about is Legg's AIQ.)
It is the kind of thing we could test empirically, but it's not clear that this would be a good use of resources. How decision-relevant is it for us whether humans are general intelligences?
↑ comment by Stuart_Armstrong · 2014-08-20T10:39:07.601Z · LW(p) · GW(p)
It's relevant to exposing some maybe unjustified metaphors. And, actually, if humans were generally intelligent, especially without computers, this would a) surprise me and b) be strong evidence for a single-ish scale of intelligence.
comment by [deleted] · 2014-11-12T07:49:09.670Z · LW(p) · GW(p)
There are no-free-lunch theorems showing that no computable intelligence can perform well in all environments. As far as they go, these theorems are uninteresting, as we don't need intelligences that perform well in all environments, just in almost all/most.
They are also, in an important sense, false: the No Free Lunch theorem for statistical learning assumes that any underlying reality is as likely as any other (uniform distribution). Marcus Hutter published a paper years ago showing that when you make a simple Occam's Razor assumption, using the Solomonoff Measure over reality functions instead of a uniform distribution, you do, in fact, achieve a free lunch.
And of course, the Occam's Razor assumption is well-justified by the whole line of thought going from entropy in statistical mechanics through to both information-theoretic entropy and Kolmogorov complexity, viz: a simpler macrostate ("reality function" for classification/concept learning) can "be implemented by", emerge from, many microstates, so Occam's Razor and the Solomonoff Measure work in reductionist ontologies.
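(For reference, the Solomonoff measure being invoked weights each hypothesis by the probability that a universal monotone machine U, fed uniformly random bits, produces output consistent with it - roughly

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where the sum is over minimal programs p whose output starts with x and ℓ(p) is the length of p. Simpler, shorter-program realities thus get exponentially more weight, which is exactly what blocks the uniform-distribution assumption behind the NFL construction.)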
↑ comment by Stuart_Armstrong · 2014-11-13T18:44:26.141Z · LW(p) · GW(p)
The full Solomonoff measure is uncomputable. So a real-world AI would have a computable approximation of that measure, meaning that there are (rare) worlds that punish it badly.
↑ comment by [deleted] · 2014-11-13T23:24:14.860Z · LW(p) · GW(p)
But the Free Lunch doesn't come from the optimality of Solomonoff's Measure; it comes from the fact that the measure lets you avoid giving weight to the adversarial reality functions and distributions normally constructed in the proof of the NFL Theorem.