Richard Carrier on the Singularity

post by Peter Wildeford (peter_hurford) · 2011-12-05T06:14:25.171Z · LW · GW · Legacy · 51 comments

Recently I stumbled upon Richard Carrier's essay "Are We Doomed?" (June 5, 2009), in which Carrier, asked to comment on the Singularity, said the following:

I agree the Singularity stuff is often muddled nonsense. I just don't know many advocates of it. Those who do advocate it are often unrealistic about the physical limits of technology, and particularly the nature of IQ. They base their "predictions" on two implausible assumptions: that advancement of IQ is potentially unlimited (I am fairly certain it will be bounded by complexity theory: at a certain point it just won't be possible to think any faster or sounder or more creatively) and that high IQ is predictive of accelerating technological advancement. History proves otherwise: even people ten times smarter than people like me produce no more extensive or revolutionary technological or scientific output, much less invent more technologies or make more discoveries--in fact, by some accounts they often produce less in those regards than people of more modest (though still high) intelligence.

However, Singularity fans are right about two things: machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years (if advocates predict this will happen sooner, then they are being unrealistic), and the pace of technological advancement will accelerate. However, this is already accounted for by existing models of technological advancement, e.g. Moore's Law holds that computers double in processing power every three years, Haitz's Law holds that LEDs double in efficiency every three years, and so on (similar laws probably hold for other technologies, these are just two that have been proven so far). Thus, that technological progress accelerates is already predicted. The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.

It therefore doesn't predict anything remarkable, and certainly doesn't deserve such a pretentious name. Because there will be a limit, an end point, and it won't resemble a Singularity: there is a physical limit on how fast thoughts can be thunk and how fast manufacturing can occur, quantum mechanical limits that can never be overcome, by any technology. Once we reach that point, the pace of technological advancement will cease to be geometric and will become linear, or in some cases stop altogether. For instance, once we reach the quantum mechanical limit of computational speed and component size, no further advances will be possible in terms of Moore's Law (even Kurzweil's theory that it will continue in the form of expansion in size ignores the fact that we can already do this now, yet we don't see moon-sized computers anywhere--a fact that reveals an importantly overlooked reality: what things cost).

Ironically, the same has been discovered about actual singularities: they, too, don't really exist, and for the same quantum mechanical reasons (see my discussion here).
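
For concreteness, the "laws" Carrier cites are simple exponentials; here is a minimal sketch of what such a doubling law claims (illustrative numbers only -- note the usual statement of Moore's Law is closer to a two-year doubling than Carrier's three):

```python
# Minimal sketch of a technology "doubling law":
#   quantity(t) = quantity(0) * 2^(t / T), where T is the doubling period.
# Illustrative numbers only.

def doubling_law(initial: float, years: float, doubling_period_years: float) -> float:
    """Project a quantity that doubles every `doubling_period_years` years."""
    return initial * 2 ** (years / doubling_period_years)

# Moore's Law as Carrier states it (processing power doubling every three years):
print(doubling_law(1.0, 30, 3))  # 1024.0 -- a thousandfold gain over 30 years
```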

What do you think?


51 comments

Comments sorted by top scores.

comment by MixedNuts · 2011-12-05T07:32:44.879Z · LW(p) · GW(p)

Let's not jump down his throat. It's a current evaluation from shallow research, not an expert-level essay.

I will proceed to jump down his throat.

vague claims about technology, IQ having a fundamental bound, and IQ sucking as a metric anyway

That's rather too vague to analyze.

If being really smart won't help you (on real-life instances, not just asymptotically) because you're jumping up the complexity hierarchy, there's still a lot to be gained from improving heuristics, looking into increasingly specialised heuristics, and just throwing more power at the problem. But we don't have a model detailed enough to provide a bound at all!

Singularity fans are right about two things: machines will outthink humans within fifty to a hundred years, and the pace of technological advancement will accelerate.

Okay, either he's agreeing with Singularitarians but doesn't want to admit it, or he expects tech will run into a wall really fast for no specified reason.

this is already accounted for by existing models of technological advancement, e.g. Moore's Law

...nobody is denying that surface laws like these exist. Singularitarians are claiming that there are deeper reasons why these models are and stay true. Next he's going to tell us that Newton's laws are useless because we already have a parabolic model of freefall.

The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.

Ehn, two schools out of three ain't bad.

It therefore doesn't predict anything remarkable

If creating the smartest thing in the universe is unremarkable, I want to see what impresses Carrier.

certainly doesn't deserve such a pretentious name

I have to back him on that one.

there will be a limit, an end point

What is wrong with people that makes them understand "a bound exists" as "the bound is smallish"?

we can already do this now, yet we don't see moon-sized computers anywhere--a fact that reveals an importantly overlooked reality: what things cost

...yes, we don't see moon-sized computers because they're more expensive for the same performance gain than reducing and speeding up individual components. When those avenues are exhausted, it will become much more economically viable to make huge computers.

Replies from: None, lessdazed
comment by [deleted] · 2011-12-05T07:38:28.915Z · LW(p) · GW(p)

.

comment by lessdazed · 2011-12-06T20:09:24.648Z · LW(p) · GW(p)

What is wrong with people that makes them understand "a bound exists" as "the bound is smallish"?

Modus tollens: "no small bound exists" --> "no bound exists", e.g. life extension amounts to immortality (but immortality is physically impossible, so life extension must be too).

comment by Nisan · 2011-12-05T17:14:06.414Z · LW(p) · GW(p)

Hopefully, someday soon, all intellectuals will agree that

  • nonhuman things will greatly outperform biological human intelligences in all domains;
  • there is a significant chance that all important decisions until the end of time will be made by nonhuman intelligences;
  • these things will happen within decades or centuries, unless civilization collapses; and
  • that "Singularity" idea, whatever it was, was all wrong.
Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2011-12-05T20:26:07.504Z · LW(p) · GW(p)

I don't get it.

Replies from: Nisan
comment by Nisan · 2011-12-05T21:38:32.906Z · LW(p) · GW(p)

The first three bullet points in the grandparent are the most important ideas associated with the Singularity that everyone needs to know about. They accord with the Singularity Institute's predictions, with the core claims of Yudkowsky's three Singularity "schools", and with Robin Hanson's ems scenario.

When I read Carrier's first sentence ("I agree the Singularity stuff is often muddled nonsense."), I assumed he would be skeptical of those three points in one of the usual ways. But instead he actually affirms two of them:

machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years

(He doesn't address my second bullet point.) It's not clear what the "Singularity" means to him; but from his criticisms, it seems to have something to do with IQ, and whether machine intelligences will be AIs or uploads or whatever.

My point is that if and when the Singularity Institute succeeds in convincing everyone of the first three bullet points in the grandparent, it will still be fashionable to dismiss the Singularity hypothesis, because everyone has their own strawman version of what the Singularity is.

Replies from: Nisan
comment by Nisan · 2011-12-05T21:57:52.195Z · LW(p) · GW(p)

I heard David Pearce speak recently, and he mentioned in passing that he is "not a starry-eyed Singularitarian", and by this he seemed to mean that he thought brain uploading was infeasible. But elsewhere in the talk he spoke casually of utilitronium shockwaves and jupiter brains and pleasure plasma.

I don't even know what the Singularity is anymore. In fact, I never did. I suspect that the disclaimer "I am not a Singularitarian" means "I am not Eliezer Yudkowsky circa 1999, although everything he says nowadays is quite reasonable."

Replies from: peter_hurford, cousin_it
comment by Peter Wildeford (peter_hurford) · 2011-12-06T00:20:26.733Z · LW(p) · GW(p)

Ah, I get it now! I think Carrier's argument was that computing power will not exceed what Moore's Law already predicts, though I'm not exactly sure why, or how that disproves the Singularity.

comment by cousin_it · 2011-12-06T12:41:09.546Z · LW(p) · GW(p)

and by this he seemed to mean that he thought brain uploading was infeasible. But elsewhere in the talk he spoke casually of utilitronium shockwaves and jupiter brains and pleasure plasma

Wait, what? Isn't brain uploading obviously easier than those other things?

Replies from: Nisan
comment by Nisan · 2011-12-06T16:27:57.937Z · LW(p) · GW(p)

I can't speak for him, but he possibly meant that brain uploading won't be feasible for a while, and the posthuman era will be ushered in with implants, gene therapy, drugs, etc.

comment by shminux · 2011-12-05T17:47:30.873Z · LW(p) · GW(p)

quantum mechanical limits that can never be overcome, by any technology

That's a confident Lord Kelvin-like statement about physics, and should be treated as a failure of imagination. Physicists agree that they still do not understand the elusive boundary between quantum and classical, so talking about some unmovable limit in that area is pretty silly. Then again, Richard Carrier is a historian, not a physicist.

For example, if you subscribe to the MWI model, gaining access to a googolplex of worlds created every femtosecond and harnessing their computational resources would effectively remove anything resembling a computational speed limit.

Another example: we might discover that around the Planck scale the world consists of something like unparticles, and their scale-invariance would allow us to miniaturize without a bound.

Given that it took me two minutes to come up with two (admittedly far-fetched) examples of how the "no further advances will be possible in terms of Moore's Law" statement could be wrong, and I am not even an expert in the area, I would discount all his predictions as too poorly thought through to care about, until and unless proven otherwise.

Replies from: Manfred
comment by Manfred · 2011-12-05T23:37:47.037Z · LW(p) · GW(p)

For example, if you subscribe to the MWI model, gaining access to a googolplex of worlds created every femtosecond and harnessing their computational resources would effectively remove anything resembling a computational speed limit.

So, you can think of this as what quantum computers do, and there's still a pretty normal speed limit. Because all (traditional) interpretations of quantum mechanics run off the exact same math, a good test to apply in these cases is that if it only seems to work in one interpretation, you've probably made a mistake.

And of course, unlike Lord Kelvin's famous claim, we didn't have to discover any new and unexpected physics to build heavier-than-air flying machines. Carrier's statement is literally correct, then - technology will not get you around quantum-mechanical limits, such as they are.

Replies from: shminux
comment by shminux · 2011-12-06T03:09:41.441Z · LW(p) · GW(p)

Because all (traditional) interpretations of quantum mechanics run off the exact same math, a good test to apply in these cases is that if it only seems to work in one interpretation, you've probably made a mistake.

True enough; I was referring to the next breakthrough in quantum physics, which, in my estimation, is likely to happen before we reach the current quantum limits, when the interpretations might actually become useful models.

we didn't have to discover any new and unexpected physics to build heavier-than-air flying machines.

Sometimes technological advances are also unexpected. Remember when 9600 bps was considered the limit for phone lines? We are 20000 times faster than that now.

Replies from: Dentin
comment by Dentin · 2011-12-07T16:08:22.402Z · LW(p) · GW(p)

Actually, we're only about five times faster than that, and the real (Shannon) limit for analog phone lines is somewhere in the 60-100 kbps range. It's not fair to compare a modulation capable of using megahertz of bandwidth on a short unfiltered line with a modulation designed explicitly to be bounded by a 3 kHz voice path and work across a pure analog channel hundreds or thousands of kilometers in length.

Some background:

9600 bps modems used around 2.7 kHz of bandwidth, and had low-grade error correction that allowed fairly reliable connectivity but didn't have a huge coding gain. This was extended up to 19200 bps using V.32ter, but that never actually caught on as a standard and work on V.34 began.

V.34, for all its problems and imperfections, to this day remains a work of art. It can use almost the entire available spectrum of an analog line (all of 3.5 kHz or so), can compensate for many different types of line distortion, and automatically adjusts to changing line conditions to maintain its connection. It really is a piece of high technology software - and its primary design criterion is to push bits reliably through a comm channel that's not a whole lot better than two tin cans and some string.

(Note that V.90 and V.92 are faster than V.34, but they are digital standards, and make use of much stronger constraints on line quality. They also directly operate on digital data instead of having to do an extra A/D transform. The techniques and assumptions used in these standards are very different from V.34 and allow higher data rates, but when those assumptions are violated, V.90/92 fall back to V.34.)

The max data rate for V.34 is 33.6 kbps. There are a lot of improvements that could be made to V.34 with modern technology, the most significant of which would be use of better error correction. But even with all the resources of mankind thrown at the problem, I would be shocked if we could double the average data rate without loosening the channel constraints.
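
A back-of-the-envelope check on that Shannon limit (a sketch; the bandwidth and SNR figures below are assumptions for illustration, not measurements of any real line):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical analog voice channel: ~3.5 kHz of usable bandwidth.
for snr_db in (30, 40, 45):
    print(f"SNR {snr_db} dB -> ~{shannon_capacity_bps(3500, snr_db) / 1000:.1f} kbps")
# ~34.9, ~46.5, ~52.3 kbps: V.34's 33.6 kbps already sits near the capacity of
# a ~30 dB line, which is why large further gains would require loosening the
# channel constraints.
```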

Replies from: shminux
comment by shminux · 2011-12-07T17:56:05.901Z · LW(p) · GW(p)

I agree with your caveats, but my point was, to quote Wikipedia, "For many years, most engineers considered this rate to be the limit of data communications over telephone networks." Yet it only took some extra technology, and no new advances in physics, to increase the effective throughput by 4 orders of magnitude (and counting). It might well happen that the apparent quantum limit on computer performance is only a technological obstacle, not a fundamental one.

comment by Logos01 · 2011-12-05T09:13:26.279Z · LW(p) · GW(p)

History proves otherwise: even people ten times smarter than people like me produce no more extensive or revolutionary technological or scientific output,

I will go out on a limb and assert that this man has a higher-than-average IQ. However, for his statement to be true he would have to be what some call "profoundly mentally retarded". That is, someone with an IQ below 25. To my knowledge, there have been an exceedingly small number of individuals in the range of 10x that IQ score -- amongst them the highest IQ yet recorded. So there are real problems of scale in his underlying assumptions.

Replies from: rwallace, cousin_it, None
comment by rwallace · 2011-12-05T15:56:07.087Z · LW(p) · GW(p)

Only if you take 'ten times smarter' to mean multiplying IQ score by ten. But since the mapping of the bell curve to numbers is arbitrary in the first place, that's not a meaningful operation; it's essentially a type error. The obvious interpretation of 'ten times smarter' within the domain of humans is by percentile, e.g. if the author is at the 99% mark, then it would refer to the 99.9% mark.

And given that, his statement is true; it is a curious fact that IQ has diminishing returns, that is, being somewhat above average confers significant advantage in many domains, but being far above average seems to confer little or no additional advantage. (My guess at the explanation: first, beyond a certain point you have to start making trade-offs from areas of brain function that IQ doesn't measure; second, Amdahl's law.)
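
To make the percentile reading concrete, here's a quick sketch using the standard mean-100, SD-15 norming and scipy's normal distribution:

```python
from scipy.stats import norm

MEAN, SD = 100, 15  # standard IQ norming

def percentile_to_iq(p: float) -> float:
    """IQ score at a given population percentile."""
    return norm.ppf(p, loc=MEAN, scale=SD)

# "Ten times smarter" read as a percentile jump, 99% -> 99.9%:
print(percentile_to_iq(0.99))   # ~134.9
print(percentile_to_iq(0.999))  # ~146.4
# A tenfold jump in rarity buys only ~11.5 IQ points here.
```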

Replies from: Manfred, Cthulhoo, Emile, torekp
comment by Manfred · 2011-12-05T23:43:01.876Z · LW(p) · GW(p)

I agree, that's likely what Carrier was feeling when he wrote that sentence. But that doesn't let him off the hook, because that way is even worse than Logos'! He's using a definition of "times more intelligent" that is only really capable of differentiating between humans, and trying to apply it to something outside that domain.

comment by Cthulhoo · 2011-12-05T16:19:20.199Z · LW(p) · GW(p)

Amdahl's law

I'm not sure if the following is already encompassed in Amdahl's law, but I think it's worth a comment. Very intelligent humans still need to operate through society to reach their goals. An IQ of 140 may be enough for you to discover and employ the best tools society puts at your disposal. An IQ of 180 (just an abstract example) may let you recognize new and more efficient patterns, but you then have to bend society to exploit them, and this usually means convincing people not as smart as you, who may very well take a long time to grasp your ideas.

As an analogy, think of being sent back to the stone age. A Swiss Army knife there is a very useful tool. It's not a revolutionary concept; it's just better than stone knives at cutting meat and working with wood. On the other hand, a set of professional electrical tools, while in principle way more powerful, will be completely useless until you find a way to charge their batteries.

comment by Emile · 2011-12-06T10:07:14.049Z · LW(p) · GW(p)

Yup, that's the way I interpreted it too - going from top 1% to top 0.1%.

comment by torekp · 2011-12-06T02:27:40.953Z · LW(p) · GW(p)

To me a more natural interpretation from a mathematical POV would use log-odds. So if the author is at the 90% mark, someone 10 times as smart occurs at a frequency of around 1 in 3 billion.

But yeah. In context, your way makes more sense, if only because it's more charitable.
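
For what it's worth, the log-odds arithmetic works out like this (a sketch):

```python
import math

p = 0.90                            # author at the 90% mark
log_odds = math.log(p / (1 - p))    # ln(9) ~ 2.197
odds_10x = math.exp(10 * log_odds)  # e^21.97 ~ 3.5e9
print(f"about 1 in {odds_10x:.1e}")
# Roughly 1 in 3.5 billion -- consistent with the "around 1 in 3 billion" above.
```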

comment by cousin_it · 2011-12-05T13:08:46.479Z · LW(p) · GW(p)

IQ is renormalized to the bell curve by definition, so multiplying it by 10 isn't guaranteed to be a meaningful operation. And since we have no other way to measure intelligence, it's not clear what Carrier meant by "10 times smarter". For some easy interpretations (e.g. 10x serial speed or 10x parallelism) his claim seems trivially wrong.

Replies from: Richard_Kennaway, Manfred, Logos01
comment by Richard_Kennaway · 2011-12-05T16:29:49.221Z · LW(p) · GW(p)

"10 times" just means "a lot". I'm more curious about what Carrier meant by "smart".

Replies from: Desrtopa
comment by Desrtopa · 2011-12-05T19:11:07.001Z · LW(p) · GW(p)

It is a simple way of expressing "a lot," but it's also one that immediately raises the question "is there any meaningful sense in which anyone that smart has actually existed?"

Of course, when Carrier claims that the most remarkably intelligent people do not tend to be the most productive, while it's clear what kind of individuals he has in mind, the obvious next question is "can we design machines that use their intelligence more productively than humans?" Considering how human brains actually work, this sounds like much less of a tall order than making AI that are more intelligent in a humanlike way.

comment by Manfred · 2011-12-05T23:47:40.277Z · LW(p) · GW(p)

Well, central limit theorem says it's mostly a bell curve among humans (you could make a case for a bigger tail on the low end, but still mostly a bell curve). And you can always identify "0" with a random number generator. So multiplying by 10 seems okay to me.

Replies from: satt
comment by satt · 2011-12-06T03:31:59.830Z · LW(p) · GW(p)

Well, central limit theorem says it's mostly a bell curve among humans

Only subject to some major assumptions.

Replies from: Manfred
comment by Manfred · 2011-12-06T03:51:48.228Z · LW(p) · GW(p)

Not that major. The assumptions are that there are many small, independent things that affect intelligence. These assumptions are wrong, in that there are many things that do not have a small effect at all. But to the extent that these (mostly bad things) are rare, you'll just see a bell curve with slightly larger tails.

Replies from: cousin_it
comment by cousin_it · 2011-12-06T12:36:47.743Z · LW(p) · GW(p)

Why can we assume that all the little things affect intelligence independently? Are synergies obviously rare, and how rare do they have to be for the central limit theorem to apply? In the simplest alternative model I can think of, incremental advances could be multiplicative instead of additive, which gives a log-normal distribution instead of a bell curve. This case is uninteresting because you could just say you're measuring e^intelligence instead of intelligence, but I can imagine more complicated cases.
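
A toy simulation of the two aggregation models (a sketch; the factor count and effect size are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_factors = 10_000, 500
effects = rng.normal(0.0, 0.01, size=(n_people, n_factors))

additive = effects.sum(axis=1)                   # CLT: approximately normal
multiplicative = np.prod(1.0 + effects, axis=1)  # approximately log-normal

# The log of the multiplicative score is itself a sum of small terms,
# so it is the *log* that comes out bell-shaped:
print(np.round([additive.std(), np.log(multiplicative).std()], 4))
```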

Replies from: DanielVarga, Manfred
comment by DanielVarga · 2011-12-06T22:37:35.047Z · LW(p) · GW(p)

Side note: I think it is not well known that for the quintessential normally distributed random variable, human height, the lognormal distribution is in fact an equally good fit. And on the other end of the variance spectrum: I became biased toward the lognormal distribution when I observed that it is a much better fit for social network degree distributions than the much-discussed power-law. It is a very versatile thing.

comment by Manfred · 2011-12-06T22:17:46.015Z · LW(p) · GW(p)

Good point.

comment by Logos01 · 2011-12-05T13:34:52.707Z · LW(p) · GW(p)

IQ is renormalized to the bell curve by definition, so multiplying it by 10 isn't guaranteed to be a meaningful operation. And since we have no other way to measure intelligence, it's not clear what Carrier meant by "10 times smarter".

Well... IQ is meant to be a direct quantification of raw "intellectual capacity". So while its distribution is relative, given the history of tests thus far, it still remains a quantification. But, that being said, this only further exacerbates the point I'm really getting at here: the 'logic' the man used is... fuzzy.

Replies from: billswift
comment by billswift · 2011-12-06T00:06:07.270Z · LW(p) · GW(p)

IQ is meant to be a direct quantification of raw "intellectual capacity".

No it isn't; it is a framework for relative rankings. Developing some means of "direct quantification" would be a major intellectual achievement, which as a first step would require a good definition of intelligence. I have been thinking about this, and while there are quite a few useful definitions of intelligence out there, they each have notable weaknesses; we are a long way from a good definition.

Just as thermometers are a tool that measures temperature as relative degrees, a serious understanding and definition of heat waited on the development of the statistical theory of molecular motions.

Replies from: Logos01
comment by Logos01 · 2011-12-06T02:20:03.827Z · LW(p) · GW(p)

Just as thermometers are a tool that measures temperature as relative degrees,

Amusing -- those are a direct quantification of temperature. Degrees Celsius, for example, convert to kelvins rather well. They use arbitrarily fixed points above zero K -- but the IQ scale does not do this.

Now, of course, IQ is not g. And we have no means of quantifying g.

I think maybe you are under the misapprehension that by "intellectual capacity" I was saying "intelligence". If I had meant "intelligence" I would have said "intelligence".

comment by [deleted] · 2011-12-05T13:57:22.305Z · LW(p) · GW(p)

This is a needlessly pedantic response to a comment which can be dissected in many other ways.

Replies from: Logos01
comment by Logos01 · 2011-12-05T14:54:06.145Z · LW(p) · GW(p)

It was the first thing that stood out to me, quite frankly, and it seemed a rather fundamental criticism of the clarity of the author's thought: the vast majority of his position -- it seemed to me -- rested upon a notion that was faulty, as my original posting showed.

Any other 'dissection' seems entirely unnecessary, in my eyes, given this.

Replies from: None
comment by [deleted] · 2011-12-07T09:40:36.674Z · LW(p) · GW(p)

It isn't obvious to you that this is a fairly off-the-cuff response, and that "10 times" is used in a slightly colloquial way to mean "a lot more"?

Replies from: Logos01
comment by Logos01 · 2011-12-07T15:07:18.431Z · LW(p) · GW(p)

It isn't obvious to you that off-the-cuff responses reveal underlying biases and assumptions as well as -- if not better than -- deeply-thought-out ones?

The very fact that "10 times as smart" is intelligible as merely "a lot more" requires certain underlying assumptions about the available space of intelligence, and that addresses the very fundamental assumptions of his writing.

Replies from: wedrifid
comment by wedrifid · 2011-12-07T15:31:23.665Z · LW(p) · GW(p)

It isn't obvious to you that off-the-cuff responses reveal underlying biases and assumptions as well as -- if not better than -- deeply-thought-out ones?

Declaring that "10 times as smart" must be a reference to IQ points and then proceeding to attempt to back that interpretation up despite the absurdity reveals something a whole lot more significant than a simple reference to "10 times as smart".

Replies from: Logos01
comment by Logos01 · 2011-12-07T15:58:03.306Z · LW(p) · GW(p)

Declaring that "10 times as smart" must be a reference to IQ points

  1. I did nothing of the sort.

  2. IQ is a standard measure of "smartness".

then proceeding to attempt to back up the rather absurd judgement reveal something a whole lot more significant than a simple reference to "10 times as smart".

Would you care to make a complete thought out of this?

Replies from: Desrtopa, wedrifid
comment by Desrtopa · 2011-12-07T16:14:12.484Z · LW(p) · GW(p)

What you said was

I will go out on a limb and assert that this man has a higher-than-average IQ. However, for his statement to be true he would have to be what some call "profoundly mentally retarded". That is, someone with an IQ below 25. To my knowledge, there have been an exceedingly small number of individuals in the range of 10x that IQ score -- amongst them the highest IQ yet recorded.

which suggests that you believed that "ten times as smart" must map to "ten times the IQ score." To go back to the thermometer reading issue you responded to earlier, yes, a thermometer reading corresponds directly to a scalar quantity, but ordinary thermometer readings aren't in Kelvin, and neither do IQ tests measure from zero intelligence at zero IQ. Even if we assume that intelligence is a quantity that progresses linearly along the IQ scale (unlikely), mapping "ten times as smart as IQ 25" to "IQ 250" would be rather like mapping "ten times as hot as the reading of 12 degrees C on this thermometer" to "120 degrees C."
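
To make the thermometer analogy concrete (a sketch):

```python
def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

# "Ten times as hot as 12 degrees C", computed naively on the Celsius scale:
naive_c = 10 * 12  # 120 C -- meaningless: Celsius has an arbitrarily placed zero
# Computed on the Kelvin scale, which has a true zero:
true_c = 10 * celsius_to_kelvin(12) - 273.15  # ~2578 C
print(naive_c, round(true_c, 1))
# Ratios only make sense on a scale with a true zero; IQ, like Celsius, is
# normed to an arbitrary reference point, so "10x the IQ score" inherits the
# same problem.
```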

Replies from: wedrifid, Logos01
comment by wedrifid · 2011-12-07T16:34:25.492Z · LW(p) · GW(p)

mapping "ten times as smart as IQ 25" to "IQ 250" would be rather like mapping "ten times as hot as the reading of 12 degrees C on this thermometer" to "120 degrees C."

Or, for that matter, mapping "ten times as hot as a 6" to a 60.

comment by Logos01 · 2011-12-07T16:37:54.443Z · LW(p) · GW(p)

which suggests that you believed that "ten times as smart" must map to "ten times the IQ score."

To the extent that IQ is the only quantified form of intelligence yet known, yes, that's absolutely true.

, but ordinary thermometer readings aren't in Kelvin,

Celsius is Kelvin + a number. Fahrenheit is Kelvin + a number + ratio conversion. This is a total non-starter for your position.

and neither do IQ tests measure from zero intelligence at zero IQ.

0 IQ, however, does have an intelligible meaning. This again is a total non-starter. Nothing in the observable universe exists at 0 K. We "measure" above 0 K. No intelligent being actually has 0 IQ. We "measure" above 0 IQ.

Kelvin is quantified temperature relative to absolute zero. IQ is quantified intelligence relative to zero where the units are adjusted to fit the current history of measurements.

mapping "ten times as smart as IQ 25" to "IQ 250" would be rather like mapping "ten times as hot as the reading of 12 degrees C on this thermometer" to "120 degrees C."

No, it would be exactly like asserting that 120K is 10x the temperature of 12K.

If you read the link I originally gave, you'll note that "profoundly mentally retarded" goes from zero to twenty-five.

Zero, in this case, means without intellect at all.

Replies from: Desrtopa
comment by Desrtopa · 2011-12-07T17:34:04.973Z · LW(p) · GW(p)

An IQ of 0 corresponds to 6.66 standard deviations below the mean. It's functionally unmeasurable (when a person is too stupid to take even the tests specially calibrated for people with exceptionally low intelligence, their IQ is too low to quantify), but an IQ of 0 does not correspond to "zero intellect." "Profound mental retardation" has no defined lower limit, only an upper one. You can also check the link you provided yourself, and you will find that it does not actually make any mention of a lower limit of zero; it defines profound mental retardation as being "<= 20-25"

An IQ of 100 does not correspond to 100 Intelligence Units, where an entity with zero Intelligence Units has no intelligence; it is simply the defined average of the population, and IQ tests are re-normed to have an average of 100 when the average intelligence changes. IQ points are meant to define where a person falls on the normal curve of human intelligence, not quantify intelligence on an absolute scale.

Replies from: wedrifid, Logos01
comment by wedrifid · 2011-12-07T17:38:59.614Z · LW(p) · GW(p)

"Profound mental retardation" has no defined lower limit, only an upper one.

I suppose that would have to be "Dead and all matter in a state of maximum entropy".

comment by Logos01 · 2011-12-07T18:10:32.294Z · LW(p) · GW(p)

You can also check the link you provided yourself, and you will find that it does not actually make any mention of a lower limit of zero; it defines profound mental retardation as being "<= 20-25"

Of course it does. That conforms to the same standard.

but an IQ of 0 does not correspond to "zero intellect." "Profound mental retardation" has no defined lower limit, only an upper one.

Do me a favor. Find an instance of a person with a zero or a negative IQ score. Then this will be meaningful.

An IQ of 0 corresponds to 6.66 standard deviations below the mean. It's functionally unmeasurable

Yup.

An IQ of 100 does not correspond to 100 Intelligence Units, where an entity with zero Intelligence Units has no intelligence; it is simply the defined average of the population, and IQ tests are re-normed to have an average of 100 when the average intelligence changes.

What exactly makes you believe these are mutually exclusive statements? That the quantification itself is adjusted speaks to the rule-standard, not to the invalidity of the notion of an absolute zero.

IQ points are meant to define where a person falls on the normal curve of human intelligence, not quantify intelligence on an absolute scale.

Again: what exactly gives you the notion that these are mutually exclusive?

Replies from: Desrtopa
comment by Desrtopa · 2011-12-07T18:46:41.370Z · LW(p) · GW(p)

The two are not mutually exclusive; if we knew the relationship between the null point for intelligence and the human average, we could norm the test so that average was defined as 100 and 0 was defined as no intelligence, but we don't, and if we did that then we would no longer have a definitional standard deviation of 15.

A less misleading way to express IQ scores would be to norm them to 0, to make it clear that they represent deviation above and below the mean and exist without reference to the null point for intelligence.

Do me a favor. Find an instance of a person with a zero or a negative IQ score. Then this will be meaningful.

Such an individual would be rarer than one in twenty billion, as would an individual with IQ over 200.
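
Checking that rarity figure (a sketch using scipy and the standard mean-100, SD-15 norming):

```python
from scipy.stats import norm

z = (0 - 100) / 15      # IQ 0 (or, symmetrically, IQ 200): ~6.67 SD from the mean
p = norm.sf(abs(z))     # one-sided tail probability
print(p, round(1 / p))  # ~1.3e-11, i.e. about 1 in 75 billion --
                        # comfortably rarer than one in twenty billion
```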

Replies from: Logos01
comment by Logos01 · 2011-12-07T19:18:58.628Z · LW(p) · GW(p)

Such an individual would be rarer than one in twenty billion, as would an individual with IQ over 200.

Multiple such individuals (of the latter category) are on record. I linked originally to a woman with an IQ of 230.

Why, then, have no 0 or negative individuals ever been recorded, if it is purely a question of how far one deviates from the 100 mark?

but we don't, and if we did that then we would no longer have a definitional standard deviation of 15.

That's absurd. We would always need a metric standard; a 'measuring stick' against which to determine the units of quantification. That quantification is where the definition of 100 +/- one standard deviation comes from, and why it is useful. This is tiresome: 100 is average, and 0 is non-intelligent, and we do still need the definitional standard of the standard deviation. For the same reason that we also have a specific object that masses one kilogram. It's how the quantification is defined.

Or are you going to argue that because we use a class of observations (with error estimates) for mass, that means that an object with zero mass doesn't have no mass?

0 IQ "means" "no intelligence". A quotient is a term of quantification. Having a quotient score of zero means said object is quantified at zero.

That's a way of saying "none". IQ == 0 "means" "non-intelligent." They're synonymous expressions!

Replies from: Desrtopa
comment by Desrtopa · 2011-12-07T20:19:59.826Z · LW(p) · GW(p)

Multiple such individuals (of the latter category) are on record. I linked originally to a woman with an IQ of 230.

Such scores have been issued, but are widely regarded as nonsense, and Marilyn vos Savant's 200+ score is no longer given credence in the record books. The old formula (mental age divided by chronological age x 100) allowed for a number of individuals with scores over 200, and did not allow for negative scores, but it was flawed in many ways and has been discarded, and no individual has received a score over 200 from a proper application of the IQ test since then.

This is tiresome: 100 is average, and 0 is non-intelligent, and we do still need the definitional standard of the standard deviation. For the same reason that we also have a specific object that masses one kilogram. It's how the quantification is defined.

Show me where such a definition is laid out.

Deviation measurements and absolute measurements both serve their purposes, but we don't have any absolute measurements for intelligence. The IQ test is not and was never intended to be an absolute measurement of intelligence in the way that newtons are a measurement of force. Comparing IQ to temperature, it's like defining the average particle kinetic energy in a vessel to be 100, with one standard deviation in kinetic energy being 15, without knowing what the temperature inside the vessel is.

comment by wedrifid · 2011-12-07T16:44:44.059Z · LW(p) · GW(p)

Would you care to make a complete thought out of this?

There was a missing "s" at the end of "reveal"; apart from that, it was correctly formed (if inelegant) as stated.

Replies from: Logos01
comment by Logos01 · 2011-12-07T16:51:51.288Z · LW(p) · GW(p)

Alright, then, a few questions.

  1. At what point did I assert that "10 times as smart" must be a reference to IQ, as opposed to using IQ to illustrate the point made?

  2. What exactly is so absurd about even that? g and IQ are correlated, especially at the lower numbers of IQ.

  3. What exactly is this thing that is being revealed by this "absurdity"?

comment by lessdazed · 2011-12-06T20:25:12.177Z · LW(p) · GW(p)

History proves otherwise...(similar laws probably hold for other technologies, these are just two that have been proven so far)...that technological progress accelerates is already predicted. The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.

Thinking in these terms would be confused, and it's a bad sign that he's speaking in them. The patterns found at this very high level of abstraction don't deserve the label of "laws" and shouldn't be thought of as such.