Comments

Comment by Aeonios on Reply to Holden on The Singularity Institute · 2012-07-21T09:22:05.612Z · LW · GW

I reread that section, and you are correct: since we are not told whether she is a feminist, feminism cannot be used as a criterion for determining whether she is a bank teller. However, I would say that the example, in typical public-education style, is loaded and invites an incorrect answer. Since the only data given is insufficient to draw any conclusions, the participant is led to speculate without understanding the limitations of the question.
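
To see the rule itself in isolation, here is a minimal sketch (the population counts are invented purely for illustration): however the numbers are filled in, the feminist bank tellers are a subset of the bank tellers, so the conjunction can never be the more probable event.

```python
# Toy illustration of the conjunction rule, with made-up counts.
population = 1000
bank_tellers = 50            # hypothetical count of bank tellers
feminist_bank_tellers = 40   # hypothetical subset who are also feminists

p_teller = bank_tellers / population                        # 0.05
p_teller_and_feminist = feminist_bank_tellers / population  # 0.04

# The conjunction describes a subset of the single event, so this
# inequality holds regardless of the particular numbers chosen:
assert p_teller_and_feminist <= p_teller
```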

As for the "utility function", there are at least five reasons why it is not just wrong, but entirely impossible.

1: Utility is heterogeneous. Which gives you more "utility": a bowl of ice cream or a chair? The question itself is nonsensical; the quality and type of utility gained from a chair and from a bowl of ice cream are entirely different.

2: Utility is complementary. If I own a field, the field by itself may be useless to me. Add a picnic table and some food, and suddenly the field gains utility beyond that of the food, the table, and the field individually. Perhaps I could run horses through the field, or add some labor and intelligent work and turn it into a garden, but the utility I get from it depends on my preferences (which may change) and on its combination with other resources and a plan. As another example, a person who owns a yacht would probably get more "utility" out of going to the ocean than someone who does not.

3: Utility is marginal. For the first three scoops of ice cream, I'd say I get equal "utility" from each. The fourth scoop yields noticeably less "utility" than the previous three, and by the fifth the utility becomes negative, as I feel sick afterwards. By six scoops I'm throwing ice cream away. On the other hand, if I have 99 horses, gaining or losing one would not make much difference to the utility I get from them, but if I have only 2 horses, losing one could mean losing more than half of my utility. Different things have different useful quantities in different situations, depending on how they are used (a toy sketch after this list makes this explicit).

4: Utility cannot be measured. This should be obvious. Even if we invented a magical brain scanner that could measure brain activity in high resolution in vivo, utility is not always the same for the same thing every time it is experienced, and you still have the apples-and-oranges problem that makes the comparison meaningless to begin with.

5: Human psychology is not merely a matter of using logic correctly or incorrectly. In this case there is definitely a misapplication, but it seems the only psychology that gets any attention around here consists of anecdotes from college textbooks on decision-making and some oversimplified mechanistic theorizing from neuroscience. You talk about anchoring as if it were some horrible disease, when it is the same fundamental process required for memory and for the mastery of concepts. You've probably heard of dissociation, but you probably wouldn't believe me if I told you that memory can be flipped on and off like a light switch at the whim of your unconscious.
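
As promised under point 3, here is a toy sketch of the marginal-utility argument (every number is invented for illustration): the "utility" of each additional scoop is not constant and eventually turns negative, so no single per-unit figure can summarize the good.

```python
# Hypothetical per-scoop "utility" of ice cream: the first scoops are
# equally enjoyable, later ones add less, and eventually an extra
# scoop makes things worse (you feel sick).
marginal_utility = [10, 10, 10, 4, -3, -8]  # made-up values per scoop

running_total = 0
for scoop, mu in enumerate(marginal_utility, start=1):
    running_total += mu
    print(scoop, mu, running_total)  # total peaks at three scoops
```

The same shape explains the horses: losing one of 99 barely changes the margin, while losing one of 2 destroys most of the total.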

That aside, treating intelligence as a machine that optimizes things misses the entire point of intelligence. If you had ever read Douglas Hofstadter's "Gödel, Escher, Bach" or Christopher Alexander's "The Nature of Order" series, you might have a greater appreciation for the role that abstract pattern recognition and metaphor play in intelligence.

Finally, I read two "papers" from SI and found them entirely unprofessional. They were both full of vague terminology and unjustified assertions, and were written in a colloquial style that practically begs the reader to believe the crap they're spewing. You get lots of special graphs showing how a superhuman AI would be something like two orders of magnitude more intelligent than humans, but no justification for how these machines will magically produce the economic resources needed to reach that level of development "overnight". Comparing modern "AIs" to mice is probably the most absurd fallacy I've seen thus far. Even the most sophisticated AI for driving cars cannot drive on a real road; its "intelligence" is still less sophisticated overall than a honey bee's, and the equipment required to produce its rudimentary driving skills far outweighs the benefits. Computer hardware may improve regularly according to Moore's Law, but the field of AI research does not, and there is no evidence that we will see a jump in computer intelligence from below insects to above orangutans any time soon. And when we do, it will probably take another 50-100 years before machines fully surpass the orangutan level.

Comment by Aeonios on Reply to Holden on The Singularity Institute · 2012-07-20T06:23:27.654Z · LW · GW

I decided to read through the essays on facingthesingularity, and I found more faults than I care to address. I can also see why you might think the workings of the human mind are simple, given that the general attitude here is that you should go around maximizing your "utility function". That is utter and complete nonsense, for reasons that deserve their own blog post. What I see more than anything is a bunch of ex-Christians worshipping their newfound hypothetical machine god, and doing so by lowering themselves to the level of the machine rather than raising the machine to the level of man.

I'll give one good example to make clear what I mean (from facingthesingularity):

> But that can’t possibly be correct. The probability of Linda being a bank teller can’t be less than the probability of her being a bank teller and a feminist.
>
> This is my “Humans are crazy” Exhibit A: The laws of probability theory dictate that as a story gets more complicated, and depends on the truth of more and more claims, its probability of being true decreases. But for humans, a story often seems more likely as it is embellished with details that paint a compelling story: “Linda can’t be just a bank teller; look at her! She majored in philosophy and participated in antinuclear demonstrations. She’s probably a feminist bank teller.”

But the thing is, context informs us that while a philosophy major is unlikely to work for a bank, a feminist is much more likely to work a "pink collar" job such as secretarial work or as a bank teller, where she can use the state to monger for positions, pay, and benefits above and beyond what she deserves. A woman who would otherwise have no interest in business or finance, when indoctrinated by the feminist movement, will leap to take a crappy office job so she can raise her fist in the air in unionistic fashion against the horrible man-oppression she righteously defeated with her superior woman's intellect. The simple fact that "philosophy" in a modern school amounts to "the history of philosophy", and is utterly useless, might also clue one in to a person's integrity or lack thereof, although of course it isn't conclusive.

In short, impressive "logical" arguments about how the probability of a conjunction can never exceed the probability of either conjunct can only be justified in a vacuum without context, a situation that does not exist in the real world.
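
For reference, here is the formal relationship under discussion, written out with hypothetical context-informed numbers: context can make feminism very likely given that Linda is a bank teller, yet the joint probability remains bounded by the marginal one.

```python
# Invented numbers: even a strongly context-informed conditional
# probability leaves the joint bounded by the marginal.
p_teller = 0.05                # hypothetical P(bank teller)
p_feminist_given_teller = 0.9  # hypothetical, context-informed

p_teller_and_feminist = p_teller * p_feminist_given_teller  # 0.045
assert p_teller_and_feminist <= p_teller
```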

Comment by Aeonios on Reply to Holden on The Singularity Institute · 2012-07-19T03:41:14.453Z · LW · GW

There are several reasons why I agree with the "Pascal's Mugging" comment:

1. Intelligence Explosion: There are several reasons why an intelligence explosion is highly unlikely. First, upgrading computer fabrication equipment costs on the order of 5-15 billion dollars. Second, intelligence is not measured in gigaflops or petaflops, and mere improvement of fabrication technology is insufficient to increase intelligence. Finally, the requisite variety that drives innovation and creation will be extremely difficult to produce in AIs of limited quantity. Succeeding in engineering or science requires copious amounts of failure, and AIs are not immune to this either.

2. Computing Overhang: The very claim of a "computing overhang" shows total ignorance of actual AI, and of the incredible complexity of human intelligence. The human brain is made up of numerous small regions which both "run programs" inside themselves and communicate via synchronous signals with the rest of the brain in concert (in neural, not transistor, form). A human-level AI would be the same, and could not simply be run on, say, your average web server, no matter how decked out it is. An AI that could run on "extra" hardware would probably be too primitive to reproduce itself on purpose, and if it did, it would be a minor nuisance at worst.

3. The idea that AIs can be "programmed" is mostly nonsense. Very simple AIs can be "programmed", sure, but neural networks require training by experience, just like humans (a toy sketch after this list illustrates the distinction). An AI with human-level intelligence or greater would need to be taught like a child, and any "friendliness" that came of it would be the result of its "instincts" (I'm guessing we wouldn't want AIs with aggression) and of its experience. Additionally, as mentioned above, the need for variety in intelligence to produce real progress means that copying AIs will not be as economical as it might seem, not to mention not nearly as simple as you make it out to be.

4. The timescales you present are absurd. Humans barely have an understanding of human psychology, and they do terribly at it with the knowledge they do have. We may have teraflop desktop computers in 20 years, but that does not imply that they will magically sprout intelligence! Technically, even with today's technology you could produce a program much more sophisticated than SHRDLU was and receive orders of magnitude better performance than the original, but it is the complexity of programming something that learns that prevents this from occurring commonly. It will likely be one or two hundred years before we have a sophisticated enough understanding of human intelligence to reproduce it in any meaningful way. We have taken only the bare first steps into the field thus far, and development has been much slower than in the rest of the computing industry.
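
As a minimal sketch of the "trained, not programmed" distinction from point 3 (a toy model with made-up parameters, not a claim about any real system): the network below ends up computing logical AND purely from repeated exposure to examples; no rule for AND is ever written into the code.

```python
import random

# Toy perceptron learning AND from examples: its behavior is the
# product of training data and a learning rule, not of explicitly
# coded rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

for _ in range(100):  # repeated exposure: the machine analogue of experience
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - out
        w[0] += 0.1 * error * x1
        w[1] += 0.1 * error * x2
        b += 0.1 * error

for (x1, x2), target in data:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```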

In short, the human stupidity occurring right now is a much greater threat to our future as a species than any hypothetical superintelligent AI that might finally appear a hundred or more years in the future. If human civilization is to maintain its integrity long enough to ever produce such a thing, then widespread ignorance of economics and of spirituality/psychology, and a general lack of sensitivity to culture and art, must be dealt with first and foremost.