Comments

Comment by Felix2 on Building Something Smarter · 2008-11-02T20:02:03.000Z · LW · GW

It sounds like you're pegging "intelligence" to mean what I'd call a "universal predictor". That is, something that can predict the future (or an unknown) given some information. And that it can do so across a variety of types of unknowns, where "variety" involves more than a little hand-waving.

Therefore, something that catches a fly ball ("knowing" the rules of parabolic movement) can predict the future, but is not particularly "intelligent" if that's all it can do. It may even be a wee bit more "intelligent" if it can also predict where a mortar shell lands. It is even more "intelligent" if it predicts how to land a rocket on the moon. It is even more "intelligent" if it predicts the odds that any given cannon ball will land on a fort's walls. Etc.

I agree with Brian that this is a narrow definition of "intelligence". But that doesn't stop it from being an appropriate goal for AI at this time. That the word, "intelligence" is chosen to denote this goal seems more a result of culture than anything else. AI people go through a filter that extols "intelligence". So ... (One is reminded of many years ago when some AI thinkers had the holy grail of creating a machine that would be able to do the highest order of thinking the AI thinkers could possibly imagine: proving theorems. Coincidentally, this is what these thinkers did for a living.)

Here's a thought on pinning down that word, "variety".

First, it seems to me that a "predictor" can be optimized to predict one thing very well. Call it a "tall" predictor (accuracy in Y, problem-domain-ness in X). Or it can be built to predict a lot of things rather poorly, but better than a coin. Call it a "flat" predictor. The question is: How efficient is it? How much prediction-accuracy comes out of this "predictor" given the resources it consumes? Or, using the words "tall" and "flat" graphically, what's the surface area covered by the predictor, given a fixed amount of resources?

Would not "intelligence", as you mean it, be slightly more accurately defined as how efficient a predictor is and, uh, it's gotta be really wide or we ignore it?
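The tall/flat metaphor above can be sketched in a few lines. This is only a toy illustration, not a serious measure: the `surface_area` function and all the numbers in it are invented here, with "surface area" taken literally as (accuracy above chance) times (number of problem domains covered) for a fixed resource budget.

```python
# Toy sketch of the "tall" vs. "flat" predictor metaphor.
# All figures are invented for illustration.

def surface_area(accuracy: float, domains: int, chance: float = 0.5) -> float:
    """Prediction 'surface area': edge over a coin flip, times breadth."""
    return (accuracy - chance) * domains

# Tall predictor: one domain, predicted very well.
tall = surface_area(accuracy=0.99, domains=1)

# Flat predictor: many domains, each only barely better than a coin.
flat = surface_area(accuracy=0.55, domains=20)

print(f"tall: {tall:.2f}, flat: {flat:.2f}")  # tall: 0.49, flat: 1.00
```

On this (crude) accounting the flat predictor wins, which is the sense in which "really wide" breadth can outweigh depth.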

Comment by Felix2 on The Simple Math of Everything · 2007-11-18T01:34:50.000Z · LW · GW

Beautiful idea!

Is a Wiki separate from Wikipedia needed?

Similar problem: One thing I run into often on Wikipedia is entries that use the field's particular mathematical notation for no reason other than that the particular symbols and expressions are the jargon of the field. They get in the way of understanding what the entry is saying, though.

A similar problem: there seem to be academic papers that have practical applications, and yet the papers are written to be as unclear as possible - perhaps to take on that "important" sheen, perhaps simply because the authors are deep in their own jargon and assume all readers know everything they know. Consider papers in the AI field. :)

Comment by Felix2 on Cached Thoughts · 2007-10-13T07:40:24.000Z · LW · GW

Has anyone built the equivalent of a Turing machine using processor count and/or replicated input data as the cheap resource rather than time?

That is, what could a machine that does everything in one step do in the way of useful work? With or without restrictions on how many replications of the input data there are going in and where the output might come out?

OK, OK. "Dude, what are you smoking?", right? :)
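One hedged reading of the thought experiment above: if processors (or replicated copies of the input data) are the free resource and time is the scarce one, any finite function collapses into a single-step table lookup - all the work moves into precomputation. A minimal sketch, with 3-bit parity as the stand-in problem (the function and sizes here are my own illustration, not anything from the comment):

```python
# If space/replication is free, precompute every answer; the "machine"
# then does everything in one step: a single table lookup.

n = 3  # input width in bits (kept tiny for illustration)

# Precomputation phase: one table entry per possible input.
parity_table = {bits: bin(bits).count("1") % 2 for bits in range(2 ** n)}

def one_step_parity(bits: int) -> int:
    """Answer in 'one step': no iteration, just a lookup."""
    return parity_table[bits]

print(one_step_parity(0b101))  # prints 0 (0b101 has two set bits)
```

The catch, of course, is that the table grows as 2^n, which is one way of saying what such a machine's "useful work" would cost.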

Comment by Felix2 on We Change Our Minds Less Often Than We Think · 2007-10-03T22:54:59.000Z · LW · GW

Does this mean that if we cannot remember ever changing our minds, our minds are very good at removing clutter?

Or, consider a question that you've not made up your mind on: Does this mean that you're most likely to never make up your mind?

And, anyway, in light of those earlier posts concerning how well people estimate numeric probabilities, should it be any wonder that 66% = 96%?

Comment by Felix2 on Burdensome Details · 2007-09-22T08:58:50.000Z · LW · GW

Nick: Nice spin! :) Context would be important if Eliezer had not asserted as a given that many, many experiments have been done to preclude any influence of context. My extremely limited experience and knowledge of psychological experiments says that there is a 100% chance that such an assertion is not valid. Imagine a QA engineer trying to skate by with the setups of psych experiments you have run into. But, personal, anecdotal experience aside, it's real easy to believe Eliezer's assertion is true. Most people might have a hard time tuning out context, though, and therefore might have a harder time both with conjunction fallacy questionnaires and with accepting Eliezer's assertion.

g: Yes, keeping in mind that I would be first in line to answer C, myself!

Choice (B) seems a poster boy for "representation". So, that a normal person would choose B is yet another example of this "probability" question not being a question about probability, but about "representation". Which is the point. Why is it hard to imagine that the word "probable" does not mean, in such questions' contexts - or even, perhaps, in normal human communication - what a gambler or statistician would take it to mean? Or, put another way, g, "who try to answer the question they're asked rather..." is an assumptive close. I don't buy it. They were not asked the question that you, me, Eliezer, the logician, or the autistic thought they were asked. They were asked the question that they understood. And they have the votes to prove it. :)

So far as people making simple logical errors in computing probabilities, as is implied by the word, "fallacy", well, yeah. Your computer can beat you in both logic and probabilities. Just as your calculator can multiply better than you.

Anyway, I believe that the functional equivalent of visual illusions are inherent in anything one might call a mind. I'm just not convinced that this conjunction fallacy is such a case. The experiments mentioned seem more to identify and wonderfully clarify an interesting communications issue - one that probably stands out simply because there are, in these times, many people who make a living answering C.

Comment by Felix2 on Burdensome Details · 2007-09-21T07:13:55.000Z · LW · GW

Ooooo! "Dice roll"? By God, my good fellow, you mean "coin flips"!

Comment by Felix2 on Burdensome Details · 2007-09-21T06:23:01.000Z · LW · GW

Here's a candidate for a question to illustrate a couple of related biases:

Given the following two dice roll records:

1: HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

2: THTTHTHHTHTTHHHTTHTHTTHHTHHTTTH

Which of the following is true:

A) 1 is more probable than 2.

B) 2 is more probable than 1.

C) Both are equally probable.

Now, I predict that there will be at least 1 "normal" person who answers C.

"Unbelievable," you say?

Stay tuned!

I will make a stronger prediction: If this question were posed to 1000 randomly selected, well-dressed, Nordic-looking people found purposely walking the downtown sidewalks during daytime in a large American city (with luck, eliminating the possibility that I cheat by selecting 1000 people from insane asylums or from people who know no English), I predict that there will be at least 1 person who answers C.

Why? Because it is a well known fact that there exist, in much larger numbers than 1 in 1000, people capable, willing, and even eager to use the "toilet paper tube fallacy". Any of such people combined with any of those who are susceptible to the "literalist fallacy" will answer C.

Let me make a stronger prediction. Even given a 4th choice, so slyly left out:

D) Beats me.

I predict that, still, at least one person will select C.

Now, list the following in order of probability:

a) That one person is a moron.

b) That one person is a computer programmer.

c) That one person is a card shark.

d) That one person believed that choice B was to be taken literally. That is, that B really (really!) means that the very first coin flip came out tails - NOT HEADS! - tails, the second heads, the third tails, and so on.

e) That one person ignored as much context around the dice roll question as he could. That is, that person pretended he was similar to a computer in seeing the world through what amounts to a toilet paper tube. Just the facts, Ma'am.

f) That one person is a card shark and a computer programmer.

g) b and c

h) d and e and f

i) All of the above.

"h", anyone? :)

But, a thought on this question: How to avoid the conjunction fallacy?

Perhaps a better way to do so than keying on the word "and" (which, as we all know, means "OR", but not "OR and not AND") is to key on the word "probability". That is, when you see that word (or sense its meaning) as a goal, hand the question to the modern equivalent of a four-function calculator and let it grind out the numbers. To do otherwise would be like multiplying 10821 by 11409 in your head, wouldn't it?
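For the two flip records earlier in this comment, the "four-function calculator" answer takes one line: the probability of any one *specific* sequence of independent fair flips, order and all, is (1/2) raised to the length of the record - identical for both, hence answer C. A minimal check (the function name here is my own):

```python
# The two records from the question above, copied verbatim.
seq1 = "HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH"
seq2 = "THTTHTHHTHTTHHHTTHTHTTHHTHHTTTH"

def sequence_probability(seq: str) -> float:
    """P(this exact sequence) for independent fair coin flips."""
    return 0.5 ** len(seq)

# Same length, so the same probability for each exact sequence: answer C.
print(sequence_probability(seq1) == sequence_probability(seq2))  # True
```

Which is exactly the sense in which a calculator "beats" the representativeness instinct: it only ever sees the toilet paper tube's worth of facts.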

Comment by Felix2 on Conjunction Controversy (Or, How They Nail It Down) · 2007-09-20T06:16:58.000Z · LW · GW

Arrrr. Shiver me timbers. I shore be curious what the rank of "Linda is active in the feminist movement and is a bank teller" would be, seein' as how its meanin' is so far diff'rent from the larboard one aloft.

A tip 'o the cap to the swabbies what found a more accurate definition of "probability" (I be meanin' "representation".) than what logicians assert the meaning o' "probability" be. Does that mean, at a score of one to zero, all psychologists are better lexicographers than all logicians?

Comment by Felix2 on Say Not "Complexity" · 2007-08-29T07:23:23.000Z · LW · GW

Quote: "We think in words, "

No we don't. Apparently you do, though. No reason to believe otherwise. :)

Please keep up these postings! They are very enjoyable.

Going back to "explaining" something by naming it (from a couple of your earlier posts):

e.g. Q: Why does this block fall to the floor when I let go of it? ... A: Gravity!

I always thought that such explanations were common side-effects of thinking in words. Sort of like optical illusions are side-effects of how the visual system works. Perhaps not. One does not need to use words to think symbolically. There are, after all, other ways to do lossy compression than with symbols.

Anyway, I'll still assert that it's easier to fall for such an "explanation" if you think in words. ... An easy assertion, given how hard it is to count the times one does it!