Comments

Comment by Felix on Dunbar's Function · 2008-12-31T03:44:04.000Z · LW · GW
We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things.

Implication being that we're wasting our time?

Hope not, as debating politics is also a way to learn and understand politics. National or international politics are the equivalent of, say, the weather: something we experience, can't affect, but surely want to understand.

Comment by Felix on Dunbar's Function · 2008-12-31T03:38:43.000Z · LW · GW

Riffing on Doug S.

Do you know any "horse people"?

It seems hard to explain why someone would pour all their resources, time and money, into horses that they spend very little time riding but a lot of time mucking out after.

Unless they are bred for it.

It seems unlikely that any person whose ancestors come from horse country would not be bred for a world with horses.

Similarly, one could blame the Great Depression on people bred for farming having a hard time adjusting to a manufacturing world.

Comment by Felix on Lawful Uncertainty · 2008-11-10T23:15:27.000Z · LW · GW

When your knowledge is incomplete - meaning that the world will seem to you to have an element of randomness - randomizing your actions doesn't solve the problem

Ants don't agree. Take away their food and they'll go into random search mode.
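The ants' strategy can be sketched as an unbiased random walk on a grid; the function name, grid encoding, and step budget here are my own illustration, not anything from the post.

```python
import random

def random_search(is_food, start=(0, 0), max_steps=10_000, seed=0):
    """Wander one grid cell at a time until is_food((x, y)) is True.

    Returns the cell where food was found, or None if the step
    budget runs out first. A 2D random walk visits every cell with
    probability 1 eventually, so patience substitutes for a map.
    """
    rng = random.Random(seed)
    x, y = start
    for _ in range(max_steps):
        if is_food((x, y)):
            return (x, y)
        # No memory, no plan: pick one of the four neighbors at random.
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return None

# Any cell other than the nest counts as food here, so the very
# first step finds it.
found = random_search(lambda p: p != (0, 0))
```

This is the point of "random search mode": when the searcher has no information about where the food went, no fixed pattern beats wandering.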

As far as that experiment is concerned, it seems that AnneC hits the point: How was it framed? Were the subjects led to believe that they were searching for a pattern? Or were they told the pattern? Wild guess: the former.

Comment by Felix on Building Something Smarter · 2008-11-03T02:29:31.000Z · LW · GW

@James: If we want a robot that can navigate mazes, we could put some known pathfinding/search algorithms into it. Or we could put a neural network in it and run it through thousands of trials with slowly increasing levels of difficulty.

That evokes some loopy thinking. To wit:

It's always seemed that AI programs, striving for intelligence, can have their intelligence measured by how easy it is to get them to do something. E.g., it's easier to simply run that neural net through a bunch of trials than it is to painstakingly engineer an algorithm for a particular search problem.
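For contrast, the "painstakingly engineered" alternative James mentions might look like a breadth-first search over a grid maze; the grid encoding and names below are my own illustration, not anything from the post.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search through a maze.

    grid: list of equal-length strings, '#' = wall, anything else open.
    start, goal: (row, col) tuples. Returns a shortest path as a list
    of cells from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # doubles as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent pointers back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#'
                    and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

maze = ["...",
        ".#.",
        "..."]
path = solve_maze(maze, (0, 0), (2, 2))  # 5 cells, routed around the wall
```

The point of the contrast: this route demands that the engineer already understand the problem, while the neural-net route only demands trials and patience.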

So, does that mean that the definition of "intelligence" is: "How easy it is for me to get the intelligent being to do my bidding, multiplied by the effect of its actions"?

Or is that a definition of "intelligence we want"? And the definition of "intelligence" is: "The ability to create 'intelligence we want' and avoid 'intelligence we don't want'"?

Comment by Felix on Evolutions Are Stupid (But Work Anyway) · 2007-11-03T22:59:30.000Z · LW · GW

Nice calculations!

But don't these calculations establish a lower bound on how complex or adaptive genetic evolution is - not an upper bound?

It would seem that using the same approach on a nervous system would lead one to calculate the adaptiveness of a dendrite - or less. Uh, what part of nervous system operation seems comfortably "understood" to the same extent as AGTC operations? Whatever part that is would, in a fair comparison, be what gets compared to the mechanism these calculations describe. Yes?

Anyway, isn't it premature to assert, "Natural selection, though not simple, is simpler than a human brain", given the current understanding of either?

And, please, let's not go too far along the road of "Look how smart we are! Evolution didn't produce diddly, while in only 400 years we have produced millions of My Little Pony dolls." Evolution produced cow pies, which we are still struggling with, after all. :)

Speculation of what nervous systems and genetic evolution do in common sure seems like fertile ground, though. It would be interesting to know, for instance, what's both necessary and sufficient to describe both.

Comment by Felix on Cached Thoughts · 2007-10-13T06:55:25.000Z · LW · GW

Is caching the best mental model of how these jillions of "100 Hz processors" operate?

An alternative: lossy decompression. Rather like, for instance, how DNA information is expressed during an individual's life. (And, one cannot help but suspect, at a much larger scale than that of the lives of individuals.)

A reason to prefer "lossy compression" over "caching": "caching" leads one to believe that the information is stored without loss, and one tends to look around for where the uncompressed bits could be kept.

But, I'll admit I've failed to put together the pieces of a general intelligence machine using a lossy compression model. So maybe it's a bogus model, too.

Comment by Felix on Two More Things to Unlearn from School · 2007-07-14T20:33:39.000Z · LW · GW

I never went to school. Bill Bullard seems to assume that without the indoctrinating influence of school, we'd be prissy self-effacing socialists. He's wrong, because I'm an individualist and I think his first two points are garbage.