Comments

Comment by Gregory_Conen on Recognizing Intelligence · 2008-11-08T23:31:22.000Z · LW · GW

You mentioned earlier that intelligence also optimizes for subgoals: tasks that lead indirectly to terminal values without being directly tied to them. These subgoals would likely be easier to guess at than the ultimate terminal values themselves.

For example, a high-amperage, high-temperature superconductor, especially one with significant current flowing through it, is highly unlikely to have occurred by chance. It is also very good at carrying electrons from one place to another. Therefore, it seems useful to hypothesize that it is the product of an optimization process aiming to transport electrons. That might be a terminal goal (because somebody programmed a superintelligent AI to "build this circuit"), or, more likely, a subgoal. Either way, it implies the presence of intelligence.

Comment by Gregory_Conen on BHTV: Jaron Lanier and Yudkowsky · 2008-11-02T08:08:40.000Z · LW · GW

It seemed to me that Lanier drifted between useful but poorly explained ideas and incoherence throughout the conversation, and that the talk was mostly about his ideas.

Incidentally, Eliezer asked early on, and has asked in the past: Can you name a belief which is untrue, but which you nevertheless believe?

I think, on reflection, that I have one. I believe that my conscious perception of reality is more or less accurate.

Suppose that this universe (or any possible universe) ends in heat death rather than a "big crunch", repeated inflationary periods, etc., which is a plausible outcome of the cosmological debate over the ultimate fate of the universe. Over the effectively infinite span of a maximum-entropy universe, random thermal fluctuations would produce vastly more brains with memories like mine than ordinary evolution ever could. In that case, there is a very high probability that my brain is such a fluctuation rather than a meaningful reflection of reality. Nevertheless, I believe and act as though my memories and perceptions describe the universe around me.