Comments

Comment by anon15 on Total Nano Domination · 2008-11-27T11:56:30.000Z · LW · GW

Why is Eliezer assuming that sustainable cycles of self-improvement are necessary in order to build an UberTool that will take over most industries? The Japanese Fifth Generation Computer Systems project was a credible attempt to build such an UberTool, but it did not rely much on recursive self-improvement (apart from such things as using current computer systems to design next-generation electronics). Contrary to common misconceptions, it did not even rely on human-level AI, let alone superhuman intelligence.

If this was a credible project (check the contemporary literature and you'll find extensive discussions of its political implications and the like), why not Douglas Engelbart's set of tools?

Comment by anon15 on Make an Extraordinary Effort · 2008-10-08T09:18:47.000Z · LW · GW

True; however, they didn't blow up their entire financial system and, with it, seemingly that of the entire planet.

I'm not sure if this should be taken literally, but see the Wikipedia articles "Japanese asset price bubble" and "Carry trade" (section "Currency").

Comment by anon15 on Optimization · 2008-09-14T16:20:45.000Z · LW · GW

Surely there is a transform that would convert the "hard" space into the terms of the "easy" space, so that the size of the targets could be compared apples to apples.

But isn't this the same as computing a different measure (i.e. not the counting measure) on the "hard" space? If so, you could normalize it to a probability measure and then compute its Kullback-Leibler divergence from the normalized counting measure (i.e. the uniform distribution) to obtain a measure of information gain.
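
A minimal numeric sketch of this suggestion in Python, assuming a small finite outcome space with made-up weights (none of these numbers come from the original discussion): normalize an arbitrary positive measure to a probability distribution and compute its KL divergence from the uniform distribution (the normalized counting measure), in bits.

```python
import numpy as np

# Illustrative unnormalized measure on a small finite "hard" space;
# the weights are invented for the sake of the example.
weights = np.array([8.0, 4.0, 2.0, 1.0, 1.0])

p = weights / weights.sum()           # normalize to a probability measure
q = np.full_like(p, 1.0 / len(p))     # uniform distribution = normalized counting measure

# KL divergence D(p || q) in bits: the information gained by replacing
# the uniform measure with p.
kl_bits = np.sum(p * np.log2(p / q))
print(f"D(p || q) = {kl_bits:.4f} bits")
```

Since q is uniform over N outcomes, D(p || q) equals log2(N) minus the entropy of p, so it is largest when p concentrates its mass on a few outcomes.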

Comment by anon15 on Three Fallacies of Teleology · 2008-08-26T07:45:56.000Z · LW · GW

Rational choice theory is probably the closest analogue to teleological thinking in modern academic research. Regarding all such reasoning as fallacious seems like an extreme position; to what extent are the "three fallacies" of teleology genuine fallacies of reasoning, as opposed to useful heuristics?

Comment by anon15 on No Universally Compelling Arguments · 2008-06-26T11:30:30.000Z · LW · GW

Many philosophers are convinced that because you can in-principle construct a prior that updates to any given conclusion on a stream of evidence, therefore, Bayesian reasoning must be "arbitrary", and the whole schema of Bayesianism flawed, because it relies on "unjustifiable" assumptions, and indeed "unscientific", because you cannot force any possible journal editor in mindspace to agree with you.

Could you clarify what you mean here? From the POV of your own argument, Bayesian updating is simply one of many possible belief-revision systems. What's the difference between calling Bayesian reasoning an "engine of accuracy" because of its information-theoretic properties, as you've done in the past, and saying that any argument based on it ought to be universally compelling?
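
A minimal sketch of the prior-dependence at issue, with made-up numbers (the grid, priors, and data below are illustrative, not from the original post): two agents apply the same Bayesian update rule to the same evidence stream of coin flips and still reach different posteriors, because they started from different priors.

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 99)   # candidate coin biases (illustrative grid)

prior_a = np.ones_like(theta)               # agent A: flat prior
prior_b = np.exp(-50 * (theta - 0.1)**2)    # agent B: prior concentrated near 0.1
prior_a /= prior_a.sum()
prior_b /= prior_b.sum()

data = [1, 1, 0, 1, 1, 1, 0, 1]  # shared evidence stream (1 = heads), made up

def update(prior, data):
    """Standard Bayesian updating: multiply by each observation's likelihood."""
    post = prior.copy()
    for x in data:
        post *= theta if x == 1 else (1 - theta)
        post /= post.sum()
    return post

post_a = update(prior_a, data)
post_b = update(prior_b, data)

# Same evidence, same update rule, different conclusions:
print(f"agent A posterior mean: {theta @ post_a:.3f}")
print(f"agent B posterior mean: {theta @ post_b:.3f}")
```

With enough shared evidence the two posteriors would converge, but after any finite stream they can differ substantially given suitably chosen priors, which is the sense in which no finite argument is universally compelling.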