Posts

Comments

Comment by manuel_moertelmaier on The Weighted Majority Algorithm · 2008-11-13T08:19:41.000Z · score: 1 (1 votes) · LW · GW

@ comingstorm: Quasi-Monte Carlo often outperforms plain Monte Carlo integration for problems of small dimensionality with "smooth" integrands. This is, however, not yet rigorously understood. (The proven performance bounds for medium dimensionality seem to be extremely loose.)
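A minimal sketch of the comparison, under assumptions of my own choosing: a 2-D Halton sequence (bases 2 and 3) versus seeded pseudo-random points, both averaging the smooth test integrand f(x, y) = xy over the unit square (exact integral 1/4). The integrand, sample count, and seed are illustrative, not from the original discussion.

```python
import random

def halton(index, base):
    """Return the index-th element of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def f(x, y):
    # Smooth test integrand on the unit square; exact integral is 1/4.
    return x * y

n = 2000

# Plain Monte Carlo: average f over seeded pseudo-random points.
rng = random.Random(0)
mc_est = sum(f(rng.random(), rng.random()) for _ in range(n)) / n

# Quasi-Monte Carlo: average f over a 2-D Halton point set.
qmc_est = sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

print("MC error: ", abs(mc_est - 0.25))
print("QMC error:", abs(qmc_est - 0.25))
```

For low dimension and a smooth integrand like this one, the low-discrepancy points typically give a noticeably smaller error than the pseudo-random ones at the same sample count, which is the empirical pattern the comment refers to.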

Besides, MC doesn't require randomness in the Kolmogorov sense (complexity roughly equal to sequence length), only in the "passes statistical randomness tests" sense. Eliezer has, as far as I can see, not discussed the various definitions of randomness.
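The distinction can be made concrete with a sketch (seed, sample size, and the choice of a chi-square frequency test are my own illustrative assumptions): a seeded Mersenne Twister stream is fully determined by a short program plus seed, so its Kolmogorov complexity is tiny, yet its output is expected to pass simple statistical tests.

```python
import random

# A seeded PRNG is deterministic: the whole stream compresses to
# "algorithm + seed", so it is not Kolmogorov-random.
rng = random.Random(12345)
digits = [rng.randrange(10) for _ in range(10000)]

# Chi-square frequency test: each digit 0-9 should appear about 1000 times.
expected = len(digits) / 10
counts = [digits.count(d) for d in range(10)]
chi2 = sum((c - expected) ** 2 / expected for c in counts)

# For 9 degrees of freedom the 99% critical value is about 21.7; a
# statistic far above that would suggest a biased generator.
print("chi-square statistic:", chi2)
```

Passing such tests is the sense of "random" that Monte Carlo integration actually relies on; incompressibility is a much stronger and unrelated requirement.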

Comment by manuel_moertelmaier on Efficient Cross-Domain Optimization · 2008-10-28T22:01:22.000Z · score: 0 (0 votes) · LW · GW

http://www.google.com/search?hl=en&q=tigers+climb+trees

On a more serious note, you may be interested in Marcus Hutter's 2007 paper "The Loss Rank Principle for Model Selection". It's about modeling, not about action selection, but there's a loss function involved, so there's a pragmatist viewpoint here, too.

Comment by manuel_moertelmaier on The Level Above Mine · 2008-09-26T09:59:09.000Z · score: 15 (30 votes) · LW · GW

In a few years, you will be as embarrassed by these posts as you are today by your former claims of being an Algernon, the idea that a logical paradox would make an AI go gaga, the tMoL argumentation you mentioned these last days, the Workarounds for the Laws of Physics, Love and Life Just Before the Singularity, and so on and so forth. Ask yourself: will I have to delete this, too?

And the person who told you to go to college was probably well-meaning, and not too far from the truth. Was it Ben Goertzel?

Comment by manuel_moertelmaier on Magical Categories · 2008-08-25T05:51:41.000Z · score: 0 (0 votes) · LW · GW

In contrast to Eliezer I think it's (remotely) possible to train an AI to reliably recognize human mind states underlying expressions of happiness. But this would still not imply that the machine's primary, innate emotion is unconditional love for all humans. The machines would merely be addicted to watching happy humans.

Personally, I'd rather not be an object of some quirky fetishism.

Monty Python has, of course, realized it long ago:

http://www.youtube.com/watch?v=HoRY3ZjiNLU

http://www.youtube.com/watch?v=JTMXtJvFV6E

Comment by manuel_moertelmaier on Invisible Frameworks · 2008-08-22T06:05:26.000Z · score: 0 (0 votes) · LW · GW

I strongly second Marcello here. When you wrote in CFAI that "The fact that a subgoal is convergent [] doesn't lend the subgoal magical powers in any specific goal system", that about settled the matter in a single sentence. Why the long "lay audience" posts now, eight years later?