Comments

Comment by Andrew_Ducker on Formative Youth · 2009-02-25T11:09:31.000Z · LW · GW

My first answer to this would be "Of course!"

It's obvious that morality is purely a matter of aesthetics, and that these are largely based on the culture you're exposed to during your formative years.

Rationalism can help train you out of things that are contradicted by the evidence, but when it comes to pure values there's no evidence to base them on. Moral values can contradict each other, but not reality.

Comment by Andrew_Ducker on Epilogue: Atonement (8/8) · 2009-02-06T12:32:55.000Z · LW · GW

Now that it's finished, any chance of getting it into EPUB or PDF format?

Comment by Andrew_Ducker on Three Worlds Collide (0/8) · 2009-01-30T13:44:31.000Z · LW · GW

When it's done, is there any chance you'll stick it online in an ereader compatible format? PDF is ok, but EPUB would be better.

I don't tend to read very long things on a computer, so having it in a more friendly format would be nice.

Comment by Andrew_Ducker on OB Status Update · 2009-01-27T10:49:42.000Z · LW · GW

Of course, anything that didn't mean waiting for a moderator to approve a comment would be good, and would increase the chances of discussion in the comments.

Comment by Andrew_Ducker on OB Status Update · 2009-01-27T10:32:06.000Z · LW · GW

Any chance of supporting OpenID for logging in?

Comment by Andrew_Ducker on Higher Purpose · 2009-01-23T11:34:19.000Z · LW · GW

"One wonders if it is possible to make finding one's purpose in life one's purpose in life."

Of course it is. That's philosophy right there :->

Comment by Andrew_Ducker on Justified Expectation of Pleasant Surprises · 2009-01-15T12:56:26.000Z · LW · GW

Planning and optimising are definitely part of the fun that some gamers get. Going into the system and finding "Power Word: Nuke", and then working out what choices to make to get there - and then seeing yourself getting closer to your destination - is a big pull.
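That kind of build planning is essentially a search over a prerequisite graph. A minimal sketch in Python, using an entirely invented talent tree (only the "Power Word: Nuke" name comes from the comment), of finding the shortest chain of picks that unlocks a target ability:

```python
from collections import deque

# Entirely hypothetical prerequisite graph: each talent maps to the talents it unlocks.
TALENT_TREE = {
    "Magic Missile": ["Power Word: Stun"],
    "Fireball": ["Power Word: Nuke"],
    "Power Word: Stun": ["Power Word: Kill"],
    "Power Word: Kill": ["Power Word: Nuke"],
    "Power Word: Nuke": [],
}

def shortest_build(start, goal, tree):
    """Breadth-first search for the shortest sequence of picks from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in tree.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal not reachable from start

print(shortest_build("Magic Missile", "Power Word: Nuke", TALENT_TREE))
# ['Magic Missile', 'Power Word: Stun', 'Power Word: Kill', 'Power Word: Nuke']
```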

Comment by Andrew_Ducker on Eutopia is Scary · 2009-01-12T11:46:00.000Z · LW · GW

Greg Egan's short story "The Hundred Light Year Diary" tells what happens when people are (basically) handed the walkthrough for their life.

It's well worth reading (along with the rest of the stories in Axiomatic, which include a bunch of the technologies that Eliezer mentions on a regular basis, and the interesting effects they might have).

Comment by Andrew_Ducker on Complex Novelty · 2008-12-20T15:34:37.000Z · LW · GW

Solving problem X is interesting. So you solve all problems of the class that X is in. And then you start on other classes. And then you eventually see that not only do all problems boil down to classes of problems, but that all of those classes are part of the superclass of "problems", at which point you might decide that solving problem X162,329 is as dull as making chair leg 162,329.

Solving a problem isn't any more of a "good" activity than having an orgasm, eating a cake, or making a chair leg is.

Comment by Andrew_Ducker on Permitted Possibilities, & Locality · 2008-12-04T10:36:39.000Z · LW · GW

"a mind capable of modifying itself with deterministic precision - provably correct or provably noncatastrophic self-modifications."

So, you've solved The Halting Problem then?
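For anyone who wants the reference: the jab is at the classic undecidability argument. A minimal sketch in Python, with hypothetical names, of why no general, always-correct halts() checker can exist - which is the obstacle to verifying arbitrary self-modifications in full generality:

```python
def halts(program, argument):
    """Hypothetical total decider: True iff program(argument) eventually halts.
    No such function can exist, as the construction below shows."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever halts() predicts for program run on itself.
    if halts(program, program):
        while True:   # halts() said "it halts", so loop forever
            pass
    else:
        return        # halts() said "it loops forever", so halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says it doesn't,
# so halts() cannot be both total and correct.
```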

Comment by Andrew_Ducker on Recursive Self-Improvement · 2008-12-01T22:16:52.000Z · LW · GW

The problem, as I see it, is that you can't take bits out of a running piece of software, replace them with other bits, and have it still work, unless said piece of software is trivial.

The likelihood that you could change the object retrieval mechanism of your AI and have it still be the "same" AI, or even a reasonably functional one, is very low, unless the entire system was deliberately written in an incredibly modular way. And incredibly modular systems are not efficient, which makes it unlikely that any early AI will be written in that manner.
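A toy illustration of that point, with entirely made-up class names: hot-swapping one component of a running system only works when everything else talks to it through a stable interface, which is exactly the deliberate modularity being described.

```python
class ObjectStore:
    """Stable interface the rest of the (hypothetical) system codes against."""
    def retrieve(self, key):
        raise NotImplementedError

class NaiveStore(ObjectStore):
    """Original, slow-ish implementation."""
    def __init__(self):
        self._items = {}
    def put(self, key, value):
        self._items[key] = value
    def retrieve(self, key):
        return self._items.get(key)

class IndexedStore(ObjectStore):
    """A 'faster' replacement; swappable only because it honours the same interface."""
    def __init__(self, items):
        self._items = dict(items)
    def retrieve(self, key):
        return self._items.get(key)

class Mind:
    def __init__(self, store):
        self.store = store            # injected, so it can be replaced at runtime
    def recall(self, key):
        return self.store.retrieve(key)

mind = Mind(NaiveStore())
mind.store.put("goal", "make chair legs")

# Hot-swap the retrieval mechanism while the rest of the system keeps running.
mind.store = IndexedStore({"goal": "make chair legs"})
print(mind.recall("goal"))            # still works, because only the interface was relied on
```

If callers reached into the internals instead of going through retrieve(), the swap would break them - which is the point about mish-mash, non-modular systems.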

The human brain is a mass of interconnecting systems, all tied together in a mish-mash of complexity. You couldn't upgrade it simply by finding a faster replacement for any one section. Attempting to perform brain surgery on yourself is going to be a slow, painstaking process, leaving you with far more dead AIs than live ones.

And, of course, as the AI picks the low-hanging fruit of improvements, it'll start running into harder problems that need more effort and more attempts to solve.

Which doesn't mean it isn't possible - it just means that it's going to be a slow takeoff, not a fast one.
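A back-of-the-envelope sketch of that dynamic, with all numbers invented: if each successive improvement buys a modest gain but costs rapidly growing effort to find, capability keeps rising while progress per unit of effort falls away.

```python
# Toy model, all numbers invented: improvement i gives a ~10% capability boost,
# but costs exponentially more effort to find (the low-hanging fruit is gone).
capability = 1.0
effort_spent = 0.0
for i in range(1, 16):
    cost = 1.5 ** i            # each improvement is harder to find than the last
    gain = 0.10 * capability   # each one buys a modest relative boost
    effort_spent += cost
    capability += gain
    print(f"improvement {i:2d}: capability {capability:6.2f}, total effort {effort_spent:9.1f}")
# Under these assumptions capability roughly quadruples over 15 improvements, while
# most of the ~1,300 units of effort go on the last few - steady progress, but nothing
# that looks like a hard takeoff.
```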