Posts

Comments

Comment by jeffrey_soreff on Pretending to be Wise · 2009-02-22T03:56:19.000Z · LW · GW

Yvain: "anyone has some clever reason why elves are worse than Sauron." Brin has some interesting comments, including "Now ponder something that comes through even the party-line demonization of a crushed enemy -- this clear-cut and undeniable fact: Sauron's army was the one that included every species and race on Middle Earth, including all the despised colors of humanity, and all the lower classes."

Comment by jeffrey_soreff on Informers and Persuaders · 2009-02-11T02:05:38.000Z · LW · GW

Two other examples of sciences where the vocabulary is less formal than is typical: astronomy, with "star" and "black hole" (as compared to, e.g., deoxyribonucleic acid); and genetics, where names of genes have included "fruity", "shaven baby", and "killer of prune".

Comment by jeffrey_soreff on Value is Fragile · 2009-01-30T00:00:33.000Z · LW · GW

@Jordan - agreed.

I think the big difference in expected complexity is between sampling the space of results of possible singletons' algorithms and sampling the space of competitive entities. I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe. To my mind the key is that the ability to relentlessly optimize one function only exists if a singleton gets and keeps an overwhelming advantage over everything else. If this does not happen, we get competing entities facing the computationally difficult problem of outsmarting each other. Under this scenario, while I might not like the detailed results, I'd expect them to be complex to much the same extent, and for much the same reasons, as living organisms are complex.

Comment by jeffrey_soreff on Higher Purpose · 2009-01-25T21:17:32.000Z · LW · GW

I look at Causes as hedonic accessories from a different point of view: given the history of how many Causes have turned out to contain purple kool-aid, I look at the problem not so much as "how can we carefully select rationally desirable Causes" but more nearly as "how can we bypass this part of the human mental landscape altogether - preferably acquiring the hedonic gains of adopting a Cause, while skipping the hazards, both to oneself and to those around one, of actually adopting one."

Comment by jeffrey_soreff on Interpersonal Entanglement · 2009-01-25T01:53:04.000Z · LW · GW

"Where 'nonsentient romantic/sex partner' is pretty much what I use the word 'catgirl' to indicate, in futuristic discourse." - 40 comments in this thread, and not a word about Kzin?

Comment by jeffrey_soreff on Eutopia is Scary · 2009-01-13T03:04:45.000Z · LW · GW

One plausible change, arguably an improvement (of sorts), and yet a shock, could be if techniques for assessing individuals' abilities became much more accurate. We could wind up with a world with elements of GATTACA or Brave New World (albeit this alone wouldn't do the equivalent of bottle-farming deltas...). To an extent things like IQ already do this, but more precise, specific assessments (if human abilities are actually stable enough over time for this to be possible) would be a large blow both to traditional views of human equality and to individuals' dreams of the range of their possible futures. It would also be very politically incorrect, at least in some circles.