Posts

Comments

Comment by iii on Babies and Bunnies: A Caution About Evo-Psych · 2012-10-21T17:38:05.162Z · LW · GW

I think that any situation that could not have occurred prior to the 20th century can be discarded out of hand when discussing the evolutionary roots of human behavior.

Comment by iii on Random LW-parodying Statement Generator · 2012-09-14T14:16:38.957Z · LW · GW

Inside the Blind Idiot God Evolution's pineal gland is not an immortal soul, but the truth.

I love this thing. We should include it as a site feature next to the comment button and compare user ratings on randomly generated posts against the rest.

Comment by iii on Only say 'rational' when you can't eliminate the word · 2012-06-02T22:52:57.952Z · LW · GW

Only make up hasty generalizations when it's entertaining to do so.

Also: if it gets you internet points.

Comment by iii on [Humor] [Link] Eclipse Maid, a posthuman maid role-playing game · 2011-12-28T16:15:37.729Z · LW · GW

Inexplicable happiness!

Comment by iii on Wanted: backup plans for "seed AI turns out to be easy" · 2011-10-02T21:00:00.440Z · LW · GW

As in producing the intended result; there's nothing stopping us from rounding the 1 and winding up as paperclips.

Comment by iii on Wanted: backup plans for "seed AI turns out to be easy" · 2011-10-01T13:55:24.413Z · LW · GW

I'm unfamiliar with the state of our knowledge concerning these things, so take this as you will. A perfect utility function can yield many different things, one of which is adherence to "the principle for the development of value(s) in human beings," which isn't necessarily the same as "values that make existing in the universe most probable" or "what people want" or "what people will always want." A human-optimal utility function would be something that leads to addressing the human condition as a problem, improving it in the manner and method it seeks to improve itself, whether that is survivability or something else. An AI that could do this perfectly right now could always use the same process of extrapolation again for whatever the situation may develop into.

or "AI which is most instrumentally useful for (all) human beings given our most basic goals"

Comment by iii on Wanted: backup plans for "seed AI turns out to be easy" · 2011-09-29T21:52:36.635Z · LW · GW

So we could just build a seed AI whose utility function is to produce a human-optimal utility function?

Comment by iii on Needing Better PR · 2011-08-23T23:15:29.080Z · LW · GW

I doubt that a site whose entertainment presumes college-level math comprehension is ever going to ditch that image completely, but it should definitely be a goal.

Comment by iii on Antisocial personality traits predict utilitarian responses to moral dilemmas · 2011-08-23T21:40:52.806Z · LW · GW

Pardon me, but this seems to have little to nothing to do with whether utilitarianism should be considered a superior moral framework (if that was never the point, I apologize). If anything, the article seems to lend evidence to the claim that, under certain circumstances, psychopaths tend to be more moral than the average individual. Why stigmatizing a mental disorder, rather than its consequences, is still tolerated in a society that has ostensibly developed neural imaging is also up for debate.

Comment by iii on Feeling Rational · 2011-06-21T17:01:09.908Z · LW · GW

It sounds plausible, but I think it's something of a premature conclusion. Consider how one would best fake an emotion: simply by motivating oneself to feel that way. Faking an expression is much, much harder than simply choosing a field that matches your own moods and preferences. The reason we see people who don't appear genuine in high-ranking positions, as well as very low ones, is that they are motivated by something other than the above: a drive for excellence, or desperation, where feelings do become a tool. But thinking in terms of the majority, it's easier to assume that convention and self-discipline make most people's professionalism indistinguishable from any other motivator they might feel.