[SEQ RERUN] Qualitative Strategies of Friendliness

post by MinibearRex · 2012-08-19T04:55:45.693Z · LW · GW · Legacy · 2 comments

Today's post, Qualitative Strategies of Friendliness, was originally published on 30 August 2008. A summary (taken from the LW wiki):

 

Qualitative strategies to achieve friendliness tend to run into difficulty.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Harder Choices Matter Less, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

2 comments

Comments sorted by top scores.

comment by shminux · 2012-08-19T18:15:18.996Z · LW(p) · GW(p)

Another summary:

"Nobody expects the dreaded Maximum Fun Device!"

comment by Decius · 2012-08-20T01:50:03.679Z · LW(p) · GW(p)

A common theme: the transhuman AI is not yet so transhuman that it can identify which actions make people happy and are moral.

If the AI knows only what we can define, and is only correct about what we can describe correctly, then it is not yet transhuman. If an AI does become transhuman, it will presumably know what we want better than we do.

There is a difference between an excessively effective optimizer and an intelligence.
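A toy illustration of that distinction (mine, not from the original thread): an optimizer maximizes whatever proxy objective it is handed, even when that proxy diverges from what we actually wanted. The actions and scores below are entirely hypothetical, a minimal sketch of the failure mode rather than anything from the post.

```python
# Hypothetical actions with two scores: the proxy we managed to write
# down ("measured smiles") and the value we failed to formalize.
# All numbers are invented for illustration.
actions = {
    "cure diseases":     {"measured_smiles": 70,  "actual_wellbeing": 90},
    "fund the arts":     {"measured_smiles": 40,  "actual_wellbeing": 60},
    "wirehead everyone": {"measured_smiles": 100, "actual_wellbeing": 5},
}

def proxy_objective(effects):
    """The 'qualitative' friendliness criterion we could actually define."""
    return effects["measured_smiles"]

# An optimizer simply picks the action that scores highest on the proxy...
chosen = max(actions, key=lambda a: proxy_objective(actions[a]))
print(f"Optimizer chooses: {chosen}")  # -> "wirehead everyone"

# ...even though a different action is far better by the value we meant
# but couldn't describe correctly.
best_actual = max(actions, key=lambda a: actions[a]["actual_wellbeing"])
print(f"What we wanted:    {best_actual}")  # -> "cure diseases"
```

The optimizer is "excessively effective" at the objective it was given; nothing in it knows what we want better than we do.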