[SEQ RERUN] Artificial Mysterious Intelligence

post by MinibearRex · 2012-12-16T05:35:21.975Z

Today's post, Artificial Mysterious Intelligence, was originally published on 07 December 2008. A summary (taken from the LW wiki):


Attempting to create an intelligence without actually understanding what intelligence is, is a common failure mode. If you want to make actual progress, you need to truly understand what it is that you are trying to make.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Shared AI Wins, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

1 comment

Comments sorted by top scores.

comment by buybuydandavis · 2012-12-17T01:46:49.347Z

If you want to make actual progress, you need to truly understand what it is that you are trying to make.

No, you don't. You can build things you don't understand. Machine learning routinely produces algorithms that people don't understand but that work, and that outperform other methods.
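To make that concrete, here is a minimal sketch (assuming scikit-learn and synthetic data, both chosen purely for illustration): a gradient-boosted ensemble, whose hundreds of learned decision rules nobody actually reads, compared against a simple linear baseline we can fully interpret.

```python
# Illustrative sketch only (assumes scikit-learn is installed): a black-box
# model can outperform a fully understood baseline without anyone
# understanding the rules it learned.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data with nonlinear structure.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable baseline: a linear model we can read off, coefficient by coefficient.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black box: an ensemble of boosted trees; no human inspects the learned rules.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("baseline accuracy: ", baseline.score(X_test, y_test))
print("black-box accuracy:", black_box.score(X_test, y_test))
```

On nonlinear data like this, the ensemble typically scores higher, even though no one can explain the individual decisions it makes.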

It's all right to have preferences for AI research, and beliefs about what will most likely work. Stating them as certitudes is overstating your case.