AI IS HARD. IT'S REALLY FRICKING HARD.
Hundreds of blog posts and still no closer!
This particular abstract philosophy could end up having a pretty large practical import for all people.
Eliezer:
Personally, I am not disputing the importance of friendliness. My question is, what do you think I should do about it?
If I were an AI expert, I would not be reading this blog since there is clearly very little technical content here.
My time would be simply too valuable to waste reading or writing popular futurism.
I certainly wouldn't post every day, just to recapitulate the same material with minor variations (basically just killing time).
Of course, I'm not an expert... but you are. So instead of preaching the end of the world, why aren't you frantically searching for a way to defer it?
Unless, perhaps, you have given up?
Do these considerations offer useful insights for the average person living his life? Or are they just abstract philosophy without practical import for most people?
Good comment. I would really like to hear an answer to this.
The mind-projection fallacy is an old favourite on OB, and Eliezer always comes up with some colourful examples.
None are as good as this one, though:
1) Supposing that moral progress is possible, why would I want to make such progress?
2) Psychological experiments such as the Stanford prison experiment suggest to me that people do not act morally when empowered not to do so. So if I were moral I would prefer to remain powerless; but I do not want to be powerless, therefore I perform my moral acts unwillingly.
3) Suppose that agents of type X act more morally than agents of type Y. Also suppose that these moral acts reduce fitness, such that type Y agents out-reproduce type X agents. If the product of population size and moral utility is greater for Y than for X, then Y is the greater producer of moral good.
So is net morality important, or morality per capita? How about a very moral population of size 0? What is the trade-off between net and per-capita moral output?
4) Predicting the long-term outcomes of our actions is very difficult. If the moral value of an act depends on the outcome, then our confidence in the morality of an act should be less than or equal to our confidence in the possible outcomes.
However, people's confidence in their morality is often much higher than their confidence in the outcome. Therefore, there must be a component of morality independent of outcome. What does the desirability of this component derive from?
"A truth-seeker does not want to impress people, he or she or ve wants to know."
What is the point of being a "truth-seeker"?
"people start to worry about how we can enforce laws/punish criminals and so forth if there's no free will"
Interesting observation. Also note how society differentiates between violent criminals and the violent mentally ill.
I suggest there are 4 stages in the life-cycle of a didact:
(1) The belief that one's intellectual opponents can be won over by rationality. (2) The belief that one's intellectual opponents can be won over by rationality and emotional reassurance. (3) The belief that one's intellectual opponents can be won over without rationality. (4) The belief that one's intellectual opponents do not need to be won over.
I am not suggesting that any stage is superior to any other.
Eliezer, I declare that you are currently at stage (2), commonly known as the "Dawkins phase". :)
I want to second botogol's request for a wrapped-up version of the quantum mechanics series. Best of all would be a downloadable PDF.
I read a little of Eliezer's physics posts at the beginning, then realised I wasn't up to it intellectually. However, I'd like to come back and have another go sometime. I certainly think I'd stand a better chance with Eliezer's introduction than with a standard textbook.
To sum up: a bird in the hand is worth two in the bush!
Eliezer, you must have lowered your intellectual level because these days I can understand your posts again.
You talk about the friendliness problem as if it can be solved separately from the problem of building an AGI, and in anticipation of that event. That is, you want to delay the creation of an AGI until friendliness is fully understood. Is that right?
Suppose we had needed to build jet planes without ever passing through the stage of propeller-driven planes, or to build modern computers without first building calculators, 8-bit machines, etc. Do you think that would have been possible? (It's not a rhetorical question... I really don't know the answer.)
It seems to me that if we ever build an AGI, there will be many mistakes made along the way. Any traps waiting for us will certainly be triggered.
Perhaps this will all seem clearer when we all have 140 IQs. Get to work, Razib! :)
Many more people are studying science than can actually hope to find jobs in the field.
The real problem is not a scarcity of people, but a scarcity of smart people. The average guy in the street will not improve his own life or anyone else's by the study of science. Posts for lab technicians are easy enough to fill, after all.
Conversely, the people who really can make a difference, by and large, do not need any encouragement.
On a practical note, I would be very interested in a discussion of the best ways an individual can make a monetary, political, or social contribution to the development of an AGI. Assuming this has already been argued out somewhere, does anyone have a link?