Comments
1) You assign no probability to the AI being "Unfriendly", as you put it. In particular, you assume that AI = AI that kills everyone. For all I know that probability could be 0, and I am certainly of the opinion that it is very low.
2) The idea that the number of papers counts but the quality doesn't (except in that they are "good") is ridiculous: not only could one excellent paper be worth 1000 "good" ones, the 1000 "good" ones may never even be written if the excellent one comes first.
IMHO the only way to assess the risk of "unfriendly" AI is to build an AI (carefully) and ask it :)
Assuming you are a god, I would assign a higher probability to the proposition that you are just testing me and will punish me with eternal damnation for my greed should I accept; thus my expected utility is in fact infinitely negative.
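To spell out that arithmetic (a minimal sketch; the symbols are my own illustrative assumptions, not anything stated in the original exchange): let p > 0 be the probability of the "testing god" scenario and R the finite promised reward. The expected utility of accepting is then

E[U] = (1 - p) * U(R) + p * U(eternal damnation) = (1 - p) * U(R) + p * (-infinity) = -infinity,

since any nonzero weight on an infinitely negative outcome swamps every finite upside.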