Comments

Comment by igorhorst on Universal counterargument against “badness of death” is wrong · 2021-12-22T15:10:11.169Z · LW · GW

Quotes from H.P. Lovecraft's Nietzscheism and Realism (full text):

It must be remembered that there is no real reason to expect anything in particular from mankind; good and evil are local expedients—or their lack—and not in any sense cosmic truths or laws. We call a thing "good" because it promotes certain petty human conditions that we happen to like—whereas it is just as sensible to assume that all humanity is a noxious pest and should be eradicated like rats or gnats for the good of the planet or of the universe. There are no absolute values in the whole blind tragedy of mechanistic nature—nothing is good or bad except as judged from an absurdly limited point of view. The only cosmic reality is mindless, undeviating fate—automatic, unmoral, uncalculating inevitability. As human beings, our only sensible scale of values is one based on lessening the agony of existence. That plan is most deserving of praise which most ably fosters the creation of the objects and conditions best adapted to diminish the pain of living for those most sensitive to its depressing ravages. To expect perfect adjustment and happiness is absurdly unscientific and unphilosophical. We can seek only a more or less trivial mitigation of suffering.

It is good to be a cynic--it is better to be a contented cat--and it is best not to exist at all. Universal suicide is the most logical thing in the world--we reject it only because of our primitive cowardice and childish fear of the dark. If we were sensible we would seek death--the same blissful blank which we enjoyed before we existed.

Of course, H.P. Lovecraft was not suicidal, but that might be because (a) death is inevitable, so there is no reason to rush the road to blissful oblivion, and (b) he was human just like everyone else, so he was just as vulnerable as everyone else. But note that he is in favor of the mitigation of suffering, and attaches no intrinsic value to life itself. He probably would be okay with life extension, but only if you are able to mitigate suffering in the process. If you can't do that, he'll probably complain. Conversely, if you do find a way to convince people to give up their "primitive cowardice" and thereby ease humanity's suffering that way... well, he may consider it.

Comment by igorhorst on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-02T06:24:22.759Z · LW · GW

Submission for the counterfactual AI (inspired by my experiences as a predictor in the "Good Judgment Project"):

  • You are given a list of Yes-No questions (Q1, Q2, Q3, etc.) about future events. Example questions: "Will [Foreign Leader] remain in office by the end of the year?", "Will the IMF report [COUNTRY_A]'s growth rate to be 6% or higher?", "Will [COUNTRY_B] and [COUNTRY_C] sign a peace treaty?", "Will The Arena for Accountable Predictions announce the Turing Test has been passed?"
  • For each question, we expect you to provide a percentage representing the probability that the correct answer is Yes.
  • Your reward is based on your Brier score: the lower the score, the more accurate your predictions, and therefore the greater your reward (a rough sketch of the scoring follows this list).
  • If an "erasure" event occurs, we temporarily hide your answers from all humans (though we must reveal them after the events are complete). Humans will have access to the Yes-No questions we asked you, but not to your probabilities. They will determine the answers manually by waiting for the "future event" deadlines to pass. Once all answers to the Yes-No questions have been independently determined by humans, we reveal your answers (that is, your assigned probabilities for a Yes answer) and use those probabilities to calculate your Brier score, which then decides your final reward.
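
To make the reward rule concrete, here is a minimal sketch of how the Brier-based scoring might work. I'm assuming the simple binary form (forecast - outcome)^2; the actual setup could use a different variant, and the function and variable names below are my own illustration, not part of the submission.

    # Minimal sketch of Brier-based scoring (assumption: simple binary form).
    def brier_score(forecasts, outcomes):
        """Mean squared error between predicted probabilities and 0/1 outcomes."""
        assert len(forecasts) == len(outcomes)
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # The oracle's hidden probabilities for Q1..Q4, revealed only after
    # humans have independently resolved each question (Yes=1, No=0).
    forecasts = [0.72, 0.10, 0.35, 0.05]
    outcomes = [1, 0, 0, 0]

    score = brier_score(forecasts, outcomes)  # lower is better
    print(score)  # the reward could then scale with, e.g., (1 - score)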

Being able to forecast the future is incredibly helpful, even if only to prepare for it.

However, if the question is overly specific, the AGI can produce probabilities that aren't very useful. For example, in the real-world GJP, two countries signed a peace treaty that broke down two days later. Most of us assumed no lasting peace would occur, so we put a low probability on a peace treaty being signed - but since a treaty was in fact signed, we got the question wrong. If we had been maximizing for the lowest Brier score, we should have predicted the existence of a very temporary peace treaty - but that wouldn't have been very useful knowledge for the people who asked the question.
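
To put numbers on that (again assuming the simple binary Brier form): a 5% forecast on the treaty question scores (0.05 - 1)^2 = 0.9025 when the treaty is signed, while an oracle that foresees the short-lived treaty and forecasts 95% scores (0.95 - 1)^2 = 0.0025. The oracle's score is far better, yet its bare "Yes" tells the askers nothing about whether the peace will last.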

Making the question very vague ("Will [COUNTRY_X] be safe, according to what I subjectively think the word 'safe' means?") turns "prediction" into an exercise in determining what future humans will think about the future, which may be kinda useful, but not really what you want.