Comments

Comment by Long time lurker on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-08T08:51:57.175Z · LW · GW

/Edit 1: I want to preface this by saying I am just a noob who has never posted on Less Wrong before.

/Edit 2: 

I feel I should clarify my main questions (which are controversial): Is there a reason why turning all of reality into maximized conscious happiness is not objectively the best outcome for all of reality, regardless of human survival and human values?
Should this in any way affect our strategy to align the first AGI, and why?

/Original comment:

If we zoom out and look at the biggest picture philosophically possible, then don't only two things ultimately matter in the end: the level of consciousness and the overall "happiness" of said consciousness(es) throughout all of time and space (counting all realities that have existed, exist, or will exist)?

To clarify: isn't the best possible outcome for all of reality one where every particle is utilized to experience a maximally conscious and maximally "happy" state for eternity? (I put "happy" in quotes because how do you measure the "goodness" of a state, or consciousness itself for that matter?)

After many years of reading countless alignment discussions (of which I have understood maybe 20%), I have never seen this mentioned. So I wonder: if we are dealing with a super-optimizer, shouldn't we be focusing on the super big picture?

I realize this might seem controversial, but I see no rational reason why it wouldn't be true, although my knowledge of rationality is very limited.