Cross-temporal dependency, value bounds and superintelligence

post by joaolkf · 2014-10-28T15:26:50.424Z

In this short post I will put forth some potential concerns that would be relevant to developing superintelligences if certain meta-ethical effects exist. I do not claim these effects exist, only that it might be worth looking for them, since their existence would make some currently neglected concerns relevant after all.

 

These meta-ethical effects would be a certain kind of cross-temporal dependency of moral value. First, let me explain what I mean by cross-temporal dependency. If value is cross-temporally dependent, then value at t2 can be affected by what happened at t1, independently of any causal influence t1 has on t2. The same event X at t2 could have more or less moral value depending on whether Z or Y happened at t1. For instance, this could be the case in matters of survival. If we kill someone and replace her with a slightly more valuable person, some would argue there was a loss rather than a gain of moral value; whereas if a new person with moral value equal to the difference between the two is created where there was none, most would consider it an absolute gain. Furthermore, some might consider small, gradual, continual improvements better than big, abrupt ones. For example, a person who forms an intention and a careful, detailed plan to become better, and who effortfully works herself into being better, could acquire more value than a person who simply happens to take a pill and instantly becomes a better person - even if they end up as exactly the same person. This is not because effort is intrinsically valuable, but because of personal continuity: there are more intentions, deliberations and desires connecting the two time-slices of the person who changed through effort than connecting the two time-slices of the person who changed by taking a pill. Even though both persons become equally morally valuable in isolated terms, they do so via different paths, and those paths affect their final value differently.

More examples. You are alive now, at t1. If at t2 you were suddenly replaced by an alien individual with the same amount of value you would otherwise have had at t2, then t2 may not have exactly the same amount of value it would otherwise have had, simply because at t1 you were alive and the alien's previous time-slice was not. 365 individuals who each live for one day do not amount to the same value as a single individual living through 365 days. Slice history into one-day periods: each day the universe contains one unique advanced civilization with the same overall moral value; each civilization is completely alien and ineffable to the others, lives for only one day, and is then gone forever. This universe does not seem to hold the same moral value as one in which a single such civilization flourishes for eternity. In all these examples, the value of a period of time seems to be affected by whether certain events occur at other periods. They indicate that there is at least some cross-temporal dependency.
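To make this a bit more concrete, here is a minimal toy sketch in Python. The numbers, the 0.5 "discontinuity discount", and the choice of tracking which individual occupies each instant are all illustrative assumptions of mine, not part of the claim; the only point is that a valuation defined over histories can distinguish two worlds whose instants, taken in isolation, are equally valuable.

```python
# Toy sketch: a valuation defined over histories rather than over isolated instants.
# The 0.5 discontinuity discount and all numbers are made up for illustration.

def instant_value(history):
    """Value that depends only on each instant taken in isolation."""
    return sum(value for _, value in history)

def path_value(history):
    """Value that also depends on continuity between adjacent instants:
    an instant counts fully only if the same individual occupied the
    previous instant; discontinuities are discounted."""
    total = 0.0
    previous = None
    for individual, value in history:
        continuous = previous is None or individual == previous
        total += value * (1.0 if continuous else 0.5)
        previous = individual
    return total

# You living through two days, vs. you being replaced on day two by an
# alien time-slice that is exactly as valuable in isolation.
continuous_history = [("you", 1.0), ("you", 1.0)]
replacement_history = [("you", 1.0), ("alien", 1.0)]

print(instant_value(continuous_history), instant_value(replacement_history))  # 2.0 2.0
print(path_value(continuous_history), path_value(replacement_history))        # 2.0 1.5
```

Both histories look identical instant by instant; only the path-sensitive valuation tells them apart.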

 

Now consider another type of effect: bounds on value. There could be a physical bound - transfinite or not - on the total amount of moral value that can be present per instant. For instance, if moral value rests mainly on sentient well-being, sentient well-being can be characterized as a particular kind of computation, and there is a bound on how much of that computation can be performed per instant, then there is a bound on the amount of value per instant. If, as seems plausible, we are currently extremely far from that bound, and the bound will eventually be reached by a superintelligence (or some other structure), then the total moral value of the universe would be dominated by the value at the bound, since regions where the bound has not been reached would make negligible contributions. The faster the bound can be reached, the more negligible pre-bound values become.
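As a rough sketch of why the bound would dominate, assume (purely for illustration) that per-instant value ramps up linearly until it hits the bound and then stays there for a long horizon. The bound B, the horizon, and the linear ramp below are arbitrary assumptions, not estimates.

```python
# Toy model: total value as a sum of per-instant value, capped at a bound B.
# B, HORIZON and the linear ramp are illustrative assumptions only.

B = 1.0e9          # hypothetical bound on moral value per instant
HORIZON = 1.0e12   # instants spent at the bound afterwards

def total_value(instants_to_reach_bound):
    pre_bound = 0.5 * B * instants_to_reach_bound   # area under a linear ramp from 0 to B
    post_bound = B * HORIZON                        # value accumulated once the bound is reached
    return pre_bound + post_bound

slow = total_value(1.0e6)   # bound reached slowly
fast = total_value(1.0e3)   # bound reached quickly

# The post-bound term dominates in both cases, and the faster the bound is
# reached, the smaller the pre-bound contribution is as a fraction of the total.
print((0.5 * B * 1.0e6) / slow)   # ~5e-7
print((0.5 * B * 1.0e3) / fast)   # ~5e-10
```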

 

Finally, if there is a form of cross-temporal dependence of value whereby the events leading up to a superintelligence could alter the value of this physical bound, then we ought not only to make sure we construct a superintelligence safely, but also to do so along the path that maximizes the bound. It might be that an overly abrupt transition to superintelligence would decrease the bound, so that all future moral value would be diminished by the fact that there was a huge discontinuity in the events leading to that future. Even small decreases in the bound would have dramatic effects. Although I do not know of any plausible cross-temporal effect of this kind, the question seems to deserve at least a minimal amount of thought. Both cross-temporal dependency and bounds on value seem plausible (in fact, I believe some forms of them are true), so it is not prima facie inconceivable that cross-temporal effects could shift the bound up or down.
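Continuing the toy model above (same made-up numbers), here is why even a small decrease in the bound would swamp everything else:

```python
# Same illustrative numbers as before: B is the bound, HORIZON the number of
# instants spent at the bound. Compare a careful path that preserves the bound
# with an abrupt path that reaches it 1000x sooner but lowers it by 1%.

B = 1.0e9
HORIZON = 1.0e12

careful = 0.5 * B * 1.0e6 + B * HORIZON                   # slow ramp, full bound
abrupt = 0.5 * (0.99 * B) * 1.0e3 + (0.99 * B) * HORIZON  # fast ramp, bound cut by 1%

print(careful - abrupt)   # ~1e19: value lost to the 1% lower bound
print(0.5 * B * 1.0e6)    # 5e14: the careful path's entire pre-bound era
# The loss from a slightly lower bound dwarfs anything that happens pre-bound.
```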

2 comments


comment by diegocaleiro · 2014-10-28T17:50:13.257Z

The effect you are mentioning would be magnified for fast-running simulations and slowed down for sluggish, slumbering ones. So it seems that the relevant unit here is time-steps, subjective time, Markov blankets, or some other form of causally constrained model that can be accelerated in beings running at faster speeds.

That said, the effect seems possibly real, and many believe that identity is valuable and determine its continuity as a function of the slope of change beings undergo.

The hypothesis you put forth, then, is that the total value of utilitronium per time-step may be higher depending on how the universe got to be filled with densely packed utilitronium, so we should pay attention to the shape and rate of the shockwave, not only to its final state.

The question I'd like to see addressed, assuming you are right, is whether the relevant factors which could compromise or multiply the final absolute value happen before or after the creation of a superintelligence. If after, then the superintelligence itself should decide how to progress.

comment by joaolkf · 2014-10-29T12:57:35.474Z

The discontinuity would increase as subjective time per instant decreased and as subjective change per instant increased. If you have a lot of change but also a lot of subjective time, it's fine. So running at faster speeds is fine, as long as the number of subjective steps is the same.

Yes, the idea, simply put, is that eventually the universe will be filled with utilitronium, and I'm asking whether anything that happens before that can impact the value of the utilitronium, i.e. the maximal amount of value.

Given that the most plausible cross-temporal dependencies we know of involve continuity, the transition from humans to a superintelligence is the best candidate here. Which means the relevant factors would come before the creation of a superintelligence.