Short story: An AGI's Repugnant Physics Experiment

post by ozziegooen · 2019-02-14T14:46:30.651Z · LW · GW · 5 comments

After lots of dedicated work and a few very close calls, a somewhat aligned AGI is created with a complex utility function based on utilitarian-like values. It immediately amplifies its intelligence and ponders what its first steps should be.

Upon some thought, it notices a few strange edge cases in physics. The universe seems fairly limited, unless…

The chances of expanding far past the known universe seem slim, but with enough thought and experimentation there could be a way. The probability is small, but the payoff could be enormous.

The AGI proceeds to spend all of the available resources in the universe to improve its model of fundamental physics.

5 comments

Comments sorted by top scores.

comment by rk · 2019-02-14T19:36:14.570Z · LW(p) · GW(p)

A section of an interesting talk by Anna Salamon relates to this. It makes the point that if the AI's ability to improve its model of fundamental physics is not linear in the amount of the Universe it controls, such an AI would be at least somewhat risk-averse (with respect to gambles that give it different proportions of our Universe).
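
A minimal sketch of that point (my own illustration, not from the talk), assuming a hypothetical sublinear "research progress" function of the fraction of the Universe controlled:

```python
# If returns to resources are sublinear (concave), a 50/50 gamble for the whole
# Universe is worth less than a certain half of it; with linear returns the
# agent is indifferent. The returns functions below are assumptions for illustration.

def expected_value(outcomes, u):
    """Expected utility of a gamble given as (probability, fraction) pairs."""
    return sum(p * u(fraction) for p, fraction in outcomes)

linear = lambda f: f            # progress scales linearly with resources
sublinear = lambda f: f ** 0.5  # diminishing returns to extra resources

gamble = [(0.5, 1.0), (0.5, 0.0)]   # 50% whole Universe, 50% nothing
certain = [(1.0, 0.5)]              # a guaranteed half of the Universe

print(expected_value(gamble, linear), expected_value(certain, linear))        # 0.5 vs 0.5: indifferent
print(expected_value(gamble, sublinear), expected_value(certain, sublinear))  # 0.5 vs ~0.71: prefers the sure thing
```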

comment by Donald Hobson (donald-hobson) · 2019-02-14T15:31:50.252Z · LW(p) · GW(p)

This is an example of a Pascal's mugging. Tiny probabilities of vast rewards can produce weird behavior. The best-known solutions are either a bounded utility function or an antipascalene agent: an agent that ignores the best x% and worst y% of possible worlds when calculating expected utilities. (It can be money-pumped.)
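
A rough sketch of that trimming idea (an illustration, not a standard implementation); the probability-utility pairs and cutoff fractions are hypothetical:

```python
# Drop the best x and worst y fractions of probability mass across possible
# worlds, then take the expectation over what remains and renormalize.

def trimmed_expected_utility(worlds, x=0.01, y=0.01):
    """worlds: list of (probability, utility) pairs whose probabilities sum to 1."""
    ordered = sorted(worlds, key=lambda w: w[1])  # worst to best by utility
    kept, cumulative = [], 0.0
    for p, u in ordered:
        lo, hi = cumulative, cumulative + p
        # keep only the slice of this world's probability mass inside [y, 1 - x]
        keep = max(0.0, min(hi, 1.0 - x) - max(lo, y))
        if keep > 0:
            kept.append((keep, u))
        cumulative = hi
    total = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / total

# A Pascal's-mugging-like gamble: a tiny chance of a vast reward.
worlds = [(0.999, 1.0), (0.001, 10**9)]
print(trimmed_expected_utility(worlds))  # ~1.0: the vast-reward tail is ignored
# Plain expected utility would be ~1,000,001, dominated by the tiny-probability tail.
```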

Replies from: shminux, ozziegooen
comment by shminux · 2019-02-14T15:47:24.614Z · LW(p) · GW(p)

Or a low-impact AI. "Don't break what you can't fix given your current level of knowledge and technology."

comment by ozziegooen · 2019-02-14T16:18:11.653Z · LW(p) · GW(p)

Yup, in general, I agree.

comment by Dagon · 2019-02-14T19:28:57.756Z · LW(p) · GW(p)

It's well known that precision is lost at the extremes of calculation. I presume the AGI has appropriate error bars or confidence scores for its estimates of effort and reward for such things, and that it's making the correct judgement to maximize utilitarian value.

This sounds like a grand success to me!