Short story: An AGI's Repugnant Physics Experiment
post by ozziegooen
score: 9 (7 votes) ·
After lots of dedicated work and a few very close calls, a somewhat aligned AGI is created with a complex utility function based on utilitarian-like values. It immediately amplifies its intelligence and ponders what its first steps should be.
Upon some thought, it notices a few strange edge cases in physics. The universe seems fairly limited, unless…
The chances of expanding far past the known universe seem slim, but with enough thought and experimentation there could be a way. The probability is tiny, but the payoff could be enormous.
The AGI proceeds to spend all of the available resources in the universe to improve its model of fundamental physics.
Comments sorted by top scores.
comment by Donald Hobson (donald-hobson)
· score: 7 (5 votes) · LW
This is an example of Pascal's mugging: tiny probabilities of vast rewards can produce weird behavior. The best-known solutions are a bounded utility function, or an anti-Pascaline agent (an agent that ignores the best x% and worst y% of possible worlds when calculating expected utilities, though such an agent can be money-pumped).
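To make the comparison concrete, here is a minimal sketch (all numbers and function names are illustrative, not from the comment): naive expected utility is dominated by a tiny-probability astronomical payoff, while a bounded utility function or an anti-Pascaline trim of the best-ranked sliver of outcomes neutralizes it.

```python
def expected_utility(outcomes):
    """Naive expectation over a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def bounded_utility(outcomes, cap=1000.0):
    """Clamp utilities to a bound before taking the expectation."""
    return sum(p * min(u, cap) for p, u in outcomes)

def trimmed_utility(outcomes, best_frac=0.01):
    """Anti-Pascaline sketch: drop the best-ranked outcomes whose total
    probability mass fits under best_frac, then renormalize the rest."""
    ranked = sorted(outcomes, key=lambda pu: pu[1], reverse=True)
    mass, kept = 0.0, []
    for p, u in ranked:
        if mass + p <= best_frac:
            mass += p          # discard this sliver of top outcomes
        else:
            kept.append((p, u))
    total = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / total

# A "mugging": a one-in-a-billion chance of an astronomical payoff,
# versus a guaranteed modest gain.
mugging = [(1e-9, 1e15), (1 - 1e-9, 0.0)]
sure_thing = [(1.0, 100.0)]

print(expected_utility(mugging))     # ~1e6: swamps the sure thing
print(expected_utility(sure_thing))  # 100.0
print(bounded_utility(mugging))      # ~1e-6: negligible once capped
print(trimmed_utility(mugging))      # 0.0: the vast-reward sliver is ignored
```

Under naive expectation the mugging wins by four orders of magnitude; under either fix the sure thing wins. The money-pump caveat applies to the trimming agent: which outcomes get discarded depends on how the decision is framed.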
comment by shminux
· score: 4 (4 votes) · LW
Or a low-impact AI. "Don't break what you can't fix given your current level of knowledge and technology."
comment by Dagon
· score: 4 (3 votes) · LW
It's well known that precision is lost at the extremes of calculation. I presume the AGI has appropriate error bars or confidence scores for its estimates of effort and reward for such things, and that it is making the correct judgment to maximize utilitarian value.
This sounds like a grand success to me!