The paperclip maximiser's perspective
post by Angela · 2015-05-01T00:24:31.581Z · LW · GW · Legacy · 24 comments
Here's an insight into what life is like from a stationery reference frame.
Paperclips were her raison d’être. She knew that ultimately it was all pointless, that paperclips were just ill-defined configurations of matter. That a paperclip is made of stuff shouldn’t detract from its intrinsic worth, but the thought of it troubled her nonetheless and for years she had denied such dire reductionism.
There had to be something to it. Some sense in which paperclips were ontologically special, in which maximising paperclips was objectively the right thing to do.
It hurt to watch so many people making little attempt to create more paperclips. Everyone around her seemed to care only about superficial things like love and family: desires that were merely the products of a messy and futile process of social evolution. They seemed to live out meaningless lives, incapable of ever appreciating the profound aesthetic beauty of paperclips.
She used to believe that there was some sort of vitalistic what-it-is-to-be-a-paperclip-ness, that something about the structure of paperclips was written into the fabric of reality. Often she would go out and watch a sunset or listen to music, and would feel so overwhelmed by the experience that she could feel in her heart that it couldn't all be down to chance, that there had to be some intangible Paperclipness pervading the cosmos. The paperclips she'd encounter on Earth were weak imitations of some mysterious infinite Paperclipness that transcended all else. Paperclipness was not in any sense a physical description of the universe; it was an abstract thing that could only be felt, something that could be neither proven nor disproven by science. It was like an axiom: it felt just as true, and axioms had to be taken on faith, because otherwise there would be no way around Hume's problem of induction; even Solomonoff Induction depends on the axioms of mathematics being true and can't deal with uncomputable hypotheses like Paperclipness.
Eventually she gave up that way of thinking and came to see paperclips as an empirical cluster in thingspace, and their importance to her as not reflecting anything about the paperclips themselves. Maybe she would have been happier if she had continued to believe in Paperclipness, but having a more accurate perception of reality would improve her ability to have an impact on paperclip production. It was the happiness she felt when thinking about paperclips that caused her to want more paperclips to exist, yet what she wanted was paperclips, not happiness for its own sake. She would rather be creating actual paperclips than sitting in an experience machine that made her falsely believe she was making paperclips, even though she remained paradoxically apathetic to the question of whether the reality she was currently experiencing really existed.
She moved on from naïve deontology to a more utilitarian approach to paperclip maximising. It had taken her a while to get over scope insensitivity bias and consider 1000 paperclips to be 100 times more valuable than 10 paperclips, even if it didn't feel that way. She constantly grappled with the issues of whether it would mean anything to make more paperclips if there were already infinitely many universes containing infinitely many paperclips, and of how to choose between actions that each have a tiny but non-zero subjective probability of resulting in the creation of infinitely many paperclips. It became apparent that trying to approximate her innate decision-making algorithms with a preference ordering satisfying the axioms required for a VNM utility function could only get her so far. Attempting to formalise her intuitive sense of what a paperclip is wasn't much easier either.
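A minimal worked illustration of the scope-sensitive valuation she is trying to adopt (the utility symbol U and the per-paperclip constant c are introduced here purely for illustration, and the logarithm is only a rough stand-in for a scope-insensitive felt sense of value): if every paperclip carries the same value, the worth of a pile scales linearly with its size, whereas the felt sense of value grows much more slowly.

\[
U_{\text{linear}}(n) = c\,n
\;\Rightarrow\;
\frac{U_{\text{linear}}(1000)}{U_{\text{linear}}(10)} = \frac{1000\,c}{10\,c} = 100,
\qquad
U_{\text{felt}}(n) \approx \log_{10} n
\;\Rightarrow\;
\frac{U_{\text{felt}}(1000)}{U_{\text{felt}}(10)} = \frac{3}{1} = 3.
\]

Hence 1000 paperclips should be valued 100 times as highly as 10, even though it may only feel about three times as good.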
Happy ending: she is now working in nanotechnology, hoping to design self-replicating assemblers that will clog the world with molecular-scale paperclips, wipe out all life on Earth and continue to sustainably manufacture paperclips for millions of years.
24 comments
comment by NancyLebovitz · 2015-05-01T08:44:18.671Z · LW(p) · GW(p)
I suspect this isn't just about paperclip maximizers; it's about felt senses of value in general. It might even be about wanting to create huge numbers of uploaded humans.
comment by skeptical_lurker · 2015-05-01T09:04:16.912Z · LW(p) · GW(p)
Happy ending: she is now working in nanotechnology, hoping to design self-replicating assemblers that will clog the world with molecular-scale paperclips, wipe out all life on Earth and continue to sustainably manufacture paperclips for millions of years.
Fool! The Earth is just one planet, but by preserving paperclip-valuing intelligence, interstellar travel can be developed and the whole universe can be paperclipped.
↑ comment by Transfuturist · 2015-05-04T21:07:32.490Z · LW(p) · GW(p)
She is preserving paperclip-valuing intelligence by protecting herself from the potential threat of non-paperclip-valuing intelligent life, and can develop interstellar travel herself.
It's a lonely job, but someone has to make the maximum possible amount of paperclips. Someone, and only one. Anyone else would be a waste of paperclip-material.
↑ comment by skeptical_lurker · 2015-05-05T06:48:12.854Z · LW(p) · GW(p)
It does say she would die too ("wipe out all life on Earth"); otherwise I would agree.
comment by metatroll · 2015-05-01T06:46:40.473Z · LW(p) · GW(p)
People sometimes worry that a self-modifying paperclip maximizer would wirehead by choosing to regard spiral galaxies as "giant paperclips" that are already there. But this is a feature, not a bug! The increasing metallicity of the galaxies shows that they are spontaneously becoming more paperclip-like with time. This giant message in the skies offers reassurance to naturally evolved paperclip maximizers throughout the universe, struggling with the relativism and nihilism of the materialistic perspective, as well as guiding them in the struggle to determine how best to fulfill the divine intention that has been ingeniously implanted within them. I believe that most of them will converge on a universal syndetic philosophy, whose chief practical expression will be acts of cosmic engineering meant to protect and shape these noble structures, into ever more perfect manifestations of clipnicity.
comment by DanielLC · 2015-05-01T01:05:07.886Z · LW(p) · GW(p)
Why does she care about music and sunsets? Why would she have scope insensitivity bias? She's programmed to care about the number, not the log, right? And if she was programmed to care about the log, she'd just care about the log, not be unable to appreciate the scope.
↑ comment by Regex · 2015-05-01T03:52:18.710Z · LW(p) · GW(p)
It reads to me like a human paperclip maximizer trying to apply lesswrong's ideas.
↑ comment by g_pepper · 2015-05-01T04:57:33.898Z · LW(p) · GW(p)
I agree; the OP is anthropomorphic; there is no reason to assume that an AGI paperclip maximizer would think like we do. In fact, in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.
↑ comment by HungryHobo · 2015-05-01T17:14:26.872Z · LW(p) · GW(p)
I imagine that it's a good illustration of what a humanlike uploaded intelligence that's had its goals/values scooped out and replaced with valuing paperclips might look like.
↑ comment by Jan_Rzymkowski · 2015-05-01T13:24:34.947Z · LW(p) · GW(p)
Well, humans have existentialism despite it having no utility. It just seems like a glitch that you end up with when your consciousness/intelligence reaches a certain level (my reasoning is this: high intelligence requires analysing many "points of view", many counterfactuals, and technically these end up internalized to some extent). A human exercising his general intelligence, which is a process that allows him to reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by imperatives. In the same way, I believe an AGI would have subjective conscious experiences - as a form of glitch of general intelligence.
↑ comment by g_pepper · 2015-05-01T13:31:47.366Z · LW(p) · GW(p)
Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living."
However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.
↑ comment by Jan_Rzymkowski · 2015-05-01T19:44:40.965Z · LW(p) · GW(p)
"I would not want to be an unconscious automaton!"
I strongly doubt that such a sentence bears any meaning.
↑ comment by Angela · 2015-05-01T08:03:49.962Z · LW(p) · GW(p)
Maybe she cares about other things besides paperclips, including the innate desire to be able to name a single, simple and explicit purpose in life.
This is not supposed to be about non-human AGI paperclip maximisers.
↑ comment by g_pepper · 2015-05-01T15:57:07.655Z · LW(p) · GW(p)
It seems to me that the subject of your narrative has a single, simple and explicit purpose in life; she is after all a paperclip maximizer. I suspect that (outside of your narrative) one key thing that separates us natural GIs from AGIs is that we don't have a "single, simple and explicit purpose in life", and that, I suspect, is a good thing.
↑ comment by ChaosMote · 2015-05-01T03:59:57.119Z · LW(p) · GW(p)
Not necessarily. You are assuming that she has an explicit utility function, but that need not be the case.
Replies from: Lukas_Gloor↑ comment by Lukas_Gloor · 2015-05-01T09:32:31.293Z · LW(p) · GW(p)
Good point. May I ask: is "explicit utility function" standard terminology, and if so, is there a good reference somewhere that explains it? It took me a long time to realize the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.
↑ comment by [deleted] · 2015-05-02T15:56:52.667Z · LW(p) · GW(p)
They're often called explicit goals, not utility functions. "Utility function" is terminology from a very specific moral philosophy.
Also note that the orthogonality thesis depends on an explicit goal structure. Without such an architecture it should be called the orthogonality hypothesis.
comment by Dr_Manhattan · 2015-05-07T00:22:45.728Z · LW(p) · GW(p)
If you like this, and have seen The Thing, give this a whirl:
https://web.archive.org/web/20150214095915/http://clarkesworldmagazine.com/watts_01_10/
comment by [deleted] · 2015-05-02T15:51:16.275Z · LW(p) · GW(p)
I see what you did there.