Maybe you want to maximise paperclips too
post by dougclow · 2014-10-30T21:40:37.232Z · LW · GW · Legacy · 29 comments
As most LWers will know, Clippy the Paperclip Maximiser is a superintelligence who wants to tile the universe with paperclips. The LessWrong wiki entry for Paperclip Maximizer says that:
The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented
I think that a massively powerful star-faring entity - whether a Friendly AI, a far-future human civilisation, aliens, or whatever - might indeed end up essentially converting huge swathes of matter into paperclips. Whether a massively powerful star-faring entity is likely to arise is, of course, a separate question. But if it does arise, it could well want to tile the universe with paperclips.
Let me explain.
To travel across the stars and achieve whatever noble goals you might have (assuming they scale up), you are going to want energy. A lot of energy. Where do you get it? Well, at interstellar scales, your only options are nuclear fusion or maybe fission.
Iron sits at the peak of the nuclear binding energy curve (strictly, nickel-62 has a slightly higher binding energy per nucleon, but iron-56 has the lowest mass per nucleon). If you have elements lighter than iron, you can release energy through nuclear fusion - sticking nuclei together to make bigger ones. If you have elements heavier than iron, you can release energy through nuclear fission - splitting nuclei apart to make smaller ones. We can do this now for a handful of elements (mostly selected isotopes of uranium, plutonium and hydrogen), but we don't know how to do it for most of the others - yet. It looks thermodynamically possible, though. So if you are a massively powerful and massively clever galaxy-hopping agent, you can extract the maximum energy for your purposes by taking all the non-ferrous matter you can find and turning it into iron, getting energy through fusion or fission as appropriate.
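To put rough numbers on that, here's a back-of-the-envelope sketch (Python; the binding energies are approximate textbook values):

```python
# Back-of-the-envelope: fraction of rest-mass energy released by pushing
# matter down the binding-energy curve to iron-56.
BE_PER_NUCLEON_MEV = {      # binding energy per nucleon, approximate
    "H-1":   0.00,          # a bare proton has no binding energy
    "He-4":  7.07,
    "U-235": 7.59,
    "Fe-56": 8.79,          # essentially the peak of the curve
}
NUCLEON_REST_ENERGY_MEV = 938.9  # rough proton/neutron average

def fraction_released(start, end="Fe-56"):
    """Fraction of rest-mass energy released converting `start` to `end`."""
    diff = BE_PER_NUCLEON_MEV[end] - BE_PER_NUCLEON_MEV[start]
    return diff / NUCLEON_REST_ENERGY_MEV

for isotope in ("H-1", "He-4", "U-235"):
    print(f"{isotope} -> Fe-56: {fraction_released(isotope):.2%} of rest mass")
# H-1   -> Fe-56: ~0.94% (fusion)
# He-4  -> Fe-56: ~0.18% (fusion)
# U-235 -> Fe-56: ~0.13% (fission)
```

Under one percent of the rest mass, but it's the best you can do with nuclear reactions alone.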
You leave behind you a cold, dark trail of iron.
That seems a little grim. If you have any aesthetic sense, you might want to make it prettier, to leave an enduring sign of values beyond mere energy acquisition. With careful engineering, it would take only a tiny, tiny amount of extra effort to leave the iron arranged into beautiful shapes. Curves are nice. What do you call a lump of iron arranged into an artfully-twisted shape? I think we could reasonably call it a paperclip.
Over time, the amount of space that you've visited and harvested for energy will increase, and the amount of space available for your noble goals - or for anyone else's - will decrease. Gradually but steadily, you are converting the universe into artfully-twisted pieces of iron. To an onlooker who doesn't see or understand your noble goals, you will look a lot like a paperclip maximiser. In Eliezer's terms, your desire to do so is an instrumental value, not a terminal value. But - conditional on my wild speculations about energy sources here being correct - it's what you'll do.
29 comments
Comments sorted by top scores.
comment by Shmi (shminux) · 2014-10-30T23:17:20.853Z · LW(p) · GW(p)
What you are describing is an accidental Clippy, just like humans are accidental CO2 maximizers. Which is a fair point: if we meet what looks like an alien Clippy, we should not jump to conclusions that paperclip maximizing is its terminal value.
Also, just to nitpick: if you have a lot of mass available, it would make sense to lump all this iron together and make a black hole, as you can extract a lot more energy from throwing stuff toward it than from nuclear fusion proper. Or you can use fusion first, then throw the leftover iron bricks into the accreting furnace.
So the accidental Clippy would likely present as a black hole maximizer.
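To put rough numbers on the nitpick (a Python sketch; the accretion figures are the standard textbook efficiencies for matter spiralling in from the innermost stable orbit):

```python
# Rough comparison: fraction of rest-mass energy extractable by fusion
# versus by accretion onto a black hole.
from math import sqrt

fusion_h_to_fe = 8.79 / 938.9          # ~0.9%: fuse hydrogen all the way to iron
schwarzschild  = 1 - sqrt(8.0 / 9.0)   # ~5.7%: non-rotating black hole
extreme_kerr   = 1 - 1 / sqrt(3.0)     # ~42%: maximally rotating black hole

for name, eff in [("fusion H -> Fe", fusion_h_to_fe),
                  ("Schwarzschild accretion", schwarzschild),
                  ("extreme Kerr accretion", extreme_kerr)]:
    print(f"{name}: {eff:.1%} of rest mass")
```

An order of magnitude or two better than fusion, which is why the furnace wins.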
Replies from: MarkusRamikin, DanielLC, dougclow
↑ comment by MarkusRamikin · 2014-11-02T16:05:27.875Z · LW(p) · GW(p)
"humans are accidental CO2 maximizers"
You're abusing words. There's a big difference between a producer of X and a maximiser of X.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-11-02T18:31:50.852Z · LW(p) · GW(p)
You have a point, even if it is expressed in a hostile manner.
However, from the outside it is often hard to tell whether something is a goal or a side effect. It certainly looks like whatever afflicted this planet intends to produce as much CO2 as possible, even though fossil fuel burning sometimes has to be delayed to produce enough tech to get access to more fossil fuels. Or you can pick some other artifact of human development, and in the right light it would look like maximizing it is a terminal goal. Like the number of heavy objects in the air at any given time.
Replies from: army1987, MarkusRamikin
↑ comment by A1987dM (army1987) · 2014-11-08T20:03:28.532Z · LW(p) · GW(p)
It certainly looks like whatever afflicted this planet intends to produce as much CO2 as possible
Not sure about that.
↑ comment by MarkusRamikin · 2014-11-02T20:06:12.219Z · LW(p) · GW(p)
I'm sorry, I'm not trying to be hostile. But words have meanings. If you could equate "Y has been observed to produce some X among a myriad of other effects" with "Y is an X maximiser", what's the point in having a word like "maximiser"? Hell, even remove the "myriad of other effects" - a paperclip-making machine or paperclip factory isn't a paperclip maximiser either.
It certainly looks like whatever afflicted this planet intends to produce as much CO2 as possible
It certainly does not.
Well, maybe to a very contrived observer: you'd have to have all the knowledge about our planet necessary to realize that a CO2 increase is happening (not trivial) and that it's not a natural effect of whatever changes the planet naturally undergoes (even less trivial), and somehow magically be ignorant of any other details, to even entertain such a notion. Any more knowledge and you'd immediately begin noticing that our civilisation produces a myriad of effects that it would not bother producing if it were a CO2 maximiser, and that for all the effort and ingenuity that it puts into its works, the effects in terms of CO2 increase are actually rather pathetic.
You're closer to a reasonable use of the term by calling the paperclip-producing advanced civilisation an incidental paperclip maximiser, because the end result will be the same - all matter eventually converted into paperclips. It's still a stretch, though, because a maximiser would take the shortest route towards tiling the universe with paperclips, while the advanced civilisation will be actively trying to minimise paperclipping in proportion to its actual goals - it will try to extract as much usefulness out of every bit of matter converted as it can. So it's still an incidental producer, not maximiser. Would an outside observer be able to tell the difference? I don't know, but I suggest the way this civilisation would be doing a myriad of interesting things instead of simply focusing on the most efficient way to produce paperclips would be an easy giveaway.
Of course if we only look at end results to decide if we call something a "maximiser", then any agent actively pursuing any goals is an "entropy maximiser". At this point I stop feeling like language conveys useful meaning.
if we meet what looks like an alien Clippy, we should not jump to conclusions that paperclip maximizing is its terminal value.
Yes, certainly. It seems to me that the thought you were expressing is actually the opposite of those of your words I've been objecting to: if something looks like a maximiser, it's possible it isn't one.
↑ comment by DanielLC · 2014-11-01T17:44:13.433Z · LW(p) · GW(p)
I think I'd call it incidental clippy. It's not creating paperclips accidentally. It's just that the paperclips are only incidental to its true goal.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-11-01T20:13:56.884Z · LW(p) · GW(p)
Right, incidental fits better.
↑ comment by dougclow · 2014-10-31T06:53:00.425Z · LW(p) · GW(p)
Yes, good point that I hadn't thought of, thanks. It's very easy to imagine far-future technology in one respect and forget about it entirely in another.
To rescue my scenario a little: there'll be an energy cost in gathering the iron together, and the cheapest way is to move it very slowly. So maybe there'll be paperclips left for a period of time between the first pass of the harvesters and the matter ending up at the local black hole.
Replies from: DuncanS
↑ comment by DuncanS · 2014-10-31T22:43:14.597Z · LW(p) · GW(p)
And of course you can throw black holes into black holes as well, and extract even more energy. The end game is when you have just one big black hole, and nothing left to throw into it. At that point you have to change strategy and wait for the black hole to give off Hawking radiation until it completely evaporates.
But all these things can happen later - there's no reason for not going through a paperclip maximization step first, if you're that way inclined...
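Much, much later. A quick sketch of the timescales, using the standard evaporation-time formula (this ignores anything falling in, including CMB photons, so it's only indicative):

```python
# Hawking evaporation time: t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)
from math import pi

G, HBAR, C = 6.674e-11, 1.055e-34, 2.998e8  # SI units
SOLAR_MASS = 1.989e30                        # kg
YEAR = 3.156e7                               # seconds

def evaporation_time_years(mass_kg):
    """Time for an isolated black hole of this mass to evaporate, in years."""
    return 5120 * pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR

print(f"1 solar mass:  {evaporation_time_years(SOLAR_MASS):.1e} years")      # ~2e67
print(f"1e12 suns:     {evaporation_time_years(1e12 * SOLAR_MASS):.1e} years")  # ~2e103
```

So a galaxy-mass hole takes something like 10^103 years: plenty of time for a paperclip phase first.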
Replies from: Gunnar_Zarncke
↑ comment by Gunnar_Zarncke · 2014-11-03T22:05:13.775Z · LW(p) · GW(p)
Except if you're really a paperclip maximizer.
comment by MarkusRamikin · 2014-10-31T16:16:51.599Z · LW(p) · GW(p)
Nice try with the sockpuppet, Clippy, you almost convinced us.
I particularly admire the picture meant to make paperclips look appealing.
comment by jimrandomh · 2014-10-30T23:36:16.188Z · LW(p) · GW(p)
Wouldn't you also want to throw the paperclips into black holes, to harvest the gravitational energy?
comment by Ben Pace (Benito) · 2014-10-31T08:23:16.579Z · LW(p) · GW(p)
I see this as an example of how anyone can rationalise any goal they please.
comment by Douglas_Knight · 2014-10-31T00:33:48.016Z · LW(p) · GW(p)
The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented
I thought that it was chosen in part for a story like: a paperclip manufacturer wants an AI to help it better manufacture paperclips.
comment by Gunnar_Zarncke · 2014-11-01T23:33:22.967Z · LW(p) · GW(p)
This is nice and simple and upvoted a lot. I think it could go to Main. It could add a touch of lightness that isn't often seen there.
Replies from: Eneasz
comment by HungryHobo · 2014-10-31T19:23:45.692Z · LW(p) · GW(p)
Alternative: this system turns out to be practical (http://arxiv.org/pdf/0908.1803v1.pdf) and energy is gained from dumping matter into artificial micro black holes; then you look like an entropy maximiser, gradually turning all matter into light and heat.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-11-08T20:04:56.872Z · LW(p) · GW(p)
When you link to arXiv, consider linking to the abstract rather than straight to the pdf.
comment by DeterminateJacobian · 2014-10-31T04:32:10.867Z · LW(p) · GW(p)
Heh, clever. In a sense, iron has the highest entropy (atomically speaking) of any element. So if you take the claim that an aspect of solving intergalactic optimization problems involves consuming as much negentropy as possible, and that the highest entropy state of spacetime is low-density iron (see shminux's comment on black holes), then Clippy it is. It seems though like superintelligent anything-maximizers would end up finding even higher entropy states that go beyond the merely atomic kind.
...Or even discover ways that suggest that availability of negentropy is not an actual limiter on the ability to do things. Does anyone know the state of that argument? Is it known to be true that the universe necessarily runs out of things for superintelligences to do because of thermodynamics?
Replies from: rule_and_line, dougclow, DanielLC
↑ comment by dougclow · 2014-10-31T07:07:11.636Z · LW(p) · GW(p)
Empirically we seem to be converging on the idea that the expansion of the universe continues forever (see Wikipedia for a summary of the possibilities), but it's not totally slam-dunk yet. If there is a Big Crunch, then that puts a hard limit on the time available.
If - as we currently believe - that doesn't happen, then the universe will cool over time, until it gets too cold (=too short of negentropy) to sustain any given process. A superintelligence would obviously see this coming, and have plenty of time to prepare - we're talking hundreds of trillions of years before star formation ceases. It might be able to switch to lower-power processes to continue in attenuated form, but eventually it'll run out.
This is, of course, assuming our view of physics is basically right and there aren't any exotic possibilities like punching a hole through to a new, younger universe.
Replies from: None, Yosarian2
↑ comment by [deleted] · 2014-11-02T01:11:14.925Z · LW(p) · GW(p)
Barring unknown physics, it is absolutely slam-dunk known that the universe ends in a Big Freeze, not a Big Crunch. Perlmutter et al. got the Nobel Prize in 2011 for discovering that the expansion of the universe is accelerating, due to an unknown effect called, for now, dark energy. Unless there is some future undiscovered transition, the end of the universe will be cold and lonely, as even the nearest galaxies eventually redshift to infinity.
↑ comment by Yosarian2 · 2014-10-31T22:28:12.127Z · LW(p) · GW(p)
I don't remember the exact math, but I believe it was shown that in an expanding and cooling universe, the amount of energy available at any one spot drops over time, but so long as some distant-future entity could slow down its thinking process and energy use arbitrarily, it could live forever in subjective time by steadily slowing down the objective speed of its thought processes. (I think this is Freeman Dyson's "eternal intelligence" argument.) The Last Computer (or energy being, or whatever) would objectively go a longer and longer time between each thought, but from a subjective point of view it would be able to continue forever.
Of course, if the rate of the universe's expansion steadily accelerates indefinitely, that might not work; energy might fall off at too fast a rate for that to be possible. We don't really know enough about dark energy yet to know how that's going to go.
↑ comment by DanielLC · 2014-11-01T17:48:05.831Z · LW(p) · GW(p)
There is a theoretical limit on how much negentropy is required to erase a bit. However, it depends on temperature. Unless the expansion of the universe has a limit, the universe will get arbitrarily cold, and computers could be arbitrarily efficient. Theoretically, you could make a finite amount of energy last an infinite number of computations.
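A sketch of that last claim, using Landauer's bound of kT ln 2 joules per bit erased (the halving schedule is just an illustrative assumption):

```python
# If the universe's temperature halves each "era" and we also halve our
# energy budget each era, erasures per era stay constant: total energy
# spent converges (to 2 J here) while total bit erasures grow without bound.
from math import log

K_B = 1.381e-23  # Boltzmann constant, J/K

def erasures(energy_j, temp_k):
    """Maximum bit erasures allowed by Landauer's bound at this temperature."""
    return energy_j / (K_B * temp_k * log(2))

energy, temp = 1.0, 2.725  # start: 1 J budget at today's CMB temperature
total_bits = 0.0
for era in range(5):
    total_bits += erasures(energy, temp)
    print(f"era {era}: T={temp:.4f} K, spend {energy:.4f} J, "
          f"bits so far {total_bits:.3e}")
    energy /= 2   # halve the budget...
    temp /= 2     # ...but the universe is twice as cold
```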
Replies from: ThisSpaceAvailable
↑ comment by ThisSpaceAvailable · 2014-11-11T07:49:58.781Z · LW(p) · GW(p)
There is a theoretical limit on how much negentropy is required to erase a bit.
I take it that's a lower limit? Your statement might be misinterpreted by those unfamiliar with the mathematical usage of the term "limit".
comment by A1987dM (army1987) · 2014-11-08T20:01:18.126Z · LW(p) · GW(p)
What do you call a lump of iron arranged into an artfully-twisted shape? I think we could reasonably call it a paperclip.
Only if it can hold paper together.
Also, there probably are much more artful ways of shaping iron. Or you can make black holes out of it and use the extra negentropy to make even more artful stuff with non-iron materials.
comment by Froolow · 2014-11-03T12:40:57.518Z · LW(p) · GW(p)
I really enjoyed the article, but I think your argument falls down in the following way:
1) Fission / fusion are the best energy sources we know of, but we can't yet do it for all forms of matter
2) A sufficiently clever and motivated intelligence probably could do it for all forms of matter, because it looks to be thermodynamically possible
3) (Implicit premise) In between now and the creation of a galaxy-hopping superintelligence with the physical nous to fuse / fission at least the majority of matter in its path, there will be no more efficient forms of energy discovered
4) Therefore paperclips (or at least something that looks enough like paperclips that we needn't argue)
Premise 1 is trivially true, premise 2 has just enough wild speculation to make it plausible but still exciting, and the conclusion is supported if premise 3 is true. But premise 3 looks pretty shaky to me - we can already extract (tiny amounts of) energy from the quantum foam, and can at least theoretically extract energy from matter-antimatter collisions (although I don't know if thermodynamics permits either of these methods to be more efficient than fusion). It is a bold judgement to suppose we are at the limits of our understanding of these processes, and bolder still to assume there are no further processes to discover.
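For scale, a rough comparison of the candidates (a Python sketch with order-of-magnitude textbook values; note antimatter is a theoretical ceiling rather than a source, since manufacturing it costs at least as much energy as it returns):

```python
# Fraction of rest-mass energy released per kg of fuel, assuming ideal
# conversion. Values are order-of-magnitude textbook figures.
C = 2.998e8  # speed of light, m/s

yields = {
    "chemical combustion":            5e-10,
    "fission (U-235)":                0.0009,
    "fusion (H -> Fe)":               0.009,
    "black hole accretion (Kerr)":    0.42,
    "matter-antimatter annihilation": 1.0,   # E = mc^2, the hard ceiling
}

for name, frac in yields.items():
    print(f"{name}: {frac:.2%} of rest mass, {frac * C**2:.2e} J/kg")
```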