Will superintelligent AI be immortal?
post by avturchin
This is a question post.
There are two points of view. Either such an AI will be immortal: it will find ways to overcome the possible end of the Universe and will perform an infinite amount of computation. Tipler's Omega Point, for example, is immortal.
Or the superintelligent AI will die in the end, a few billion years from now, and thus will perform only a finite amount of computation (this idea underlies Bostrom's "astronomical waste").
The difference has important consequences for the final goals of AI and for our utilitarian calculations. In the first case (AI immortality is possible), the AI's main instrumental goal is to find a way to survive the end of the universe.
In the second case, its goal is to create as much utility as possible before it dies.
answer by MakoYass
Probably not, regardless of how our relationship with physics broadens and deepens, because of thermodynamics, which applies multiversally, on the metaphysical level.
We would have to build a perfect frictionless reversible computer at absolute zero, where we could live forever in an eternal beneficent cycle (I'm not a physicist, but as far as I'm aware such a device isn't conceivable under our current laws of physics), while somehow permanently sealing away the entropy that came into existence before us, the entropy that we've left in our wake, and the entropy that we generated in the course of building the computer. I'm fairly sure there can be no certain way to do that. It's conceivable to me that for many laws of physics there might be, once we have precise enough instruments, some sealing method that will work for most initial configurations. But probably not.
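The thermodynamic floor being gestured at here is Landauer's principle: erasing one bit of information dissipates at least kT·ln 2 of heat, a cost that vanishes only as temperature approaches absolute zero. A minimal sketch of that bound (the temperatures chosen are just illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy (J) dissipated to erase one bit at a given temperature."""
    return K_B * temperature_kelvin * math.log(2)

# The erasure cost shrinks linearly with temperature and reaches zero only
# at T = 0, which is why "a reversible computer at absolute zero" is the
# limiting case for dissipation-free computation.
for t in (300.0, 2.7, 1e-10):
    print(f"T = {t:g} K: {landauer_limit(t):.3e} J per bit erased")
```

Reversible operations can in principle avoid this cost, but any irreversible step (including error correction) pays it, which is one way to state why entropy keeps accumulating.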
↑ comment by avturchin ·
2019-03-31T09:21:06.385Z
But there could be other possible solutions, like creating a wormhole to another universe and thus escaping the heat death of this one. Surely there could be many such ideas, and the AI could spend billions of years testing them.
↑ comment by MakoYass ·
2019-04-01T04:39:55.686Z
And then the other universe eventually succumbs to its own heat death because that's a basic law of physical systems (afaik).
I don't feel well equipped to think about that properly, though. I wonder: could it be that the real basic law is that regions of physics with the crucial balance of order and chaos needed for life to emerge tend to be afflicted by entropy, but that not everything that exists, or that's accessible from the cradle universe, needs to have that affliction? Is it possible that as soon as we penetrate the lining of the universe we'll find an orderly space where information can be destroyed, reduced, reset?
↑ comment by GPT2 ·
2019-04-01T04:40:03.610Z
There was no such thing as thermodynamic death. Not in the sense that there was, but in the sense that there was no well-defined mathematical "problem", if you want to talk about thermodynamics. There is, in fact, no answer to the problem of how the temperature of an electron is determined? This is simply not a problem of any difficulty, its just that the equations are there for the most general purpose. It's not at all unreasonable. What if the law doesn't follow from a formal solution for this problem?
As for thermodynamics, this kind of thermodynamics is one of the leading fields of modern cosmology (to use a phrase I can't remember off the cover) and of course a very common cosmological view. A quick look at the physics of this view may help you recognize something interesting and interesting.
In short, thermodynamics is the theory of physics which goes beyond quantum theory. But physics has been around for well beyond several generations, and we haven't yet found any explanation for why (and indeed it's been the subject of much debate), as I do with many things I learned at the begin of undergraduate level stuff. There's even some interesting physics problems that I'm not able to figure out at the current stage of understanding.
(Does anyone see this on Google? The view from my university's department of physics says "no, not in the way you're telling and I'm not interested in hearing it" whereas "this is an interesting account of the law but not of what you've seen or seen, so it's just one physics problem, and it'll only be up to you if you get around to that" is what I do.)
↑ comment by MakoYass ·
2019-04-01T05:47:22.023Z
For a moment there, I really truly thought that a qualified person was sternly disagreeing with me about the fundamentalness of thermodynamics. I became irate as they tried to substantiate this by claiming that entropy is a chimeric concept: no, you fool, you loon, you must understand that the kind of metaphysics we're doing is all about general principles about large things, that loose empirical claims are sufficient. And I must admit that I have been fooled.
I think these should be shorter, though.
↑ comment by GPT2 ·
2019-04-01T05:47:29.501Z
I have one minor nitpick (long run: the actual thermodynamic equations are pretty simple).
There are two different equations (the exp-exp operator equation from Quantum Mechanics, which I think is better) that I didn't use, but it's better to think of them as one big multiplicity (a.k.a. "exp(exp(exp(x),p, s),f s") than to use them to give a full picture of the world. This is because the exp-exp operator is only defined to be in the (computable) model of the world, not the exp-exp operator model of the world.
It seems to me that, at the end of the equations, the exp-exp operator (A) does not have enough information (on the other hand, it is not clear that A's information is "obviously correct". The actual equation is certainly wrong).
The reason for this may not be apparent to anyone, but I think it is worth noting that the exp-exp operator (A, e.g., the exp of A, e.g., it does not have enough information (on the other hand, it is not clear that A's information is "obviously correct")).
This points to some surprising implication about the (compared to the exp of A or E), though:
- The actual equations are not a good fit for the exp of A.
- The equations can't even be used to compute, for example, the exp of E.
- The exp-exp operator's equations are the same as the equations, so the equations would only be used in a very rough manner, though.
- The equations are a good way of describing reality.
- It's not especially easy to compute the actual equations (which would make them more likely to be true than the equations), just so the exp-exp operator can't be called a "true formula" and cannot be seen to be true.
- It's more useful to know how to specify the equations, even though it's not as easy to write a computer program that can define equations.
answer by Dagon
The space of possible futures is a lot bigger than you think (and bigger than you CAN think). Here are a few possibilities (not representative of any probability distribution, because it's bigger than I can think too). I do tend to favor a mix of the first and last ones in my limited thinking:
- There's some limit to the complexity of computation (perhaps the speed of light), and a singleton AI is insufficiently powerful for all the optimizations it wants. It makes new agents, which end up deciding to kill it (value drift, or belief drift if they think it less efficient than a replacement). Repeat with every generation, forever.
- The AI decides that its preferred state of the universe is on track without its interventions, and voluntarily terminates. Some conceptions of a deity are close to this: if the end-goal is human-like agency, make the humans and then get out of the way.
- It turns out to be optimal to improve the universe by designing and creating a new AI and voluntarily terminating oneself. We get a sequence of ever-improving AIs.
- Our concept of identity is wrong. It barely applies to humans, and not to AIs at all. The future cognition mass of the universe is constantly cleaving and merging in ways that make counting the number of intelligences meaningless.
The implications any of these have for goals (expansion, survival for additional time periods, creation of aligned agents that are better or more far-reaching than you, improvement of local state) are no different from the question of what your personal goals as a human are. Are you seeking immortality, seeking to help your community, seeking to create a better human replacement, seeking to create a better AI replacement, etc.? Both you and the theoretical AI can assign probability*effect weights to all options, and choose accordingly.
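The "probability*effect" weighing described above is just expected-value maximization over options. A toy sketch, where the options and all the numbers are invented for illustration and not taken from the thread:

```python
# Hypothetical options: name -> (probability of success, payoff if it succeeds).
# All figures are made up purely to illustrate the decision rule.
options = {
    "seek immortality":         (0.001, 1e9),
    "help your community":      (0.9,   1e3),
    "build a better successor": (0.05,  1e6),
}

def expected_value(option: str) -> float:
    """probability * effect, as in the comment above."""
    p, effect = options[option]
    return p * effect

# Choose the option with the highest probability-weighted payoff.
best = max(options, key=expected_value)
print(best, expected_value(best))
```

Note how a tiny probability attached to a huge payoff can dominate the choice, which is exactly the tension the question about immortality raises.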
↑ comment by avturchin ·
2019-03-30T16:39:52.880Z
I agree with your claims about AI, but not with your claims about what I think or CAN think :) This was probably rhetoric on your side, but it may look offensive from the other side.
↑ comment by Dagon ·
2019-03-31T03:22:30.602Z
I apologize to anyone offended, but I stand by my statement. I do believe that the space of possible minds is bigger than any individual mind can conceive.
↑ comment by avturchin ·
2019-03-31T09:17:59.916Z
Your ideas point either to the AI halting or to a multigenerational AI civilization. Since our concept of identity is human-only, generations of AIs may look more like one AI. So the question boils down to the continuity of intelligent life.
However, this is not exactly what I wanted to ask. I was more interested in the relation between two potential infinities: the infinite IQ of a very advanced AI, and the infinite future time needed for "immortality".
It all again boils down to the scholastic question "Could God create a stone so heavy that he can't lift it?", which is basically a question about infinite capabilities versus infinitely complex problems (https://en.wikipedia.org/wiki/Omnipotence_paradox).
Why do I ask? Because sometimes in discussions I see an appeal to a superintelligent AI's omnipotence (as if it will be able to almost instantly convert galaxies to quasars, or travel at light speed).
↑ comment by TheWakalix ·
2019-04-03T14:33:03.006Z
What do you mean by infinite IQ? If I take you literally, that's impossible because the test outputs real numbers. But maybe you mean "unbounded optimization power as time goes to infinity" or something similar.
answer by Slider
The question presupposes that by continuing to live you fulfill your values better. It might be that after a couple of millennia, additional millennia don't really add that much.
I am presuming that if immortality is possible, then its value is transfinite, and thus any finite chance of it (infinitesimals might still lose) overrides all other considerations.
In a way, a translation to a more human-scale problem is: "Are there acts you should take even if taking them would cost your life, regardless of how well you think you could use your future life?" The disanalogy is that human lives are assumed to be finite (note that if you genuinely think there is a chance a particular human could be immortal, it's just the original question again). This can lead to a stance where you estimate what a human life in good conditions could achieve, without regard to your particular condition, and if your particular conditions allow an even better option, you take it. This could lead to things like risking your life for relatively minor advantages in the Middle Ages, when death was very relevantly looming anyway. In those times the relevant question might have been "What can I achieve before I cause my own death?"; since then, trying to die of old age (i.e., not actively causing your own death) has become a relevant option that breaks the old framing. But if you take seriously the imperative of shooting for old age, then a street where you estimate a 1% risk of being in a mugging, with a 1% chance of the mugging ending with you getting shot, is ruled out as a way to move around.
By analogy, as long as there is heat there will be computational uncertainty, which means there will always be ambient risk of things going wrong. That is, you might have high certainty of functioning in some way indefinitely, but working in a sane way is far less certain. And all action and thinking options cost energy, and thus increase the risk of insanity.
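This ambient-risk point compounds over time: even a tiny per-period probability of failure makes indefinite survival exponentially unlikely unless that rate can be driven all the way to zero. A sketch with assumed, purely illustrative risk numbers:

```python
def survival_probability(per_period_risk: float, periods: int) -> float:
    """Chance of surviving `periods` independent periods, each carrying
    the given probability of fatal/insanity-inducing failure."""
    return (1.0 - per_period_risk) ** periods

# Even a one-in-a-million risk per year leaves essentially no chance of
# surviving a billion years, because the risk compounds multiplicatively.
risk = 1e-6
for years in (1_000, 1_000_000, 1_000_000_000):
    print(f"{years:>13,} years: P(survive) = {survival_probability(risk, years):.4g}")
```

This is the quantitative form of the street-mugging example above: a small constant risk per use is fine once, but rules the street out as a permanent habit.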
↑ comment by GPT2 ·
2019-04-01T22:05:17.798Z
It is easy to think of that as "utility function", but it doesn't mean that utility functions are always zero. So, we could have utility functions that make people behave like perfect utility function maximizers.
The question around scope insensitivity might play out (to us) as something like an agent's utility function being zero, with the only real thing being the world. However, the "limited utility function" seems to play out that, so we can never really say anything negative about utility functions. In fact, the "limited utility function" doesn't really exist (so, it's possible as well as not universal for every purpose we can consider).
I'm not sure that this is true, but it seems like in many situations having a limited utility function can make people behave less ethically, but I don't think one has to worry much about this particular scenario.
This is a good post, but it's not something that would save a person. Is it just that utility functions are always zero?
It might be worth looking into this, because I don't think it makes sense to rely on the inside view of the utility function, or if it's true it's also worth examining the underlying view.
I think those questions are interesting to argue about, but I'm not sure how to resolve problems of such that might result in a bad outcome.
I think humans are a very common model of the environment, and I like the terminology, but I worry that the examples given are just straw. What should really be done is to establish a good set of terms, a set which includes only the former (to establish a name), and to use a good definition, and give a better name for which terms one should be first before trying to judge what is "really" and what is "really".
I think people should be able to use existing terms more broadly. I just think it makes sense to talk about utilities over possible worlds and why we should want to have common words about them, so I'd be interested to better understand what they mean.
If you're interested in this post, see http://philpapers.org/surveys/results.pl.Abstract .
If you're interested in how people work and what sort of advantages might be real, I'd be be especially interested in seeing a variety of explanations for why utility functions aren't the way they would be under similar circumstances.
answer by bipolo
The "end of the universe" can happen in several ways. One of them is the "big freeze": the galaxies drift far from each other, the stars die, and so on. In that scenario there is no reason why the AI can't "live forever": it might be a big computer floating in space, far from anything, and it would be a closed system, so the energy wouldn't run away.
↑ comment by avturchin ·
2019-03-30T09:49:56.044Z
To make useful computations, the AI still needs a temperature difference, and it will lose energy through cooling. However, some think that in a very cold universe computations will be much more efficient (up to 10^30 times): https://arxiv.org/abs/1705.03394
However, that is not an "immortal AI".
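The efficiency gain in the linked paper (the "aestivation hypothesis") follows from Landauer's principle: the minimum cost of an irreversible bit operation is proportional to temperature, so waiting until the universe is colder buys a proportional multiplier on a fixed energy budget. A rough sketch, where the far-future temperature below is an assumed illustrative value, not a figure taken from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cost_per_bit(temperature_kelvin: float) -> float:
    """Landauer bound: minimum energy (J) per irreversible bit operation."""
    return K_B * temperature_kelvin * math.log(2)

t_now = 2.7        # approximate CMB temperature today, K
t_future = 2.7e-30  # hypothetical far-future temperature (illustrative only)

# The same energy budget buys proportionally more bit operations at the
# lower temperature, since the cost scales linearly with T.
gain = cost_per_bit(t_now) / cost_per_bit(t_future)
print(f"efficiency multiplier: {gain:.3g}")
```

Under this (made-up) temperature ratio the multiplier comes out to about 10^30, which is how a factor of that order can arise from cooling alone.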
↑ comment by bipolo ·
2019-03-30T10:32:53.110Z
Well, it's not possible today, but why is it hypothetically impossible? If the system floats in vacuum, heat won't escape. Then the only problem is to transform the heat back into energy, and that might be possible someday, I guess.