Comments

Comment by superads91 on Unbounded Intelligence Lottery · 2022-07-08T17:12:32.587Z · LW · GW

True, in a way. Without solving said mystery (of how an animal brain produces not only calculations but also experiences) you could theoretically create philosophical zombie uploads. But in this post what is really desired is to save all conscious beings from death and disease by uploading them, so to that effect (the most important one) it still looks impossible.

(I deleted my post because in hindsight it sounded a bit off topic.)

Comment by superads91 on What if LaMDA is indeed sentient / self-aware / worth having rights? · 2022-06-18T16:21:39.714Z · LW · GW

I never linked complexity to absolute certainty of something being sentient or not, only to a pretty good likelihood. The complexity of any known calculation+experience machine (most animals, from insects upward) is undeniably far greater than that of any current Turing machine. Therefore it's reasonable to assume that consciousness demands a lot of complexity, certainly much more than that of a current language model. To generate experience is fundamentally different from generating only calculations. Yes, this is an opinion, not a fact. But so is your claim!

I know for a fact that at least one human is conscious (myself) because I can experience it. That's still the strongest reason to assume it, and it can't be called into question the way you did.

Comment by superads91 on What if LaMDA is indeed sentient / self-aware / worth having rights? · 2022-06-17T16:08:53.223Z · LW · GW

"There is no reason to think architecture is relevant to sentience, and many philosophical reasons to think it's not (much like pain receptors aren't necessary to feel pain, etc.)."

That's just nonsense. A machine that only performs calculations, like a pocket calculator, is fundamentally different in architecture from one that does calculations and generates experiences. All sentient machines that we know of have the same basic architecture. All non-sentient calculation machines also have the same basic architecture. The likelihood that sentience will arise in the latter architecture as we scale it up is therefore not zero, but quite low. The likelihood that it will arise in a current language model is even lower: it doesn't need to sleep, it could function for a trillion years without getting tired, and we know pretty well how it works, which is fundamentally different from an animal brain and fundamentally similar to a pocket calculator.

"On one level of abstraction, LaMDA might be looking for the next most likely word. On another level of abstraction, it simulates a possibly-Turing-test-passing person that's best at continuing the prompt."

It takes way more complexity to simulate a person than LaMDA's architecture has, if it's possible at all on a Turing machine. A human brain is orders of magnitude more complex than LaMDA.

"The analogy would be to say about human brain that all it does is to transform input electrical impulses to output electricity according to neuron-specific rules."

With orders of magnitude more complexity than LaMDA. So much so that after decades of neuroscience we still don't have a clue how consciousness is generated, while we have a pretty good idea of how LaMDA works.

"a meat brain, which, if we look inside, contains no sentience"

Can you really be so sure? Just because we can't see it yet doesn't mean it doesn't exist. Also, to deny consciousness is the biggest philosophical fallacy possible, because all that one can be sure exists is one's own consciousness.

"Of course, the brain claims to be sentient, but that's only because of how its neurons are connected."

Like I said, to deny consciousness is the biggest possible philosophical fallacy. No proof is needed that a triangle has 3 sides, and the same goes for consciousness. Unless you're giving the word other meanings.

Comment by superads91 on What if LaMDA is indeed sentient / self-aware / worth having rights? · 2022-06-16T22:57:38.683Z · LW · GW

There's just no good reason to assume that LaMDA is sentient. Architecture is everything, and its architecture is just the same as other similar models': it predicts the most likely next word (if I recall correctly). Being sentient involves way more complexity than that, even for something as simple as an insect. Its claiming to be sentient might just mean it was mischievously programmed that way, or that it found this to be the most likely succession of words. I've seen other language models and chatbots claim they were sentient too, though perhaps ironically.
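To make "it predicts the most likely next word" concrete, here is a minimal toy sketch of that kind of loop. To be clear, this is not LaMDA's actual code or architecture: the bigram table and the words in it are made up for illustration, and a real model uses a huge learned neural network instead of a lookup table. But the basic shape - score the possible continuations, pick the likeliest, append, repeat - is the same.

```python
# Toy sketch of "predict the most likely next word" (illustrative only).
# A real language model replaces this hand-written bigram table with a
# learned neural network over a vocabulary of tens of thousands of tokens.
toy_bigram_probs = {
    "i":  {"am": 0.6, "think": 0.3, "feel": 0.1},
    "am": {"sentient": 0.5, "a": 0.4, "here": 0.1},
    "a":  {"language": 0.7, "person": 0.3},
}

def next_word(prev_word: str) -> str:
    """Return the single most likely continuation of prev_word."""
    candidates = toy_bigram_probs.get(prev_word, {})
    if not candidates:
        return "<end>"
    return max(candidates, key=candidates.get)

def continue_prompt(first_word: str, max_words: int = 5) -> list[str]:
    """Greedily extend a one-word prompt, one most-likely word at a time."""
    words = [first_word]
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return words

print(continue_prompt("i"))  # ['i', 'am', 'sentient']
```

Nothing in that loop requires the system to experience anything; it only requires picking the highest-scoring continuation, which is the point being made here about calculation versus experience.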

Perhaps as importantly, there's also no good reason to worry that it is being mistreated, or even that it can be. It has no pain receptors, it can't be sleep deprived because it doesn't sleep, can't be food deprived because it doesn't need food...

I'm not saying that it is impossible that it's sentient, just that there is no good reason to assume that it is. That, plus the fact that it doesn't seem to be mistreated and seems almost impossible to mistreat, should make us less worried. Anyway, we should always play it safe and never mistreat any "thing".

Comment by superads91 on Why I don't believe in doom · 2022-06-09T16:09:47.110Z · LW · GW

Right then, but my original claim still stands: your main point is, in fact, that it is hard to destroy the world. Like I've explained (hacking into nuclear codes), this doesn't make any sense. If we create an AI better than us at code, I don't have any doubts that it CAN easily do it, if it WANTS to. My only doubt is whether it will want to or not. Not whether it will be capable, because like I said, even a very good human hacker in the future could be capable.

At least the type of AGI that I fear is one capable of Recursive Self-Improvement, which will unavoidably attain enormous capabilities. Not some prosaic non-improving AGI that is only human-level. To doubt whether the latter would have the capability to destroy the world is kinda reasonable, to doubt it about the former is not.

Comment by superads91 on Why I don't believe in doom · 2022-06-08T16:29:08.248Z · LW · GW

The post is clearly saying "it will take longer than days/weeks/months SO THAT we will likely have time to react". Both are highly unlikely. It wouldn't take a proper AGI weeks or months to hack into the nuclear codes of a big power; it would take days or even hours. That gives us no time to react. But the question here isn't even about time. It's about something MORE intelligent than us which WILL overpower us if it wants, be it on the 1st or the 100th try (nothing guarantees we can turn it off after the first failed strike).

Am I extremely sure that an unaligned AGI would cause doom? No. But to be extremely sure of the opposite is just as irrational. For some reason it's called a risk - it's something that has a certain probability, and given that we all should agree that that probability is high enough, we all should take the matter extremely seriously regardless of our differences.

Comment by superads91 on Why I don't believe in doom · 2022-06-08T15:45:03.251Z · LW · GW

Your argument boils down to "destroying the world isn't easy". Do you seriously believe this? All it takes is to hack into the codes of one single big nuclear power, thereby triggering mutually assured destruction, thereby triggering nuclear winter and effectively killing us all with radiation over time.

In fact you don't need AGI to destroy the world. You only need a really good hacker, or a really bad president. In fact we've been close about a dozen times, so I hear. If Stanislav Petrov had listened to the computer in 1983 that indicated a 100% probability of an incoming nuclear strike, the world would have been destroyed. If all three officers of the Soviet submarine in the Cuban Missile Crisis had agreed to launch what they mistakenly thought would be a nuclear counterstrike, the world would have been destroyed. Etc. etc.

Of course there are also other easy ways to destroy the world, but this one is enough to invalidate your argument.

Comment by superads91 on · 2022-05-05T18:08:33.422Z · LW · GW

"You may notice that the whole argument is based on "it might be impossible". I agree that it can be the case. But I don't see how it's more likely than "it might be possible"."

I never said anything to the contrary. Are we allowed to discuss things when we're not sure whether they might be possible or not? It seems that you're against this.

Comment by superads91 on · 2022-05-05T18:04:30.649Z · LW · GW

Tomorrow people matter, in terms of leaving them a place in minimally decent conditions. That's why when you die for a cause, you're also dying so that tomorrow people can die less and suffer less. But in fact you're not dying for unborn people - you're dying for living ones from the future.

But to die to make room for others is simply to die for unborn people. Because them never being born is no tragedy - they never existed, so they never missed anything. But living people actually dying is a tragedy.

And I'm not denying that giving life is a great gift. Or should I say, it could be a great gift, if this world were at least acceptable, which it's far from being. It's just that not giving it doesn't hold any negative value; it's just neutral instead of positive. Whereas taking a life does hold negative value.

It's as simple as that.

Comment by superads91 on · 2022-05-04T23:46:17.208Z · LW · GW

I can see the altruism in dying for a cause. But it's a leap of faith to claim, from there, that there's altruism in dying in itself. Die for what, to make room for others to be born? Unborn beings don't exist; they are not moral patients. It would be perfectly fine if no one else was born from now on - in fact it would be better than even one single person dying.

Furthermore, if we're trying to create a technologically mature society capable of discovering immortality, it will perhaps be capable of colonizing other planets much sooner. So there are trillions of empty planets to put all the new people on before we have to start taking out the old ones.

To die to make room for others just doesn't make any sense.

"consciousness will go on just fine without either of us specifically being here"

It sure will. But that's like saying that money will go on just fine if you go bankrupt. I mean, sure, the world will still be full of wealth, but that won't make you any less poor. Now imagine this happening to everyone inevitably. Sounds really sh*tty to me.

"Btw I'm new to this community,"

Welcome then!

Comment by superads91 on · 2022-05-04T21:49:52.841Z · LW · GW

To each paragraph:

  1. Totally unfair comparison. Do you really think that immortality and utopia are frivolous goals? So maybe you don't really believe in cryonics or something. Well, I don't either. But transhumanism is way more than that. I think that its goals with AI and life extension are anything but a joke.

  2. That's reductive. As an altruist, I care about all other conscious beings. Of course maintaining sanity demands some distancing, but that's that. So I'd say I'm a collectivist. But one person doesn't substitute for another. Others continuing to live will never make up for those who die. Ceasing to exist is of the utmost cruelty, and there's nothing that can compensate for that.

  3. I have no idea what consciousness is scientifically, but morally I'm pretty sure it is valuable. All morality comes from seeking the well-being of conscious beings. So if there's any value system, consciousness must be at its center. There's not much explaining needed here; it's just that everyone wants to be well - and to be.

  4. Like I said, every conscious being wants to exist. It's just the way we've been programmed. All beings matter, myself included. I goddamn want to live; that is the basis of all wants and of all rights. Have I been brainwashed? Religions have been brainwashing people with the exact opposite for millennia - that death is okay, either because we go to heaven, according to the West, or because we'll reincarnate or we're part of a whole, according to the East. So, quite on the contrary, I think I have been de-brainwashed.

  5. An unborn person isn't a tragedy. A dead one is. So it's much more important to care about the living than the unborn.

  6. If most people are saying that AGI is decade(s) away, then we aren't that far off.

As for raising children as best we can, I think that's just common sense.

  1. I partly agree. It would be horrible if Genghis Khan or Hitler never died. But we could always put them in a really good prison. I just don't wanna die and I think no minimally decent person deserves to, just so we can get rid of a few psychopaths.

Also we're talking about immortality not now, but in a technological utopia, since only such could produce it. So the dynamics would be different.

As for fresh new perspectives, in this post I propose selective memory deletion with immortality. So that would contribute to that. Even then, getting fresh new perspectives is pretty good, but nowhere near being worth the ceasing of trillions of consciousnesses.

Comment by superads91 on · 2022-05-04T18:24:36.028Z · LW · GW

"You are a clone of your dead childhood self."

Yes, that's a typical Buddhist-like statement, that we die and are reborn each instant. But I think it's just incorrect - my childhood self never died. He's alive right now, here. When I die the biological death, then I will stop existing. It's as simple as that. Yet I feel like Buddhism, and Eastern religion in general, does this and other mental gymnastics to comfort people.

"So you either stick with modernism (that transhumanism is the one, special ideology immune from humanity's tragic need to self-sedate), or dive into the void"

There are self-sedating transhumanists, for sure. Like, if you think there isn't a relevant probability that immortality just won't work, or if you're optimistic about the AI control problem, you're definitely a self-sedating transhumanist. I try to not be one as much as possible, but maybe I am in some areas - no one's perfect.

But it's pretty clear that there's a big difference between transhumanism and religions. The former relies on science to propose solutions to our problems, while the latter is based on the teachings of prophets, people who thought that their personal intuitions were the absolute truth. And, in terms of self-sedating ideas, if transhumanism is a small grain of Valium, religion is a big fat tab.

"It's hard to say anything about reality when the only thing you know is that you're high af all the time."

I agree. I claim uncertainty on all my claims.

"Every day the same sun rises, yet it's a different day. You aren't the sun, you're the day.

Imagine droplets of water trapped in a cup, then poured back into the ocean. Water is consciousness, your mind is the cup."

Yeah, yeah, yeah, I know, I know, I've heard the story a thousand times. There's only one indivisible self/consciousness/being, and we're just instances of it. Well, you can believe that if you want; I don't have the scientific evidence to disprove it. But neither do you have the evidence to prove it, so I can also disbelieve it. My intuition clearly disbelieves it. When I die biologically it will be blackout. It's cruel af.

"Imagine if their reign extended infinitely. But for the grace of Death might we soon unlock Immortality."

Either it's too deep or I'm too dumb; I didn't quite get it. Please explain less poetically.

Comment by superads91 on · 2022-05-04T17:42:36.684Z · LW · GW

Still, that could all happen with philosophical zombies. A computer agent (AI) doesn't sleep and can function forever. These two factors are what lead me to believe that computers, as we currently define them, won't ever be alive, even if they ever come to emulate the world perfectly. At best they'll produce p-zombies.

Comment by superads91 on · 2022-05-04T16:00:30.314Z · LW · GW

"I'm feeling enthusiastic to try to make it work out, instead of being afraid that it won't."

Well, for someone who's accusing me of still emotionally defending a wrong mainstream norm (deathism), you're doing the same thing yourself by espousing empty positivism. Is it honest to feel enthusiastic about something when your probabilities are grim? The probabilities should come first, not how you feel about them.

"It's true that I lack the gear-level model explainig how it's possible for me to exist for quadrillion years."

Well, I do have one to prove the opposite: the brain is finite, and as time tends to infinity so do memories, and it might be impossible to trim memories, like we do on a computer, without destroying the self.

"For every argument "what if it's impossible to do x and x is required to exist for quadrillion years" I can automatically construct counter arguments like "what if it's actually possible to do x" or "what if x is not required"."

That's fine! Are we allowed to have different opinions?

"How do you manage to get 70-80% confidence level here? This sounds overconfident to me."

Could be. I'll admit that it's a prediction based more on intuition than reasoning, so it's not of the highest value anyway.

Comment by superads91 on · 2022-05-04T12:45:36.808Z · LW · GW

It does ring true to me a bit. How could it not, when one cannot imagine a way to exist forever with sanity? Have you ever stopped to imagine, just relying on your intuition, what it would be like to live for a quadrillion years? I'm not talking about a cute few thousand like most people imagine when we talk about immortality. I'm talking about proper gazillions, so to speak. Doesn't it scare the sh*t out of you? Just like Valentine says in his comment, it's curious how very few transhumanists have ever stopped to stare at this abyss.

On the other hand, I don't think anyone hates death more than me. It truly makes me utterly depressed and hopeless. It's just that I don't see any possible alternative to it. That's why I'm pessimistic about the matter - both my intuition and reasoning really point to the idea that it's technologically impossible for any conscious being to exist for a quadrillion years, although not with 100% certainty. Maybe 70-80%.

The ideal situation would be that we lived forever but only ever remembered a short span of time, so that we would always feel "fresh" (i.e. not go totally insane). I'm just not sure if that's possible.

Comment by superads91 on · 2022-05-04T09:24:48.523Z · LW · GW

"Whatever posttranshuman creature inherits the ghost of your body in a thousand years won't be "you" in any sense beyond the pettiest interpretation of ego as "continuous memory""

I used to buy into that Buddhist perspective, but I no longer do. I think that's a sedative, like all religions. Though I will admit that I still meditate, because I still hope to find out that I'm wrong. I hope I do, but I don't have a lot of hope. My reason and intuition are clear in telling me that the self is extremely valuable, both mine and that of all other conscious beings, and death is a mistake.

Unless you mean to say that they will only be a clone of me. Then you're right, a clone of me is not me at all, even if it feels exactly like me. But then we would have just failed at life extension anyway. Who's interested in getting an immortal clone? People are interested in living forever themselves, not someone else. At least if they're being honest.

"Your offspring are as much "you" as that thousand year ego projection. "

I've been alive for 30 years - not much, I admit, but I still feel as much like me as on the first day that I can remember. I suspect that as long as the brain remains healthy, that will remain so. But I never ever felt "me" in any other conscious being. Again, Buddhist projection. Sedative. Each conscious being is irreplaceable.

Comment by superads91 on · 2022-05-04T01:20:42.516Z · LW · GW

"if one might make a conscious being out of Silicon but not out of a Turing machine"

I also doubt that btw.

"what happens when you run the laws of physics on a Turing machine and have simulated humans arise"

Is physics computable? That's an open question.

And more importantly, there's no guarantee that the laws of physics would necessarily generate conscious beings.

Even if it did, they could be p-zombies.

"What do you mean by "certainly exists"? One sure could subject someone to an illusion that he is not being subjected to an illusion."

True. But as long as you have someone, it's no longer an illusion. It's like, if you stimulate your pleasure centers with an electrode, and you say "hmmm that feels good", was the pleasure an illusion? No. It may have been physically an illusion, but not experientially, and the latter is what really matters. Experience is what really matters, or is at least enough to make something real. That consciousness exists is undeniable. "I think, therefore I am." Experience is the basis of all fact.

Comment by superads91 on · 2022-05-03T20:57:14.411Z · LW · GW

Can we really separate them? I'm sure that the limitations of consciousness (software) have a physical basis (hardware). I'm sure we could find the physical correlates of "failure to keep up with experience", just as we could find the physical correlates of why someone who doesn't sleep for a few days starts failing to keep up with experience as well.

It all translates down to hardware in the end.

But anyway I'll say again that I admitted it was speculative and not the best example.

Comment by superads91 on · 2022-05-03T18:56:50.069Z · LW · GW

"There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside."

Modern computers could theoretically do anything that a human does, except experience it. I can't draw a line around the part of my brain responsible for it because there is probably none, it's all of it. Even though I'm no neurologist. But from the little I know the brain has an integrated architecture.

Maybe in the future we could make conscious silicon machines (or of whatever material), but I still maintain that the brain is not a Turing machine - or at least not only.

"The outside only works in terms of information."

Could be. The mind processes information, but it is not information (this is an intuitive opinion, and so is yours).

"Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn't made of atoms."

Now we've arrived at my favorite part of the computationalist discourse: claiming or suggesting that consciousness is an illusion. I think that the one thing that can't be an illusion is consciousness. All that certainly exists is consciousness.

As for being made of atoms or not, well, information isn't, either. But it's expressed by atoms, and so is consciousness.

Comment by superads91 on · 2022-05-03T13:56:02.971Z · LW · GW

Perhaps our main difference is that you seem to believe in computationalism, while I don't. I think consciousness is something fundamentally different from a computer program or any other kind of information. It's experience, which is beyond information.

Comment by superads91 on · 2022-05-01T15:25:18.557Z · LW · GW

I think it is factually correct that we get Alzheimer's and dementia in old age because the brain gets worn out. Whether that is because of failing to keep up with all the memory accumulation is more speculative. So I admit that I shouldn't have made that claim.

But the brain gets worn out from what? Doing its job. And what's its job...?

Anyway, I think it would be more productive to at least present an explanation in a couple of lines rather than only saying that I'm wrong.

Comment by superads91 on · 2022-05-01T15:04:06.380Z · LW · GW

"So, odds are, you would not need to explicitly delete anything, it fades away with disuse."

I don't know. Even some old people feel overwhelmed with so many memories. The brain does some clean-up, for sure. But I doubt whether it would work for really long timelines.

"So, I don't expect memory accumulation to be an obstacle to eternal youth. Also, plenty of time to work on brain augmentations and memory offloading to external storage :)"

Mind you that your personal identity is dependent on your "story", which has to encompass all your life, even if only the very key moments. My concern is that, as time tends to infinity, so does the "backbone" of memory, i.e. the minimum necessary, after "trimming" with the best technology possible, to maintain personal identity, which should include memories from all along the timeline. So the question is whether a finite apparatus, the brain, can keep up with infinite data. Is it possible to remain you while not remembering all your story? And I don't even care about not remembering all my story. I just care about remaining me.

We know that a computer can theoretically function forever. Just keep repairing its parts (same could be done with the brain, no doubt) and deleting excess data. But the computer doesn't have a consciousness / personal identity which is dependent on its data. So computationalism might be leading us astray here. (Yes, I don't like computationalism/functionalism.)

Note: this is all speculation, of which I'm quite uncertain. Before all the downvotes come (not that I care, but just to make it clear anyway).

Comment by superads91 on · 2022-05-01T14:38:28.660Z · LW · GW

The concern here is not boredom. I even believe that boredom could be solved with some perfect drug or whatever. The concern here is whether a conscious identity can properly exist forever without inevitably degrading.

Comment by superads91 on · 2022-05-01T14:26:42.223Z · LW · GW

"No. There is nothing I find inherently scary or unpleasant about nonexistence."

Would you agree that you're perhaps in the minority? That most people are scared/depressed about their own mortality?

"I'm just confused about the details of why that would happen. I mean, it would be sad if some future utopia didn't have a better solution for insanity or for having too many memories, than nonexistence.

Insanity: Look at the algorithm of my mind and see how it's malfunctioning? If nothing else works, revert my mindstate back a few months/years?

Memories: offload into long-term storage?"

On insanity, computationalism might be false. Consciousness might not be algorithmic. If it is, you're right, it's probably easy to deal with.

But I suspect that excess memories might always remain a problem. Is it really possible to off-load them while maintaining personal identity? That's an open question in my view.

Especially when, like me, you don't really buy into computationalism.

Comment by superads91 on · 2022-05-01T02:14:18.463Z · LW · GW

"Do I choose between being forced to exist forever, or to die after less than 100 years of existence? Neither. I'd like to have the option to keep living for as long as I want."

I didn't mean being forced to exist forever, or pre-committing to anything. I meant that I really do WANT to exist forever, yet I can't see a way that it can work. That's the dilemma that I mentioned: to die, ever, even after a gazillion years, feels horrible, because YOU will cease to exist, no matter after how much time. To never die feels just as horrible, because I can't see a way to remain sane after a very long time.

Who can guarantee that after x years you would feel satisfied and ready to die? I believe that as long as the brain remains healthy, it really doesn't wanna die. And even if it could reach the state of accepting death, the current me just doesn't ever wanna cease to exist. Doesn't the idea of inevitably, eventually ceasing to exist feel absolutely horrible to you?

Comment by superads91 on · 2022-05-01T01:59:17.451Z · LW · GW

You're coming from a psychological/spiritual point of view, which is valid. But I think you should perhaps consider the scientific perspective a bit more. Why do people get Alzheimer's and dementia in old age? Because the brain fails to keep up with all the experience/memory accumulation. The brain gets worn out, basically. My concern is more scientific than anything. Even with the best psychotherapy or the best meditation or even the best brain tinkering possible, as time tends to infinity so do memories and so does the "work" the brain has to do, and unfortunately the brain is finite, so it will invariably get overwhelmed eventually.

Like, I don't doubt that in a few centuries or millennia we will have invented the technology to stop our brains getting worn out at age 100 and push it to 1000 or 5000, but I don't think we'll ever invent the technology to avoid it past age 1 billion (just a rough estimate of course).

Personally, I'm only 30 and I don't feel tired of living at all, fortunately. Like I said, I wanna live forever. But both my intuition and these scientific considerations tell me that it can't remain like that indefinitely.

Comment by superads91 on · 2022-04-30T18:46:18.437Z · LW · GW

Thanks for the feedback (and the backup). Well, I'd say that half of what I write on LessWrong is downvoted and 40% is ignored, so I don't really care at this point. I don't think (most of) my opinions are outlandish. I never downvote anything I disagree with myself, and there's plenty that I disagree with. I only downvote absurd or rude comments. I think that's the way it should be, so I'm in full agreement on that.

You also got my main point right. That's precisely it. Mind you that "ending" consciousness also feels horrid to me! That's the dilemma. It would be great if we could find a way to achieve never-ending consciousness without it being horrid.

Comment by superads91 on · 2022-04-30T18:03:58.907Z · LW · GW

" I can’t imagine getting bored with life even after a few centuries."

Ok, but that's not a lot of time, is it? Furthermore, this isn't even a question of time. For me, no finite time is enough. It's the mere fact of ceasing to exist. Isn't it simply horrifying? Even if you live a million healthy years, no matter: the fact is that you will cease to exist one day. And then there will be two options. Either your brain will be "healthy" and will therefore dread death as much as it does now, or it will be "unhealthy" and welcome death to relieve its poor condition. Both options seem horrifying to me - and what matters more is that the present me wants to live forever with a healthy, sane brain that isn't tired of life and doesn't have dementia or Alzheimer's.

Like I said in my post, as time tends to infinity, so do memories, so there will inevitably be a time when the brain no longer holds up. Even the best brain that science could buy in a far future, so to speak. It seems that we will inevitably have to cease to exist.

"I’m sure that there will be people that choose to die, and others that choose to deadhead, and those that choose to live with only a rolling century of memories. None of those sound appealing to my imagination at this point."

The latter actually sounds kinda appealing to me, if only I were more convinced that it was possible...

Comment by superads91 on · 2022-04-30T17:45:29.945Z · LW · GW

"I don't think we need to answer these questions to agree that many people would prefer to live longer than they currently are able."

Certainly, but that's not the issue here. The issue here is immortality. Many transhumanists desire to live forever, literally. Myself included. In fact I believe that many people in general do. Extending the human lifespan to thousands of years would already be a great victory, but that doesn't invalidate the search for true immortality, if people are interested in it, which I'm sure some are.

"I have no idea what problems need to be solved to enable people to happily and productively live thousands of years, but I also have no reason to believe they're insurmountable."

Once again you're deviating from the question. I haven't questioned the possibility of living a few thousand years, but of properly living forever.

Comment by superads91 on Key questions about artificial sentience: an opinionated guide · 2022-04-28T17:06:56.060Z · LW · GW

1.) Suffering seems to need a lot of complexity, because it demands consciousness, which is the most complex thing that we know of.

2.) I personally suspect that the biological substrate is necessary (of course I can't be sure), for reasons like the ones I mentioned: sleep and death. I can't imagine a computer that doesn't sleep and can operate for trillions of years being conscious, at least in any way that resembles an animal. It may be superintelligent but not conscious. Again, just my suspicion.

3.) I think it's obvious - it means that we are trying to recreate something that biological systems do (arithmetic, image recognition, playing games, etc.) on these electronic systems called computers or AI. Just like we try to recreate a murder scene with pencils and paper. But the murder drawing isn't remotely a murder, it's only a basic representation of a person's idea of a murder.

4.) Correct. I'm not completely excluding that possibility, but like I said, it would take great luck to get there not on purpose. Maybe not "winning the lottery" luck as I've mentioned, but maybe 1 to 5% probability.

We must understand that suffering takes consciousness, and consciousness takes a nervous system. Animals without one aren't conscious. The nature of computers is so drastically different from that of a biological nervous system (and, at least until now, much less complex) that I think it would be quite unlikely for us to eventually, unintentionally generate this very complex, unique and unknown property of biological systems that we call consciousness. I think it would be a great coincidence.

Comment by superads91 on Key questions about artificial sentience: an opinionated guide · 2022-04-28T16:29:44.434Z · LW · GW

I never said you claimed such either, but Charbel did.

"It is possible that this [consciousness] is the only way to effectively process information"

I was replying to his reply to my comment, hence I mentioned it.

Comment by superads91 on Key questions about artificial sentience: an opinionated guide · 2022-04-28T00:34:20.610Z · LW · GW

Consciousness definitely serves a purpose, from an evolutionary perspective. It's definitely an adaptation to the environment, by offering a great advantage, a great leap, in information processing.

But from there to saying that it is the only way to process information is a long way. I mean, once again, just think of the pocket calculator. Is it conscious? I'm quite sure that it isn't.

I think that consciousness is a very biological thing. What makes me doubt consciousness in non-biological systems the most (let alone in the current ones, which are still very simple) is that they don't need to sleep and they can function indefinitely. Consciousness seems to have these limits. Can you imagine not ever sleeping? Not ever dying? I don't think that would be possible for any conscious being, at least one remotely similar to us.

Comment by superads91 on Key questions about artificial sentience: an opinionated guide · 2022-04-26T21:38:08.984Z · LW · GW

I highly doubt this on an intuitive level. If I draw a picture of a man being shot, is it suffering? Naturally not, since those are just ink pigments on a sheet of cellulose. Suffering seems to need a lot of complexity and also seems deeply connected to biological systems. AI/computers are just a "picture" of these biological systems. A pocket calculator appears to do something similar to the brain, but in reality it's much less complex, much different, and doing something completely different. In reality it's just an electric circuit. Are lightbulbs moral patients?

Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-08T11:10:25.541Z · LW · GW

Everyone knows that the Holocaust wasn't just genocide. It was also torture, evil medical experiments, etc. But you're right, I should have used a better example. Not that I think that anyone really misunderstood what I meant.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-07T04:52:34.959Z · LW · GW

"That said, I'm not convinced that permanent Holocaust is worse than permanent extinction, but that's irrelevant to my point anyway."

Maybe it's not. What we guess other people's values are is heavily influenced by our own values. And if you are not convinced that permanent Holocaust is worse than permanent extinction, then, no offense, but you have a very scary value system.

"If someone isn't convinced by the risk of permanent extinction, are you likely to convince them by the (almost certainly smaller) risk of permanent Holocaust instead?"

Naturally, because the latter is orders of magnitude worse than the former. But again, if you don't share this view, I can't see myself convincing you.

And we also have no idea if it really is smaller. But even a small risk of an extremely bad outcome is reason for high alarm.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-06T18:03:34.304Z · LW · GW

"I love violence and would hope that Mars is an utter bloodbath."

The problem is that biological violence hurts like hell. Even most athletes live with chronic pain, imagine most warriors. Naturally we could solve the pain part, but then it wouldn't be the violence I'm referring to. It would be videogame violence, which I'm ok with since it doesn't cause pain or injury or death. But don't worry, I still got the joke!

""don't kill anyone and don't cause harm/suffering to anyone"

The problem with this one is that the AI's optimal move is to cease to exist."

I've thought about it as well. Big brain idea: perhaps the first AGI's utility function could be to act on the real world as minimally as possible, maybe only with the goal of preventing other people from developing AGI, keeping things like this until we solve alignment? Of course this latter part of policing the world would already be prone to a lot of ambiguity and sophistry, but again, if we program do-nots (do not let anyone else build AGI, plus do not kill anyone, plus do not cause suffering, etc.) instead of dos, it could lead to a lot less ambiguity and sophistry by drastically curtailing the maneuver space. (Not that I'm saying it would be easy.) As opposed to something like "cure cancer" or "build an Eiffel tower".

"And that's already relying on being able to say what 'kill someone' means in a sufficiently clear way that it will satisfy computer programmers"

I don't think so. When the brain irreversibly stops, you're dead. It's clear. This plays into my doubt that perhaps we keep underestimating the intelligence of a superintelligence. I think that even current AIs could be made to discern whether a person is dead or alive, perhaps even better than we can already.

"For instance, when Captain Kirk transports down to the Planet-of-Hats, did he just die when he was disassembled, and then get reborn? Do we need to know how the transporter works to say?"

Maybe don't teletransport anyone until we've figured that out? There the problem is teletransportation itself, not AGI efficiently recognizing what death is at least as well as we do. (But I'd venture to say that it could even solve that philosophical problem, since it's smarter than us.)

"Stuart Russell is a very clever man, and if his approach to finessing the alignment problem can be made to work then that's the best news ever, go Stuart!

But I am a little sceptical because it does seem short on details, and the main worry is that before he can get anywhere, some fool is going to create an unaligned AI, and then we are all dead."

I gotta admit that I completely agree.

"Whereas alignment looks harder and harder the more we learn about it."

I'll say that I'm not that convinced of most of what I've said here. I'm still far more on the side of "control is super difficult and we're all gonna die (or worse)". But I keep thinking about these things, to see if maybe there's a "way out". Maybe we in this community have built up a bias that "only the mega-difficult value alignment will work", when it could be false. Maybe it's not just "clever hacks"; maybe there are simply more efficient and tractable ways to control advanced AI than the intractable value alignment. But again, I'm not even that convinced myself.

"and I am pretty sure that the author of 'Failed Utopia 4-2' has at least considered the possibility that it might not be so bad if we only get it 99%-right."

Exactly. Again, perhaps there are much more tractable ways than 100% alignment. OR, if we could at least solve worst-case AI safety (that is, prevent s-risk) it would already be a massive win.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-05T23:15:53.305Z · LW · GW

"Plenty of 'failed utopia'-type outcomes that aren't exactly what we would ideally want would still be pretty great, but the chances of hitting them by accident are very low."

I'm assuming you've read Eliezer's post "Failed Utopia 4-2", since you use the expression? I've actually been thinking a lot about that, and how that specific "failed utopia" wasn't really that bad. In fact it was even much better than the current world, since disease and aging (and I'm assuming violence too) all got solved at the cost of all families being separated for a few decades, which is a pretty good trade if you ask me. It makes me wonder if there's some utility function for an unaligned AI that could lead to some kind of nice future, like "don't kill anyone and don't cause harm/suffering to anyone". The truth is that in stories about genies the wishes are always very ambiguous, so a "wish" stated negatively (don't do this) might lead to less ambiguity than one stated positively (do that).

But this is even assuming that it will be possible to give utility functions to advanced AI, which I've heard some people say won't be.

This also plays into Stuart Russell's view. His approach seems much simpler than alignment: in short, not letting the advanced AI know its final objective. It makes me wonder whether there could be solutions to the advanced AI problem that would be more tractable than the intractable alignment.

Perhaps it's not that difficult after all.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-05T23:01:01.059Z · LW · GW

"Significant chances of Hell, maybe I take the nice safe extinction option if available."

The problem is that it isn't available. Plus realistically speaking the good outcome percentage is way below 50% without alignment solved.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-05T04:54:29.755Z · LW · GW

"From the point of view of most humans, there are few outcomes worse than extinction of humanity (x-risk)."

That's obviously not true. Which would you prefer: extinction of humanity, or permanent Holocaust?

"Are you implying that most leaders would prefer extinction of humanity to some other likely outcome, and could be persuaded if we focused on that instead?"

Anyone would prefer extinction to, say, a permanent Holocaust. Anyone sane, at least. But I'm not implying that they would prefer extinction to a positive outcome.

"but I also think that those unpersuaded by the risk of extinction wouldn't be persuaded by any other argument anyway"

I'll ask you again: which is worse, extinction or permanent Holocaust?

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-04T18:56:06.997Z · LW · GW

"I mean, I agree that we've failed at our goal. But "haven't done a very good job" implies to me something like "it was possible to not fail", which, unclear?"

Of course it was. Was it difficult? Certainly. So difficult that I don't blame anyone for failing, like I've stated in my comment reply to this post.

It's an extremely difficult problem both technically and politically/socially. The difference is that I don't see any technical solutions, and I have also heard very convincing arguments from the likes of Roman Yampolskiy that such a thing might not even exist. But we can all agree that there is at least one political solution - to not build advanced AIs before we've solved the alignment problem. No matter how extremely difficult such a solution might seem, it actually exists and seems possible.

So we've failed, but I'm not blaming anyone because it's damn difficult. In fact I have nothing but the deepest admiration for the likes of Eliezer, Bostrom and Russell. But my critique still stands: such failure (to get the leaders to care, not the technical failure to solve alignment) COULD be IN PART because most prominent figures like these 3 only talk about AI x-risk and not worse outcomes.

"unless they decide to take it as seriously as they take nuclear proliferation"

That's precisely what we need. I'd assume that most in this community are quite solidly convinced that "AI is far more dangerous than nukes" (to quote our friend Elon). If leaders could adopt our reasoning, it could be done.

"the actual result will be companies need large compliance departments in order to develop AI systems, and those compliance departments won't be able to tell the difference between dangerous and non-dangerous AI."

There are other regulation alternatives. Like restricting access to supercomputers. Or even stopping AI research altogether until we've made much more progress on alignment. But your concern is still completely legitimate. Then again, where are the technical solutions in sight, as an alternative? Should we rather risk dying (again, that's not even the worst risk) because political solutions seem intractable, and only try technical solutions when those seem even more intractable?

Contingency measures, both technical and political, could also be more effective than both full alignment and political solutions.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-04T02:05:11.029Z · LW · GW

"I've come to the conclusion that it is impossible to make an accurate prediction about an event that's going to happen more than three years from the present, including predictions about humanity's end."

Correct. Eliezer has said this himself, check out his outstanding post "There is no fire alarm for AGI". However, you can still assign a probability distribution to it. Say, I'm 80% certain that dangerous/transformative AI (I dislike the term AGI) will happen in the next couple of decades. So the matter turns out to be just as urgent, even if you can't predict the future. Perhaps such uncertainty only makes it more urgent.
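Just to make "assign a probability distribution" concrete, here's a toy back-of-the-envelope sketch. The simplifying assumption of a constant chance per year is mine, purely for illustration; the only input is the 80%-within-two-decades credence mentioned above, which works out to roughly a 7.7% implied chance per year.

```python
# Toy illustration (simplifying assumption: constant chance per year).
# Turn "80% credence of transformative AI within 20 years" into an implied
# annual probability and a cumulative year-by-year curve.
p_within = 0.80   # credence that it happens within the horizon
horizon = 20      # years

# Solve 1 - (1 - h)**horizon == p_within for the annual probability h.
h = 1 - (1 - p_within) ** (1 / horizon)
print(f"implied annual probability: {h:.1%}")  # about 7.7%

for year in (5, 10, 20, 30):
    cumulative = 1 - (1 - h) ** year
    print(f"by year {year}: {cumulative:.0%}")
```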

". I believe that the most important conversation will start when we actually get close to developing early AGIs (and we are not quite there yet), this is when the real safety protocols and regulations will be put in place, and when the rationalist community will have the best chance at making a difference. This is probably when the fate of humanity will be decided, and until then everything is up in the air."

Well, first, like I said, you can't predict the future, i.e. There's No Fire Alarm for AGI. So we might never know that we're close till we get there. It happened with other transformative technologies before.

Second, even if we could, we might not have enough time by then. Alignment seems to be pretty hard. Perhaps intractable. Perhaps straight-up impossible. The time to start thinking of solutions and implementing them is now. In fact, I'd even say that we're already too late. Given such a monumental task, I'd say that we would need centuries, and not the few decades that we might have.

You're about the 3rd person I've responded to in this post saying "we can't predict the future, so let's not panic and let's do nothing until the future is nearer". The sociologist in me tells me that this might be one of the crucial aspects of why people aren't more concerned about AI safety. And I don't blame them. If I hadn't been exposed to key concepts myself, like intelligence explosion, the orthogonality thesis, basic AI drives, etc. etc., I guess I'd have the same view.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-03T23:36:40.823Z · LW · GW

I'm of the opinion that we should tell both the politicians and the smart nerds. In my opinion we just haven't done a very good job. Maybe because we only focus on x-risk, and imo people in power and perhaps people in general might have internalized that we won't survive this century one way or another, be it climate change, nuclear, nano, bio or AI. So they don't really care. If we told them that unaligned AI could also create unbreakable dictatorships for their children perhaps it could be different. If people think of unaligned AI as either Heaven (Kurzweilian Singularity) or death/extinction, it seems plausible that some might wanna take the gamble because we're all gonna die without it anyway.

And I also don't quite agree that, on net, smart nerds only heard the part about AI being really powerful. We've seen plenty of people jump on the AI safety bandwagon. Elon Musk is only one person, and it could well be that it's not so much that he didn't listen to the second part as that he screwed up.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-03T23:22:51.660Z · LW · GW

I meant to reply to the OP, not Eli.

Comment by superads91 on Uncontrollable Super-Powerful Explosives · 2022-04-03T20:27:48.910Z · LW · GW

18 months is more than enough to get a DSA (decisive strategic advantage) if AGI turns out to be anything like what we fear (that is, something really powerful and difficult to control, probably arriving at such a state fast through an intelligence explosion).

In fact, I'd even argue 18 days might be enough. AI is already beginning to solve protein folding (AlphaFold). If it progresses from there and builds a nanosystem, that's more than enough to get a DSA, aka take over the world. We currently see AIs like MuZero learning in hours what would take a human a lifetime to learn, so it wouldn't surprise me if an advanced AI solved advanced nanotech in a few days.

Whether the first AGI will be aligned or not is way more concerning. Not because who gets there first isn't also extremely important. Only because getting there first is the "easy" part.

I don't really think advanced AI can be compared to atomic bombs. The former is a way more explosive technology, pun intended.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-03T07:13:21.419Z · LW · GW

"what I really don't understand is why 'failure to solve the problem in time' sounds so much like 'we're all going to die, and that's so certain that some otherwise sensible people are tempted to just give in to despair and stop trying at all' "

I agree. In this community, most people only talk of x-risk (existential risk). Most people equate failure to align AI to our values with human extinction. I disagree. There are classic literary examples of failure, like "With Folded Hands", where the AI creates an unbreakable dictatorship, not extinction.

I think it's for the sake of sanity (things worse than extinction are much harder to accept), or so as not to scare the normies, who are already quite scared.

But it's also true that unaligned AI could result in a somewhat positive, or even neutral, outcome. I just personally wouldn't put much probability on that. Why? Two concepts that you can look up on LessWrong: the orthogonality thesis (high intelligence isn't necessarily correlated with good values), and basic AI drives (advanced AI would naturally develop dangerous instrumental goals like survival and resource acquisition). And also the fact that it's pretty hard to tell computers to do what we mean, which, scaled up, could turn out very dangerous.

(See Eliezer's post "Failed Utopia 4-2", where an unaligned AGI ends up creating a failed utopia which really doesn't sound THAT bad, and I'd say is even much better than the current world when you weigh all the good and bad.)

Fundamentally, we just shouldn't take the gamble. The stakes are too high.

If you wanna have an impact, AI is the way to go. Definitely.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-03T01:58:16.827Z · LW · GW

A loud strategy is definitely mandatory. Just not too loud with the masses. Only loud with the politicians, tech leaders and researchers. We must convince them that this is dangerous. More dangerous than x-risk even. I know that power corrupts, but I don't think any minimally sane human wants to destroy everything or worse. The problem imo is that they aren't quite convinced, nor have we created strong cooperative means in this regard.

So far this is kinda being done. People like Stuart Russell are quite vocal. Superintelligence by Nick Bostrom had a huge impact. But despite all these great efforts it's still not enough.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-03T01:47:04.139Z · LW · GW

M. Y. Zuo, what you describe is completely possible. The problem is that such a positive outcome, as well as all others, is extremely uncertain! It's a huge gamble. Like I've said in other posts: would you spin a wheel of fortune with, let's say, a 50% probability of Heaven and a 50% probability of extinction or worse?

Let me tell you that I wouldn't spin that wheel, not even with only a 5% probability of bad outcomes. Alignment is about making sure that we reduce that probability to as low as we can. The stakes are super high.

And if like me you place a low probability of acceptable outcomes without alignment solved, then it becomes even more imperative.

Comment by superads91 on If AGI were coming in a year, what should we do? · 2022-04-02T23:29:16.488Z · LW · GW

Could be. I'll concede that the probability that the average person couldn't effectively do anything is much higher than the opposite. But imo some of the probable outcomes are so nefarious that doing nothing is just not an option, regardless. After all, if plenty of average people effectively decided to do something, something could get done. A bit like voting - one vote achieves nothing, many can achieve something.

Comment by superads91 on If AGI were coming in a year, what should we do? · 2022-04-02T22:11:45.251Z · LW · GW

I'm sorry, but doing nothing seems unacceptable to me. There are some in this forum who have some influence on AI companies, so those could definitely do something. As for the public in general, I believe that if a good number of people took AI safety seriously, so that we could make our politicians take it seriously, things would change.

So there would definitely be a need to do something. Especially because, unfortunately, this is not a friendly-AI/paperclipper dichotomy like most people here present it by only considering x-risk and not worse outcomes. I can imagine someone accepting death because we've always had to accept it, but not something worse than it.

Comment by superads91 on MIRI announces new "Death With Dignity" strategy · 2022-04-02T09:31:39.573Z · LW · GW

Personally I don't blame it that much on people (that is, those who care), because maybe the problem is simply intractable. This paper by Roman Yampolskiy is what has convinced me the most of it:

https://philpapers.org/rec/YAMOCO

It basically asks the question: is it really possible to control something much more intelligent than ourselves, which can rewrite its own code?

Actually I wanna believe that it is, but we'd need something on the miracle level, as well as way more people working on it. As well as way more time. It's virtually impossible in a couple decades and with a couple hundred researchers, i.e. intractable as things stand.

That leaves us with political solutions as the only tangible solutions on the table. Or also technical contingency measures, which could perhaps be much easier to develop than alignment, and prevent the worst outcomes.

Speaking of which, I know we all wanna stay sane, but death isn't even the worst possible outcome. And this is what makes the problem so pressing, so much more than nuclear risk or grey goo. If we could indeed at least die with dignity, that alone wouldn't be so bad. (I know this sounds extremely morbid, but like Eliezer, I'm beyond caring.)