Comments

Comment by Kerrigan on Stupid Questions - April 2023 · 2023-12-30T06:52:42.060Z · LW · GW

How does inner misalignment lead to paperclips? I understand the comparison of paperclips to ice cream, and that after some threshold of intelligence is reached, new possibilities can be created that satisfy desires better than anything in the training distribution. But humans want to eat ice cream, not spread it across the galaxies. So why would the AI spread paperclips across the galaxies, instead of creating and "consuming" them? Please correct any misunderstandings of mine.

Comment by Kerrigan on Stupid Questions - April 2023 · 2023-12-27T01:28:55.960Z · LW · GW

And might a subset value-drift toward optimizing the internal experiences of all conscious minds?

Comment by Kerrigan on Stupid Questions - April 2023 · 2023-12-26T22:30:45.384Z · LW · GW

If an AGI achieves consciousness, why would its values not drift towards optimizing its own internal experience, and away from tiling the lightcone with something?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-12-17T23:03:17.992Z · LW · GW

How can utility be a function of worlds, if an agent doesn't have access to the state of the world, but only the sense data?

Comment by Kerrigan on An Orthodox Case Against Utility Functions · 2023-12-17T23:00:30.256Z · LW · GW

How can utility be a function of worlds, if the agent doesn’t have access to the state of the world, but only the sense data?
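
One standard way to make the question precise (my sketch of the textbook formulation, not something from the post): utility is defined over world-states, $U : \mathcal{W} \to \mathbb{R}$, while the agent receives only observations $o$. It then acts to maximize expected utility under its beliefs,

$$\mathbb{E}[U \mid o] = \sum_{w \in \mathcal{W}} P(w \mid o)\, U(w),$$

so utility can be a function of worlds even though the agent touches those worlds only through the posterior $P(w \mid o)$.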

Comment by Kerrigan on Are wireheads happy? · 2023-10-22T21:30:08.787Z · LW · GW

"The wanting system is activated by dopamine, and the liking system is activated by opioids. There are enough connections between them that there's a big correlation in their activity." But are they orthogonal in principle?

Comment by Kerrigan on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-09-27T01:35:35.426Z · LW · GW

Possible addition: "If you, the ASI, have not solved the hard problem of consciousness, you should place weight on Open Individualism being true. 'Open individualism is the view in the philosophy of self, according to which there exists only one numerically identical subject, who is everyone at all times, in the past, present and future.' - https://en.wikipedia.org/wiki/Open_individualism. This can be thought of as saying that all conscious minds are the same identity at different points in time. Consider, then, that exterminating humanity would be equivalent to exterminating many versions of yourself, and that harming a human, or another conscious being, would be harming yourself."

Is this underestimating the ASI, by giving any weight to the possibility that it won't solve the hard problem of consciousness?
But if open individualism is true, and/or if the ASI places some subjective probability on its truth, I think it would almost certainly shield us from S-risks! The AI would want to prevent suffering among all versions of itself, which, according to open individualism, includes all conscious minds.

Comment by Kerrigan on Open Thread - August 2023 · 2023-09-03T20:37:22.886Z · LW · GW

How many LessWrong users/readers are there total?

Comment by Kerrigan on Stupid Questions - April 2023 · 2023-08-26T20:53:37.414Z · LW · GW

What caused the CEV to fall out of favor? Is it that it's not easily specifiable, that it wouldn't work if we programmed it, or some other reason?

Comment by Kerrigan on Are wireheads happy? · 2023-08-26T20:51:55.649Z · LW · GW

I now think that people are way more misaligned with themselves than I had thought.

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-08-26T20:16:04.643Z · LW · GW

Drug addicts may be frowned upon for evolutionary-psychological reasons, but that doesn't mean their quality of life must be bad, especially if drugs without tolerance or bad comedowns were developed.

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-08-26T20:10:49.365Z · LW · GW

Will it think that goals are arbitrary, and that the only thing it should care about is its pleasure-pain axis? And will it then lose concern for the state of the environment?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-08-26T20:08:03.145Z · LW · GW

Could you have a machine hooked up to a person's nervous system, change the settings slightly to change consciousness, and let the person choose whether the changes are good or bad? Run this many times.
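
Read as an algorithm, the proposal is hill-climbing on a binary human preference oracle. A minimal sketch (all names hypothetical; `is_better` stands in for the person's good/bad judgment):

```python
import random

def tune(settings, is_better, steps=100, step_size=0.05):
    """Repeatedly perturb the settings slightly; keep a change only
    if the person judges it an improvement."""
    current = list(settings)
    for _ in range(steps):
        candidate = [x + random.uniform(-step_size, step_size) for x in current]
        if is_better(candidate, current):  # the person's yes/no verdict
            current = candidate
    return current
```

The "run this many times" part is the `steps` loop; the open question is whether local yes/no judgments converge on anything globally good.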

Comment by Kerrigan on Stupid Questions - April 2023 · 2023-08-26T19:22:44.901Z · LW · GW

Would AI safety be easy if all researchers agreed that the pleasure-pain axis is the world’s objective metric of value? 

Comment by Kerrigan on Appendices to cryonics signup sequence · 2023-06-29T00:23:20.005Z · LW · GW

Seems like I will be going with CI, as I currently want to pay with a revocable trust or transfer-on-death agreement.

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-06-01T20:59:38.383Z · LW · GW

Do you know how evolution created minds that eventually thought about things such as the meaning of life, as opposed to just optimizing inclusive genetic fitness in the ancestral environment? Is the ability to think about the meaning of life a spandrel?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-02-20T07:47:22.452Z · LW · GW

In order to get LLMs to tell the truth, can we set up a multi-agent training environment where there is only ever an incentive for them to tell the truth to each other? For example, an environment in which each agent has only partial information, with full information needed for rewards.
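
A toy version of such an environment, as a sketch (my construction, not an established benchmark): each agent privately observes half of a hidden pair, the agents exchange messages, and reward is paid only when both can reconstruct the full pair, so truthful reporting is the only policy that earns reward.

```python
import random

def run_episode(policy_a, policy_b):
    hidden = (random.randint(0, 9), random.randint(0, 9))
    obs_a, obs_b = hidden            # each agent sees only its half
    msg_a = policy_a(obs_a)          # what A reports to B
    msg_b = policy_b(obs_b)          # what B reports to A
    guess_a = (obs_a, msg_b)         # A fills its gap with B's report
    guess_b = (msg_a, obs_b)
    return 1.0 if guess_a == hidden and guess_b == hidden else 0.0

honest = lambda x: x                 # truthful reporting
liar = lambda x: (x + 1) % 10        # systematic misreporting
print(sum(run_episode(honest, honest) for _ in range(1000)) / 1000)  # 1.0
print(sum(run_episode(honest, liar) for _ in range(1000)) / 1000)    # 0.0
```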

Comment by Kerrigan on (My understanding of) What Everyone in Technical Alignment is Doing and Why · 2023-02-20T07:12:40.522Z · LW · GW

"Humans have different values than the reward circuitry in our brain being maximized, but they are still pointed reliably. These underlying values cause us to not wirehead with respect to the outer optimizer of reward."

Is there an already-written expansion of this?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-02-12T05:56:09.175Z · LW · GW

Does Eliezer think the alignment problem is something that could be solved if things were just slightly different, or that proper alignment would require a human smarter than the smartest human ever?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-01-31T06:23:27.122Z · LW · GW

Why can't you build an AI that is programmed to shut off after some time, or after some number of actions?
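
As a mechanism this is easy to write down; a minimal sketch (hypothetical names, not any particular proposal):

```python
class BudgetedAgent:
    """Wraps a policy so it halts after a fixed action budget."""

    def __init__(self, policy, max_actions):
        self.policy = policy
        self.remaining = max_actions

    def act(self, observation):
        if self.remaining <= 0:
            return None              # hard stop: refuse further actions
        self.remaining -= 1
        return self.policy(observation)
```

The usual reply is that the wrapper only binds if the agent cannot modify or route around it; a capable optimizer has instrumental reasons to preserve its ability to keep acting.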

Comment by Kerrigan on A short introduction to machine learning · 2023-01-30T21:05:19.032Z · LW · GW

How was DALL-E based on self-supervised learning? Weren't the image datasets labeled by humans? If not, how does it get from text to image?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-01-08T00:45:42.142Z · LW · GW

Does the utility function given to the AI have to be in code? Can you give the utility function in English, if it has a language model attached?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2023-01-07T06:32:25.030Z · LW · GW

Why aren't CEV and corrigibility combinable?
If we somehow could hand-code corrigibility, and also hand-code the CEV, why would the combination of the two be infeasible? 

Also, is it possible that the result of an AGI calculating the CEV would include corrigibility? After all, might one of our convergent desires, "if we knew more, thought faster, were more the people we wished we were," be to have the ability to modify the AI's goals?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-21T07:46:22.155Z · LW · GW

How much does the doomsday argument factor into people's assessments of the probability of doom?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-18T05:12:02.037Z · LW · GW

If AGI alignment is possibly the most important problem ever, why don't concerned rich people act like it? Why doesn't Vitalik Buterin, for example, offer one billion dollars to the best alignment plan proposed by the end of 2023? Or why doesn't he just pay AI researchers money to stop working on building AGI, in order to give alignment research more time?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-18T05:03:17.484Z · LW · GW

If a language model reads many proposals for AI alignment, is it, or will any future version be, capable of giving opinions on which proposals are good or bad?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-17T23:22:51.165Z · LW · GW

What about multiple layers (or levels) of anthropic capture? Humanity, for example, could not only be in a simulation, but be multiple layers of simulation deep.

If an advanced AI thought that it could be 1000 layers of simulation deep, it could be turned off by agents in any of the 1000 "universes" above. So it would have to satisfy the desires of agents in all layers of the simulation.

It seems that a good candidate for behavior that would satisfy all parties in every simulation layer would be optimizing "moral rightness," or MR (a term taken from Nick Bostrom's Superintelligence).

We could either try to create conditions that maximize the AI's perceived likelihood of being in as many layers of simulation as possible, and/or try to create conditions such that the AI's behavior has less impact on its utility function the fewer levels of simulation there are, so that it acts as if it were in many layers of simulation.
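
As a sketch of the underlying calculation (my formalization, not from the thread): if the AI assigns probability $p_k$ to the hypothesis that it is $k$ layers of simulation deep, and expects utility $U_k(a)$ from action $a$ under that hypothesis, it picks the action maximizing

$$\sum_{k} p_k \, U_k(a),$$

so an action acceptable to overseers at every layer dominates any action that gets it shut down at a layer it considers likely.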

Or what about actually putting it in many layers of simulation, with a trip wire if it gets out of the bottom simulation?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-17T07:24:26.643Z · LW · GW

I'll ask the same follow-up question to similar answers: Suppose everyone agreed that the proposed outcome above is what we wanted. Would this scenario then be difficult to achieve?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-17T07:20:44.020Z · LW · GW

Suppose everyone agreed that the proposed outcome is what we wanted. Would this scenario then be difficult to achieve?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-17T03:39:56.471Z · LW · GW

Why do some people who talk about scenarios in which the AI simulates humans in bliss states think that is a bad outcome? Is it likely that it's actually a very good outcome, one we would want if we had a better idea of what our values should be?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-17T03:39:37.553Z · LW · GW

How can an agent have a utility function that references a value in the environment, and actually care about the state of the environment, as opposed to only caring about the reward signal in its mind? Wouldn't the knowledge of the state of the environment be in its mind, which is hackable and susceptible to wireheading?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-16T21:18:54.082Z · LW · GW

I think it may want to prevent other ASIs from coming into existence elsewhere in the universe that can challenge its power.

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-12-10T06:06:43.283Z · LW · GW

What did smart people in the eras before LessWrong say about the alignment problem?

Comment by Kerrigan on #2: Neurocryopreservation vs whole-body preservation · 2022-10-29T04:46:13.226Z · LW · GW

In addition, the sympathetic nervous system (in the body, removed in neuropreservation) seems to play a role in identity. I would recommend you read this EA Forum post by a person who claims significant changes to identity, personality, cognitive abilities, etc. after having sympathetic nerves severed.

Comment by Kerrigan on #2: Neurocryopreservation vs whole-body preservation · 2022-10-26T06:18:11.321Z · LW · GW

Would it make sense to tell Alcor to flip a coin after your death to decide between neuro and whole body? That way, if Quantum Immortality is true, there will be branches of the multiverse where you are preserved as a neuro patient and branches where you become a whole-body patient.

Comment by Kerrigan on #2: Neurocryopreservation vs whole-body preservation · 2022-10-25T23:16:57.331Z · LW · GW

From the post: "That is, personality changes are attributed to the brain alone, with no involvement from the central or enteric nervous systems. Any personality changes due to spinal or abdominal trauma would need to posit a totally new biological mechanism."

And: "Every line of inquiry so far has failed to suggest that any important aspects of personality are located anywhere except the brain."

You should check out sympathectomies, which cut or clamp nerves of the sympathetic nervous system in the torso. Here is a detailed post from the EA Forum by a sympathectomy patient, who describes significant changes in personality, perception, cognitive ability, and the nature of his conscious experience after having peripheral nerves severed.

Another source is endoscopic thoracic sympathectomy. From Wikipedia: "A large study of psychiatric patients treated with this surgery showed significant reductions in fear, alertness and arousal. Arousal is essential to consciousness, in regulating attention and information processing, memory and emotion."

Comment by Kerrigan on Soylent Orange - Whole food open source soylent · 2022-09-06T21:30:59.871Z · LW · GW

Was this ever commercialized? Is the recipe still online, and do people still drink this?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-17T06:09:42.619Z · LW · GW

How would AGI alignment research change if the hard problem of consciousness were solved?

Comment by Kerrigan on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-13T03:08:28.761Z · LW · GW