Posts

How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? 2020-09-10T00:40:36.781Z
Anirandis's Shortform 2020-08-29T20:23:45.522Z
‘Maximum’ level of suffering? 2020-06-20T14:05:14.423Z
Likelihood of hyperexistential catastrophe from a bug? 2020-06-18T16:23:41.608Z

Comments

Comment by Anirandis on Cosmopolitan values don't come free · 2023-12-02T19:18:25.831Z · LW · GW

I don't think misaligned AI drives the majority of s-risk (I'm not even sure that s-risk is higher conditioned on misaligned AI), so I'm not convinced that it's a super relevant communication consideration here. 

I'm curious what does drive it, in that case, and what proportion affects humans (whether currently-existing people or future minds)? Things like spite threat commitments from a misaligned AI warring with humanity seem like a substantial source of s-risk to me.

Comment by Anirandis on [deleted post] 2023-09-27T22:56:54.723Z

I find this worrying. If social dynamics have introduced such a substantial freak-out-ness about these kinds of issues, it's hard to evaluate their true probability. If s-risks are indeed likely then I, as a potential victim of horrific suffering worse than any human has ever experienced, would want to be able to reasonably evaluate their probability.

Comment by Anirandis on The Waluigi Effect (mega-post) · 2023-08-26T16:03:33.488Z · LW · GW

Moreover, this Semiotic–Simulation Theory has increased my credence in the absurd science-fiction tropes that the AI Alignment community has tended to reject, and thereby increased my credence in s-risks.

The potential consequences of this are harrowing - it feels strange that this isn't being taken more seriously if there's a conceivable path to s-risk here. Is there a reason the alignment community seems almost indifferent?

Comment by Anirandis on My views on “doom” · 2023-08-17T13:12:43.076Z · LW · GW

What does the distribution of these non-death dystopias look like? There’s an enormous difference between 1984 and maximally efficient torture; for example, do you have a rough guess of what the probability distribution looks like if you condition on an irreversibly messed up but non-death future?

Comment by Anirandis on AI: Practical Advice for the Worried · 2023-04-01T18:33:08.168Z · LW · GW

I'm a little confused by the agreement votes with this comment - it seems to me that the consensus around here is that s-risks in which currently-existing humans suffer maximally are very unlikely to occur. This seems an important practical question; could the people who agreement-upvoted elaborate on why they find this kind of thing plausible?

 

The examples discussed in e.g. the Kaj Sotala interview linked further down the chain tend to concern things like "suffering subroutines".

Comment by Anirandis on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-03-18T22:10:17.355Z · LW · GW

I have a disturbing feeling that arguing to a future AI that it should "preserve humanity for Pascal's-mugging-type reasons" trades off X-risk for S-risk. I'm not sure that any of these aforementioned cases encourage the AI to maintain lives worth living.

 

Because you're imagining AGI keeping us in a box? Or because you think this post increases P(humans are deliberately tortured | AGI) by a substantial amount?

Comment by Anirandis on There's probably a tradeoff between AI capability and safety, and we should act like it · 2022-06-09T00:21:11.638Z · LW · GW

Related: alignment tax

Comment by Anirandis on Meta wants to use AI to write Wikipedia articles; I am Nervous™ · 2022-03-30T21:04:29.517Z · LW · GW

Presumably it'd take less manpower to review each article that the AI's written (i.e. read the citations & make sure the article accurately describes the subjects) than it would to write articles from scratch. I'd guess this is the case even if the claims seem plausible & fact-checking requires a somewhat detailed read-through of the sources.

Comment by Anirandis on Anirandis's Shortform · 2022-03-27T00:57:04.611Z · LW · GW

Cheers for the reply! :)

 

integrate these ideas into your mind and it's complaining loudly that you're going to fast (although it doesn't say it quite that way, I think this is a useful framing). Stepping away, focusing on other things for a while, and slowly coming back to the ideas is probably the best way to be able to engage with them in a psychologically healthy way that doesn't overwhelm you

I do try! When thinking about this stuff starts to overwhelm me I can try to put it all on ice, though usually some booze is required to be able to do that, TBH.

Comment by Anirandis on Late 2021 MIRI Conversations: AMA / Discussion · 2022-03-25T18:28:59.299Z · LW · GW

But of course it's also plausible that destructive conflict between aggressive civilizations leads to horrifying outcomes for us

 

Also, wouldn't you expect s-risks from this to be very unlikely by virtue of (1) civilizations like this being very unlikely to have substantial measure over the universe's resources, (2) transparency making bargaining far easier, and (3) few technologically advanced civilizations caring about human suffering in particular, as opposed to e.g. an adversary running emulations of their own species?

Comment by Anirandis on Anirandis's Shortform · 2022-03-20T00:03:34.711Z · LW · GW

Since it's my shortform, I'd quite like to just vent about some stuff.

 

I'm still pretty scared about a transhumanist future going quite wrong. It simply seems to me that there's quite the disjunction of paths to "s-risk" scenarios: generally speaking, any future agent that wants to cause disvalue to us - or an empathetic agent - would bring about an outcome that's Pretty Bad by my lights. Like, it *really* doesn't seem impossible that some AI decides to pre-commit to doing Bad if we don't co-operate with it; or our AI ends up in some horrifying conflict-type scenario, which could lead to Bad outcomes as hinted at here; etc. etc.

 

Naturally, this kind of outcome is going to be salient because it's scary - but even then, I struggle to believe that I'm more than moderately biased. The distribution of possibilities seems somewhat trimodal: either we maintain control and create a net-positive world (hopefully we'd be able to deal with the issue of people abusing uploads of each other); we all turn to dust; or something grim happens. And the fact that some very credible people (within this community at least) also conclude that this kind of thing has reasonable probability further makes me conclude that I just need to somehow deal with these scenarios being plausible, rather than trying to convince myself that they're unlikely. But I remain deeply uncomfortable trying to do that.

 

Some commentators who seem to consider such scenarios plausible, such as Paul Christiano, also subscribe to the naive view regarding energy-efficiency arguments over pleasure and suffering: that the worst possible suffering is likely no worse than the greatest possible pleasure is good, and that this may also be the case for humans. Even if this is true - and I'm sceptical - I still feel that I'm too risk-averse: in that world I wouldn't accept a 90% chance of eternal bliss with a 10% chance of eternal suffering. I don't think I hold suffering-focused views; I think there's a level of happiness that can "outweigh" even extreme suffering. But when you translate it to probabilities, I become deeply uncomfortable with even a 0.01% chance of bad stuff happening to me, particularly when the only way to avoid this gamble is to permanently stop existing. Perhaps something an OOM or two lower and I'd be more comfortable.


I'm not immediately suicidal, to be clear. I wouldn't classify myself as 'at-risk'. But I nonetheless find it incredibly hard to find solace. There's a part of me that hopes things get nuclear, just so that a worse outcome is averted. I find it incredibly hard to care about other aspects of my life; I'm totally apathetic. I started to improve and got mid-way through the first year of my computer science degree, but I'm starting to feel like it's gotten worse. I'd quite like to finish my degree and actually meaningfully contribute to the EA movement, but I don't know if I can at this stage. I'm guessing it's a result of me becoming more pessimistic about the worst outcomes resulting in my personal torture, since that's the only real change that's occurred recently. Even before I became more pessimistic I still thought about these outcomes constantly, so I don't think it's just a case of me thinking about them more.

 

I take sertraline but it's beyond useless. Alcohol helps, so at least there's that. I've tried quitting thinking about this kind of thing - I've spent weeks trying to shut down any instance where I thought about it. I failed.

 

I don't want to hear any over-optimistic perspectives on these issues. I'd greatly appreciate any genuine, sincerely held opinions on them (good or bad), or advice on dealing with the anxiety. But I don't necessarily need or expect a reply; I just wanted to get this out there. Even if nobody reads it. Also, thanks a fuckton to everyone who was willing to speak to me privately about this stuff.

 

Sorry if this type of post isn't allowed here; I just wanted to articulate some stuff for my own sake somewhere where I'm not going to be branded a lunatic. Hopefully LW/singularitarian views are wrong, but some of these scenarios aren't hugely dependent on an imminent & immediate singularity. I'm glad I've written all of this down. I'm probably going to down a bottle or two of rum and try to forget about it all now.

Comment by Anirandis on February 2022 Open Thread · 2022-02-19T02:44:40.078Z · LW · GW

Thanks for the response; I'm still somewhat confused though. The question was to do with the theoretical best/worst things possible, so I'm not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here. 

 

Specifically I'm confused about:

Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased

I'm not really sure what's meant by "the reality" here, nor what's meant by biased. Is the assertion that humans' intuitive preferences are driven by the range of possible things that could happen in the ancestral environment & that this isn't likely to match the maximum possible pleasure vs. suffering ratio in the future? If so, how does this lead one to end up concluding it's worse (rather than better)? I'm not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.

Comment by Anirandis on February 2022 Open Thread · 2022-02-12T23:07:59.168Z · LW · GW

I'm not sure if this is the right place to ask this, but does anyone know what point Paul's trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)

Suppose you have a P probability of the best thing you can do and a one-minus P probably the worst thing you can do, what does P have to be so it’s the difference between that and the barren universe. I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things and then put some probability or some credence on views where that number is a quadrillion times larger or something in which case it’s definitely going to dominate. A quadrillion is probably too big a number, but very big numbers. Numbers easily large enough to swamp the actual probabilities involved

[ . . . ]

I think that those arguments are a little bit complicated, how do you get at these? I think to clarify the basic position, the reason that you end up concluding it’s worse is just like conceal your intuition about how bad the worst thing that can happen to a person is vs the best thing or damn, the worst thing seems pretty bad and then the like first-pass responses, sort of have this debunking understanding, or we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.

If you look at what happens over evolutionary history. What is the range of things that can happen to an organism and how should an organism be trading off like best possible versus worst possible outcomes. Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased versus to what extent is this then fundamentally reflected in our preferences about good and bad things. I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation.

It seems like an important topic but I'm a bit confused by what he's saying here. Is the perspective he's discussing (and puts non-negligible probability on) one that states that the worst possible suffering is a bajillion times worse than the best possible pleasure, and wouldn't that suggest every human's life is net-negative (even if your credence on this being the case is ~.1%)? Or is this just discussing the energy-efficiency of 'hedonium' and 'dolorium', in which case it's of solely altruistic concern & can be dealt with by strictly limiting compute?

 

Also, I'm not really sure if this set of views is more "a broken bone/waterboarding is a million times as morally pressing as making a happy person", or along the more empirical lines of "most suffering (e.g. waterboarding) is extremely light, humans can experience far far far far far^99 times worse; and pleasure doesn't scale to the same degree." Even a tiny chance of the second one being true is awful to contemplate. 

Comment by Anirandis on · 2022-02-06T14:53:44.243Z · LW · GW

we ask the AGI to "make us happy", and it puts everyone paralyzed in hospital beds on dopamine drips. It's not hard to think that after a couple hours of a good high, this would actually be a hellish existence, since human happiness is way more complex than the amount of dopamine in one's brain (but of course, Genie in the Lamp, Mida's Touch, etc)

This sounds much better than extinction to me! Values might be complex, yeah, but if the AI is actually programmed to maximise human happiness then I expect the high wouldn't wear off. Being turned into a wirehead arguably kills you, but it's a much better experience than death for the wirehead!

(I've actually read in a popular lesswrong post about s-risks Paul clearly saying that the risk of s-risk was 1/100th of the risk of x-risk (which makes for even less than 1/100th overall). Isn't that extremely naive, considering the whole Genie in the Lamp paradigm? How can we be so sure that the Genie will only create hell 1 time for each 100 times it creates extinction?)

I think the kind of Bostromian scenario you're imagining is a slightly different line of AI concern than the types that Paul & the soft takeoff crowd are concerned about. The whole genie in the lamp thing, to me at least, doesn't seem likely to create suffering. If this hypothetical AI values humans being alive & nothing more than that, it might separate your brain in half so that it counts as 2 humans being happy, for example. I think most scenarios where you've got a boundless optimiser superintelligence would lead to the creation of new minds that would perfectly satisfy its utility function.

Comment by Anirandis on · 2022-02-05T14:53:08.778Z · LW · GW

I'm way more scared about the electrode-produced smiley faces for eternity and the rest. That's way, way worse than dying.

FWIW, it seems kinda weird to me that such an AI would keep you alive... if you had a "smile-maximiser" AI, wouldn't it be indifferent to humans being braindead, as long as it's able to keep them smiling?

 

I'd like to have Paul Christiano's view that the "s-risk-risk" is 1/100 and that AGI is 30 years off

I think Paul's view is along the lines of "1% chance of some non-insignificant amount of suffering being intentionally created", not a 1% chance of this type of scenario.[1]

 

Could AGI arrive tomorrow in its present state?

I guess. But we'd need to come up with some AI model tomorrow, that model would have to suddenly become agentic and rapidly grow in power, and it would have to be designed with a utility function that values keeping humans alive but does not value humans flourishing... and even then, there'd likely be better ways to e.g. maximise the number of smiles in the universe, such as by using artificially created minds.

 

Eliezer has written a bit about this, but I think he considers it a mostly solved problem. 

 

What can I do as a 30 year old from Portugal with no STEM knowledge? Start learning math and work on alignment from home?

Probably get treatment for the anxiety and try to stop thinking about scenarios that are very unlikely, albeit salient in your mind. (I know, speaking from experience, that it's hard to do so!)

  1. ^

     I did, coincidentally, cold e-mail Paul a while ago to try to get his model on this type of stuff & got the following response:

    "I think these scenarios are plausible but not particularly likely. I don't think that cryonics makes a huge difference to your personal probabilities, but I could imagine it increasing them a tiny bit. If you cared about suffering-maximizing outcomes a thousand times as much as extinction, then I think it would be plausible for considerations along these lines to tip the balance against cryonics (and if you cared a million times more I would expect them to dominate). I think these risks are larger if you are less scope sensitive since the main protection is the small expected fraction of resources controlled by actors who are inclined to make such threats."

TBH it's difficult to infer a particular estimate for one's individual probability without cryonics or voluntary uploading here; it's not completely clear just how bad a scenario would have to be (for a typical biological human) in order to fall within the class of scenarios described as 'plausible but not particularly likely'.

Comment by Anirandis on Secure homes for digital people · 2021-12-31T22:42:17.274Z · LW · GW
  • I think the problem is very likely to be resolved by different mechanisms based on trust and physical control rather than cryptography.

Do you expect these mechanisms to also resolve the case where a biological human is forcibly uploaded in horrible conditions?

Comment by Anirandis on Anirandis's Shortform · 2021-08-08T03:41:01.084Z · LW · GW

Lurker here; I'm still very distressed after thinking about some futurism/AI stuff & worrying about possibilities of being tortured. If anyone's willing to have a discussion on this stuff, please PM!

Comment by Anirandis on Open & Welcome Thread – November 2020 · 2020-11-17T11:40:13.512Z · LW · GW

I know I've posted similar stuff here before, but I could still do with some people to discuss infohazardous s-risk related stuff that I have anxieties with. PM me.

Comment by Anirandis on [deleted post] 2020-11-16T15:45:13.548Z

a

Comment by Anirandis on Hedonic asymmetries · 2020-11-15T07:31:39.962Z · LW · GW

Evolution "wants" pain to be a robust feedback/control mechanism that reliably causes the desired amount of avoidance - in this case, the greatest possible amount.

I feel that there's going to be a level of pain that a mind of nearly any level of pain tolerance would exert 100% of its energy to avoid. I don't think I know enough to comment on how much further than this level the brain can go, but it's unclear why the brain would develop the capacity to process pain drastically more intense than this; pain is just a tool for avoiding certain things, and it ceases to be useful past a certain point.

There are no cheap solutions that would have an upper cut-off to pain stimuli (below the point of causing unresponsiveness) without degrading the avoidance response to lower levels of pain.

I'm imagining a level of pain above that which causes unresponsiveness, I think. Perhaps I'm imagining something more extreme than your "extreme"?

It is to be expected that humans who are actively trying to cause pain (or to imagine how to do so) will succeed in causing amounts of pain beyond most anything found in nature.

Yeah, agreed.

Comment by Anirandis on Hedonic asymmetries · 2020-11-13T22:44:27.131Z · LW · GW

I'm unsure that "extreme" would necessarily get a more robust response, considering that there comes a point where the pain becomes disabling.

It seems as though there might be some sort of biological "limit" insofar as there are limited peripheral nerves, the grey matter can only process so much information, etc., and there'd be a point where the brain is 100% focused on avoiding the pain (meaning there'd be no evolutionary advantage to having the capacity to process additional pain). I'm not really sure where this limit would be, though. And I don't really know any biology so I'm plausibly completely wrong.

Comment by Anirandis on A full explanation to Newcomb's paradox. · 2020-10-12T17:29:28.543Z · LW · GW

I think the idea is that the 4th scenario is the case, and you can’t discern whether you’re the real you or the simulated version, as the simulation is (near-) perfect. In that scenario, you should act in the same way that you’d want the simulated version to. Either (1) you’re a simulation and the real you just won $1,000,000; or (2) you’re the real you and the simulated version of you thought the same way that you did and one-boxed (meaning that you get $1,000,000 if you one-box.)
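To put rough numbers on it (a toy sketch, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one):

```python
# Toy payoff calculation under the assumption that the predictor's simulation
# of you follows whatever policy you follow, so the opaque box is filled iff
# that simulation one-boxes.
def payoff(policy: str) -> int:
    opaque = 1_000_000 if policy == "one-box" else 0
    transparent = 1_000
    return opaque if policy == "one-box" else opaque + transparent

print(payoff("one-box"))  # 1000000
print(payoff("two-box"))  # 1000
```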

Comment by Anirandis on How much to worry about the US election unrest? · 2020-10-12T17:22:28.784Z · LW · GW

If Trump loses the election, he's not the president anymore and the federal bureaucracy and military will stop listening to him.

He’d still be president until Biden’s inauguration though. I think most of the concern is that there’d be ~3 months of a president Trump with nothing to lose.

Comment by Anirandis on Open & Welcome Thread - September 2020 · 2020-09-20T00:36:00.414Z · LW · GW

If anyone happens to be willing to privately discuss some potentially infohazardous stuff that's been on my mind (and not in a good way) involving acausal trade, I'd appreciate it - PM me. It'd be nice if I can figure out whether I'm going batshit.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-11T17:10:07.281Z · LW · GW
it's much harder to know if you've got it pointed in the right direction or not

Perhaps, but the type of thing I'm describing in the post is more about preventing worse-than-death outcomes even if the sign is flipped (by designing a reward function/model in such a way that it's not going to torture everyone if that's the case.)

This seems easier than recognising whether the sign is flipped or just designing a system that can't experience these sign-flip type errors; I'm just unsure whether this is something that we have robust solutions for. If it turns out that someone's figured out a reliable solution to this problem, then the only real concern is whether the AI's developers would bother to implement it. I'd much rather risk the system going wrong and paperclipping than going wrong and turning "I have no mouth, and I must scream" into a reality.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-11T16:02:00.532Z · LW · GW

My anxieties over this stuff tend not to be so bad late at night, TBH.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-11T03:47:22.459Z · LW · GW

Seems a little bit beyond me at 4:45am - I'll probably take a look tomorrow when I'm less sleep-deprived (although I still can't guarantee I'll be able to make it through then; there's quite a bit of technical language in there that makes my head spin.) Are you able to provide a brief tl;dr, and have you thought much about "sign flip in reward function" or "direction of updates to reward model flipped"-type errors specifically? It seems like these particularly nasty bugs could plausibly be mitigated more easily than avoiding false positives (as you defined them in the arXiv paper's abstract) in general.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-10T19:24:52.352Z · LW · GW

Would you not agree that (assuming there's an easy way of doing it) separating the system from hyperexistential risk is a good thing for psychological reasons? Even if you think it's extremely unlikely, I'm not at all comfortable with the thought that our seed AI could screw up & design a successor that implements the opposite of our values; and I suspect there are at least some others who share that anxiety.

For the record, I think that this is also a risk worth worrying about for non-psychological reasons.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-10T15:01:50.541Z · LW · GW
You seem to have a somewhat general argument against any solution that involves adding onto the utility function in "What if that added solution was bugged instead?".

I might've failed to make my argument clear: if we designed the utility function as U = V + W (where W is the thing being added on and V refers to human values), this would only stop the sign-flipping error if it were U that got flipped. If it were instead V that got flipped (so the AI optimises for U = -V + W), that'd be problematic.
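To spell out what I mean with a toy example (made-up numbers, obviously not a real design):

```python
# Toy sketch: U = V + W, where V scores human values and W is a
# heavily-disvalued "surrogate" term (e.g. paperclips).
V = {"flourishing": 10, "paperclips": 0, "torture": -100}
W = {"flourishing": 0, "paperclips": -1000, "torture": 0}
outcomes = list(V)

U         = lambda o:   V[o] + W[o]   # intended utility
U_flipped = lambda o: -(V[o] + W[o])  # the whole of U negated
V_flipped = lambda o:  -V[o] + W[o]   # only V negated

print(max(outcomes, key=U))          # flourishing
print(max(outcomes, key=U_flipped))  # paperclips - the surrogate "catches" this flip
print(max(outcomes, key=V_flipped))  # torture - the surrogate doesn't help here
```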


I think it's better to move on from trying to directly target the sign-flip problem and instead deal with bugs/accidents in general.

I disagree here. Obviously we'd want to mitigate both, but a robust way of preventing sign-flipping type errors specifically is absolutely crucial (if anything, so people stop worrying about it.) It's much easier to prevent one specific bug from having an effect than trying to deal with all bugs in general.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-10T13:36:01.952Z · LW · GW

I see. I'm somewhat unsure how likely AGI is to be built with a neuromorphic architecture though.


I don't think that's an example of (3), more like (1) or (2), or actually "none of the above because GPT-2 doesn't have this kind of architecture".

I just raised GPT-2 to indicate that suddenly flipping the goal's sign can lead to optimising for bad behaviour without the AI neglecting to consider new strategies. Presumably that'd suggest it's also a possibility with cosmic-ray/other errors.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-10T02:11:20.068Z · LW · GW

I hadn't really considered the possibility of a brain-inspired/neuromorphic AI, thanks for the points.

(2) seems interesting; as I understand it, you're basically suggesting that the error would occur gradually & the system would work to prevent it. Although maybe the AI realises it's getting positive feedback for bad things and keeps doing them, or something (I don't really know, I'm also a little sleep deprived and things like this tend to do my head in.) Like, if I hated beer then suddenly started liking it, I'd probably continue drinking it. Maybe the reward signals are simply so strong that the AI can't resist turning into a "monster", or whatever. Perhaps the system would implement checksums of some sort to do this automatically?

A similar point to (3) was raised by Dach in another thread, although I'm uncertain about this since GPT-2 was willing to explore new strategies when it got hit by a sign-flipping bug. I don't doubt that it would be different with a neuromorphic system, though.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-10T01:59:36.333Z · LW · GW

Mainly for brevity, but also because it seems to involve quite a drastic change in how the reward function/model as a whole functions. So it doesn't seem particularly likely that it'll be implemented.

Comment by Anirandis on How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe? · 2020-09-10T00:59:26.379Z · LW · GW

True, but note that he elaborates and comes up with a patch to the patch (namely, having W refer to a class of events that would be expected to happen within the Universe's lifespan rather than one that won't.) So he still seems to support the basic idea, although he probably intended just to get the ball rolling with the concept rather than to conclusively solve the problem.

Comment by Anirandis on Anirandis's Shortform · 2020-09-09T02:53:34.620Z · LW · GW

Perhaps malware could be another risk factor in the type of bug I described here? Not sure.

I'm still a little dubious of Eliezer's solution to the problem of separation from hyperexistential risk; if we had U = V + W where V is a reward function & W is some arbitrary thing it wants to minimise (e.g. paperclips), a sign flip in V (due to any of a broad disjunction of causes) would still cause hyperexistential catastrophe.

Or what about the case where, instead of the AI maximising -U, the values that the reward function/model gives for each "thing" are multiplied by -1? E.g. the AI system gets 1 point for wireheading and -1 for torture, and some weird malware/human screw-up (in the reward model or some relevant database) flips the signs for each individual action. The AI now maximises U = W - V.

This seems a lot more nuanced than *just* avoiding cosmic rays; and the potential consequences of a hellish "I have no mouth, and I must scream"-type outcome are far worse than human extinction. I'm not happy with *any* non-negligible probability of this happening.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-09-04T14:24:32.429Z · LW · GW

I see what you're saying here, but the GPT-2 incident seems to undercut it somewhat IMO. I'll wait until you're able to write down your thoughts on this at length; this is something that I'd like to see elaborated on (as well as everything else regarding hyperexistential risk.)

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-09-03T14:45:43.408Z · LW · GW
Paperclipping seems to be negative utility, not approximately 0 utility.

My thinking was that an AI system that *only* takes values between 0 and +∞ (or some arbitrary positive number) would identify that killing humans would result in 0 human value, which is its minimum utility.


I read Eliezer's idea, and that strategy seems to be... dangerous. I think that "Giving an AGI a utility function which includes features which are not really relevant to human values" is something we want to avoid unless we absolutely need to.

How come? It doesn't seem *too* hard to create an AI that only expends a small amount of its energy on preventing the garbage thing from happening.


I have much more to say on this topic and about the rest of your comment, but it's definitely too much for a comment chain. I'll make an actual post containing my thoughts sometime in the next week or two, and link it to you.

Please do! I'd love to see a longer discussion on this type of thing.


EDIT: just thought some more about this and want to clear something up:

Modern machine learning systems often require a specific incentive in order to explore new strategies and escape local maximums. We may see this behavior in future attempts at AGI, And no, it would not be flipped with the reward function/model- I'm highlighting that there is a really large variety of sign flip mistakes and most of them probably result in paperclipping.

I'm a little unsure on this one after further reflection. When this happened with GPT-2, the bug managed to flip the reward & the system still pursued instrumental goals like exploring new strategies:

Bugs can optimize for bad behavior
One of our code refactors introduced a bug which flipped the sign of the reward. Flipping the reward would usually produce incoherent text, but the same bug also flipped the sign of the KL penalty. The result was a model which optimized for negative sentiment while preserving natural language. Since our instructions told humans to give very low ratings to continuations with sexually explicit text, the model quickly learned to output only content of this form. This bug was remarkable since the result was not gibberish but maximally bad output. The authors were asleep during the training process, so the problem was noticed only once training had finished. A mechanism such as Toyota’s Andon cord could have prevented this, by allowing any labeler to stop a problematic training process.

So it definitely seems *plausible* for a reward to be flipped without resulting in the system failing/neglecting to adopt new strategies/doing something weird, etc.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-09-03T00:01:13.808Z · LW · GW
As an almost entirely inapplicable analogy . . . it's just doing something weird.
If we inverted the utility function . . . tiling the universe with smiley faces, i.e. paperclipping.

Interesting analogy. I can see what you're saying, and I guess it depends on what specifically gets flipped. I'm unsure about the second example; something like exploring new strategies doesn't seem like something an AGI would terminally value. It's instrumental to optimising the reward function/model, but I can't see it getting flipped *with* the reward function/model.

Can you clarify what you mean by this? Also, I get what you're going for, but paperclips is still extremely negative utility because it involves the destruction of humanity and the reconfiguration of the universe into garbage.

My thinking was that a signflipped AGI designed as a positive utilitarian (i.e. with a minimum at 0 human utility) would prefer paperclipping to torture because the former provides 0 human utility (as there aren't any humans), whereas the latter may produce a negligible amount. I'm not really sure if it makes sense tbh.

The reward modelling system would need to be very carefully engineered, definitely.

Even if we engineered it carefully, that doesn't rule out screw-ups. We need robust failsafe measures *just in case*, imo.

I thought of this as well when I read the post. I'm sure there's something clever you can do to avoid this but we also need to make sure that these sorts of critical components are not vulnerable to memory corruption. I may try to find a better strategy for this later, but for now I need to go do other things.

I wonder if you could feasibly make it a part of the reward model. Perhaps you could train the reward model itself to disvalue something arbitrary (like paperclips) even more than torture, which would hopefully mitigate it. You'd still need to balance it in such a way that the system won't spend all of its resources preventing this thing from happening to the neglect of actual human values, but that doesn't seem too difficult. Although, once again, we can't really have high confidence (>90%) that the AGI developers are going to think to implement something like this.

There was also an interesting idea I found in a Facebook post about this type of thing that got linked somewhere (can't remember where). Stuart Armstrong suggested that a utility function could be designed as follows:

Let B1 and B2 be excellent, bestest outcomes. Define U(B1)=1, U(B2)=-1, and U=0 otherwise. Then, under certain assumptions about what probabilistic combinations of worlds it is possible to create, maximising or minimising U leads to good outcomes. Or, more usefully, let X be some trivial feature that the agent can easily set to -1 or 1, and let U be a utility function with values in [0,1]. Have the AI maximisise or minimise XU. Then the AI will always aim for the same best world, just with a different X value.
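To illustrate the second construction there with a toy example (the world scores and the X "switch" below are made-up stand-ins, not anything from Armstrong's post):

```python
# Toy model of the X*U trick: U takes values in [0, 1] over worlds, and X is a
# trivial switch the agent can set to +1 or -1. Whether the agent ends up
# maximising or minimising X*U (e.g. after a sign flip), the best it can do is
# steer towards the same highest-U world and just pick X's sign accordingly.
worlds = {"flourishing": 1.0, "paperclips": 0.3, "torture": 0.0}  # U in [0, 1]
plans = [(w, x) for w in worlds for x in (+1, -1)]

print(max(plans, key=lambda p: worlds[p[0]] * p[1]))  # ('flourishing', 1)
print(min(plans, key=lambda p: worlds[p[0]] * p[1]))  # ('flourishing', -1)
```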

Even if we solve any issues with these (and actually bother to implement them), there's still the risk of an error like this happening in a localised part of the reward function such that *only* the part specifying something bad gets flipped, although I'm a little confused about this one. It could very well be the case that the system's complex enough that there isn't just one bit indicating whether "pain" or "suffering" is good or bad. And we'd presumably (hopefully) have checksums and whatever else thrown in. Maybe this could be mitigated by assigning more positive utility to good outcomes than negative utility to bad outcomes? (I'm probably speaking out of my rear end on this one.)


Memory corruption seems to be another issue. Perhaps if we had more than one measure we'd be less vulnerable to memory corruption. Like, if we designed an AGI with a reward model that disvalues two arbitrary things rather than just one, and memory corruption screwed with *both* measures, then something has probably gone *very* wrong in the AGI and it probably won't be able to optimise for suffering anyway.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-09-02T15:53:13.140Z · LW · GW

Thanks for the detailed response. A bit of nitpicking (from someone who doesn't really know what they're talking about):

However, the vast majority of these mistakes would probably buff out or result in paper-clipping.

I'm slightly confused by this one. If we were to design the AI as a strict positive utilitarian (or something similar), I could see how the worst possible outcome by its utility function would be *no* human utility (i.e. paperclips). But most attempts at an aligned AI would have a minimum at "I have no mouth, and I must scream". So any sign-flipping error would be expected to land there.

If humans are making changes to the critical software/hardware of an AGI (And we'll assume you figured out how to let the AGI allow you to do this in a way that has no negative side effects), *while that AGI is already running*, something bizarre and beyond my abilities of prediction is already happening.

In the example, the AGI was using online machine learning, which, as I understand it, would probably require the system to be hooked up to a database that humans have access to in order for it to learn properly. And I'm unsure as to how easy it'd be for things like checksums to pick up an issue like this (a boolean flag getting flipped) in a database.
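To gesture at why I'm unsure (a toy sketch; the table layout and field names are made up): a checksum verifies that the stored bytes weren't corrupted, but if the wrong value gets *written* - say, a refactor inverts what a flag means upstream - the checksum is just recomputed over the wrong value and everything still validates.

```python
# Toy sketch: checksums catch storage corruption, not a semantically wrong write.
import hashlib, json

def with_checksum(row: dict) -> dict:
    payload = json.dumps(row, sort_keys=True).encode()
    return {"row": row, "checksum": hashlib.sha256(payload).hexdigest()}

def is_valid(entry: dict) -> bool:
    payload = json.dumps(entry["row"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == entry["checksum"]

# Suppose a code refactor flipped the meaning of `approved` before this write:
entry = with_checksum({"episode": 42, "approved": False, "reward": 1.0})
print(is_valid(entry))  # True - bytes are intact, so it validates, even though
                        # the flag now means the opposite of what downstream
                        # learners assume.
```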

Perhaps there'll be a reward function/model intentionally designed to disvalue some arbitrary "surrogate" thing in an attempt to separate it from hyperexistential risk. So "pessimizing the target metric" would look more like paperclipping than torture. But I'm unsure as to (1) whether the AGI's developers would actually bother to implement it, and (2) whether it'd actually work in this sort of scenario.

Also worth noting is that an AGI based on reward modelling is going to have to be linked to another neural network, which is going to have constant input from humans. If that reward model isn't designed to be separated in design space from AM, someone could screw up with the model somehow. If we were to, say, have U = V + W (where V is the reward given by the reward model and W is some arbitrary thing that the AGI disvalues, as is the case in Eliezer's Arbital post that I linked), a sign-flip-type error in V (rather than a sign flip in U) would lead to a hyperexistential catastrophe.

It will not be possible to flip the sign of the utility function or the direction of the updates to the reward model, even if several of the researchers on the project are actively trying to sabotage the effort and cause a hyperexistential disaster.

I think this is somewhat likely to be the case, but I'm not sure that I'm confident enough about it. Flipping the direction of updates to the reward model seems harder to prevent than a bit flip in a utility function, which could be prevented through error-correcting code memory (as you mentioned earlier.)
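As a rough software analogue of why those two failure modes feel different to me (purely illustrative - real ECC works on memory words in hardware, not Python objects):

```python
# Redundant storage plus majority voting masks a single corrupted copy of a
# critical constant (here, the sign applied to the reward term); a *logic*
# error that flips every update consistently is untouched by this.
from collections import Counter

class ProtectedSign:
    def __init__(self, sign: int, copies: int = 3):
        self._copies = [sign] * copies

    def read(self) -> int:
        return Counter(self._copies).most_common(1)[0][0]  # majority vote

sign = ProtectedSign(+1)
sign._copies[1] = -1      # simulate a stray bit flip in one stored copy
print(sign.read())        # 1 - the corrupted copy gets outvoted
```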


Despite my confusions, your response has definitely decreased my credence in this sort of thing happening.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-09-01T14:21:36.835Z · LW · GW

I've seen that post & discussed it on my shortform. I'm not really sure how effective something like Eliezer's idea of "surrogate" goals there would actually be - sure, it'd help with some sign flip errors but it seems like it'd fail on others (e.g. if U = V + W, a sign error could occur in V instead of U, in which case that idea might not work.) I'm also unsure as to whether the probability is truly "very tiny" as Eliezer describes it. Human errors seem much more worrying than cosmic rays.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-30T16:05:45.642Z · LW · GW

I don't really know what the probability is. It seems somewhat low, but I'm not confident that it's *that* low. I wrote a shortform about it last night (tl;dr it seems like this type of error could occur in a disjunction of ways and we need a good way of separating the AI in design space.)


I think I'd stop worrying about it if I were convinced that its probability is extremely low. But I'm not yet convinced of that. Something like the example Gwern provided elsewhere in this thread seems more worrying than the more frequently discussed cosmic ray scenarios to me.

Comment by Anirandis on Anirandis's Shortform · 2020-08-29T20:23:46.126Z · LW · GW

It seems to me that ensuring we can separate an AI in design space from worse-than-death scenarios is perhaps the most crucial thing in AI alignment. I don't at all feel comfortable with AI systems that are one cosmic ray - or, perhaps more plausibly, one human screw-up (e.g. this sort of thing) - away from a fate far worse than death. Or maybe a human-level AI makes a mistake and creates a sign-flipped successor. Perhaps there's some sort of black swan possibility that nobody realises. I think that it's absolutely critical that we have a robust mechanism in place to prevent something like this from happening regardless of the cause; sure, we can sanity-check the system, but that won't help when the issue is caused after we've sanity-checked it, as is the case with cosmic rays or some human errors (like Gwern's example, which I linked). We need ways to prevent this sort of thing from happening *regardless* of the source.

Some propositions seem promising. Eliezer's suggestion of assigning a sort of "surrogate goal" that the AI hates more than torture, but not enough to devote all of its energy to attempting to prevent, is one. But this would only work when the *entire* reward is what gets flipped; with how much confidence can we rule out, say, a localised sign flip in some specific part of the AI that leads to the system terminally valuing something bad but that doesn't change anything else (so the sign on the "surrogate" goal remains negative)? Can we even be confident that the AI's development team is going to implement something like this, and that it will work as intended?

An FAI that's one software bug or screw-up in a database away from AM is a far scarier possibility than a paperclipper, IMO.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-24T00:17:50.304Z · LW · GW

Sure, but I'd expect that a system as important as this would have people monitoring it 24/7.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-22T18:25:08.378Z · LW · GW

Do you think that this specific risk could be mitigated by some variant of Eliezer’s separation from hyperexistential risk or Stuart Armstrong's idea here:

Let B1 and B2 be excellent, bestest outcomes. Define U(B1) = 1, U(B2) = -1, and U = 0 otherwise. Then, under certain assumptions about what probabilistic combinations of worlds it is possible to create, maximising or minimising U leads to good outcomes.
Or, more usefully, let X be some trivial feature that the agent can easily set to -1 or 1, and let U be a utility function with values in [0, 1]. Have the AI maximise or minimise XU. Then the AI will always aim for the same best world, just with a different X value.

Or at least prevent sign flip errors from causing something worse than paperclipping?

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-21T15:20:31.598Z · LW · GW

I asked Rohin Shah about that possibility in a question thread about a month ago. I think he's probably right that this type of thing would only plausibly make it through the training process if the system's *already* smart enough to be able to think about this type of thing. And then on top of that there are still things like sanity checks which, while unlikely to pick up numerous errors, would probably notice a sign error. See also this comment:

Furthermore, if an AGI design has an actually-serious flaw, the likeliest consequence that I expect is not catastrophe; it’s just that the system doesn’t work. Another likely consequence is that the system is misaligned, but in an obvious ways that makes it easy for developers to recognize that deployment is a very bad idea.

IMO it's incredibly important that we find a way to prevent this type of thing from occurring *after* the system has been trained, whether that be hyperexistential separation or something else. I think that a team that's safety-conscious enough to come up with a (reasonably) aligned AGI design is going to put a considerable amount of effort into fixing bugs, & one as obvious as a sign error would be unlikely to make it through. Even better, hopefully they would have come up with a utility function that can't be easily reversed by a single bit flip, or that doesn't cause outcomes worse than death when minimised. That'd (hopefully?) solve the SignFlip issue *regardless* of what causes it.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-19T02:53:17.257Z · LW · GW

I'm under the impression that an AGI would be monitored *during* training as well. So you'd effectively need the system to turn "evil" (utility function flipped) during the training process, and the system to be smart enough to conceal that the error occurred. So it'd need to happen a fair bit into the training process. I guess that's possible, but IDK how likely it'd be.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-19T02:18:07.571Z · LW · GW

Sure, but the *specific* type of error I'm imagining would surely be easier to pick up than most other errors. I have no idea what sort of sanity checking was done with GPT-2, but the fact that the developers were asleep when it trained is telling: they weren't being as careful as they could've been.

For this type of bug (a sign error in the utility function) to occur *before* the system is deployed and somehow persist, it'd have to make it past all sanity-checking tools (which I imagine would be used extensively with an AGI) *and* somehow not be noticed at all while the model trains *and* whatever else. Yes, these sorts of conjunctions occur in the real world, but the error is generally more subtle than "system does the complete opposite of what it was meant to do".

I made a question post about this specific type of bug occurring before deployment a while ago and think my views have shifted significantly; it's unlikely that a bug as obvious as one that flips the sign of the utility function won't be noticed before deployment. Now I'm more worried about something like this happening *after* the system has been deployed.

I think a more robust solution to all of these sort of errors would be something like the separation from hyperexistential risk article that I linked in my previous response. I optimistically hope that we're able to come up with a utility function that doesn't do anything worse than death when minimised, just in case.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-19T01:38:56.861Z · LW · GW

Wouldn't any configuration errors or updates be caught with sanity-checking tools though? Maybe the way I'm visualising this is just too simplistic, but any developers capable of creating an *aligned* AGI are going to be *extremely* careful not to fuck up. Sure, it's possible, but the most plausible cause of a hyperexistential catastrophe to me seems to be where a SignFlip-type error occurs once the system has been deployed.


Hopefully a system as crucially important as an AGI isn't going to have just one guy watching it who "takes a quick bathroom break". When the difference is literally Heaven and Hell (minimising human values), I'd consider only having one guy in a basement monitoring it to be gross negligence.

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-19T00:51:36.380Z · LW · GW

If we actually built an AGI that optimised to maximise a loss function, wouldn't we notice long before deploying the thing?


I'd imagine that this type of thing would be sanity-checked and tested intensively, so signflip-type errors would predominantly be scenarios where the error occurs *after* deployment, like the one Gwern mentioned ("A programmer flips the meaning of a boolean flag in a database somewhere while not updating all downstream callers, and suddenly an online learner is now actively pessimizing their target metric.")

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-16T00:30:45.598Z · LW · GW

Interesting. Terrifying, but interesting.

Forgive me for my stupidity (I'm not exactly an expert in machine learning), but it seems to me that building an AGI linked to some sort of database like that in such a fashion (that some random guy's screw-up can effectively reverse the utility function completely) is a REALLY stupid idea. Would there not be a safer way of doing things?

Comment by Anirandis on Open & Welcome Thread - August 2020 · 2020-08-15T22:52:10.208Z · LW · GW

Do you think that this type of thing could plausibly occur *after* training and deployment?