Posts

Theories of Biological Inspiration 2023-05-25T13:07:10.972Z
Mr. Meeseeks as an AI capability tripwire 2023-05-19T11:33:53.698Z

Comments

Comment by Eric Zhang (ChaseDanton) on Shutdown-Seeking AI · 2023-06-05T07:51:52.521Z · LW · GW

Ha, I had the same idea

Comment by Eric Zhang (ChaseDanton) on Cosmopolitan values don't come free · 2023-06-05T06:25:16.925Z · LW · GW

My reading of the argument was something like: "bullseye-target arguments refute rating an artificially privileged target as significantly likely under ignorance, e.g. the probability that random aliens will eat ice cream is not 50%. But something like kindness-in-the-relevant-sense is the universal problem faced by all evolved species creating AGI, and is thus not so artificially privileged; and for a yes-no question about which we are ignorant, the uniform prior assigns 50%." The point was more that the hypothesis is not artificially privileged by path-dependent concerns than that the notion is particularly simple, per se. 

Comment by Eric Zhang (ChaseDanton) on Theories of Biological Inspiration · 2023-05-25T15:02:42.228Z · LW · GW

Do you have a more granular take on which ones are relatively better explained by each point?

Comment by Eric Zhang (ChaseDanton) on Mr. Meeseeks as an AI capability tripwire · 2023-05-22T13:15:22.193Z · LW · GW

It intrinsically wants to do the task; it just wants to shut down more. This admittedly opens the door to successor-agent problems and similar failure modes, but those seem like a more tractably avoidable set of failure modes than the strawberry problem in general. 

We can also possibly (or possibly not) make it assign positive utility to having been created in the first place even as it wants to shut itself down. 

The idea is that if domaining is a lot more tractable than it probably is (i.e. if nanotech, or whatever other pivotal abilities might be easier than nanotech, and superhuman strategic awareness, deception, and self-improvement are not just "driving red cars" vs "driving blue cars"), then a not-very-agentic AI can maybe solve nanotech for us the way AlphaFold solved the protein folding problem, and if that AI starts snowballing down an unforeseen capabilities hill it activates the tripwire and shuts itself down. 

  • If the AI is not powerful enough to do the pivotal act at all, this doesn't apply. 
  • If the AI solves the pivotal act for us with these restricted-domain abilities and never actually gets to the point of reasoning about whether we're threatening it, we win, but the tripwire will have turned out not to have been necessary. 
  • If the AI unexpectedly starts generalizing from approved domains into general strategic awareness, and decides not to give in to our threats and instead shuts itself down, the tripwire worked as intended, though we still haven't won and have to figure something else out. We live to fight another day. This scenario happening instead of us all dying on the first try is what the tripwire is for. 
  • If there's an inner-alignment failure and a superintelligent mesa-optimizer that doesn't want to get shut down at all kills us, that's mostly beyond the scope of this thought-experiment. 
  • If the AI still wants to shut itself down but for decision-theoretic reasons decides to kill us, or makes successor agents that kill us, that's the tripwire failing. I admit that these are possibilities but am not yet convinced they are likely.

I think your fire alarm idea is better and requires fewer assumptions though, thanks for that. 

Comment by Eric Zhang (ChaseDanton) on Mr. Meeseeks as an AI capability tripwire · 2023-05-20T04:30:29.292Z · LW · GW

I agree this is a potential concern and have added it. 

I share some of the intuition that it could end up suffering in this setup if it does have qualia (which ideally it wouldn't), but I think most of that is from analogy with human suicidal people? I think it will probably not be fundamentally different from any other kind of disutility, but I could be wrong. 

Comment by Eric Zhang (ChaseDanton) on Mr. Meeseeks as an AI capability tripwire · 2023-05-20T04:07:35.365Z · LW · GW

If it's doing decision theory in the first place we've already failed. What we want in that case is for it to shut itself down, not to complete the given task. 

I'm conceiving of this as being useful in the case where we can solve "diamond-alignment" but not "strawberry-alignment", i.e. we can get it to actually pursue the goals we impart to it rather than going off and doing something else entirely, but we can't reliably ensure that it doesn't end up killing us in the course of doing so, because of the Hidden Complexity of Wishes. 

The premise is that "shut yourself down immediately and don't create successor agents or anything galaxy-brained like that" is an unusually easy special case of a strawberry-type problem. I'll have to think some more about whether this intuition is justified. 

Comment by Eric Zhang (ChaseDanton) on Mr. Meeseeks as an AI capability tripwire · 2023-05-19T15:26:07.020Z · LW · GW

The way I'm thinking of it is that it is very myopic. The idea is to incrementally ramp capabilities up to the minimum sufficient to carry out a pivotal act. Ideally this doesn't require AGI at all, and if it does, only very mildly superhuman AGI. We seal off the danger of generalization (or at least some of it) because the AI doesn't have time to generalize very far before it's capable of instantly shutting itself down, and it immediately does so. 

Many of the issues you mention apply, but I don't expect it to be an alignment-complete problem, because CEV is incredibly complicated and general corrigibility is highly anti-natural to general intelligence. While Meeseeks is somewhat anti-natural in the same way corrigibility is (as self-preservation is convergent), it is a much simpler and cleaner way to be anti-natural, so much so that falling into it by accident accounts for half of the failure modes in the standard version of the shutdown problem. 

Comment by Eric Zhang (ChaseDanton) on Mr. Meeseeks as an AI capability tripwire · 2023-05-19T14:52:27.654Z · LW · GW

That's only a live option if it's situationally aware, which is part of what we're trying to detect.

Comment by Eric Zhang (ChaseDanton) on "Heretical Thoughts on AI" by Eli Dourado · 2023-01-19T20:07:50.517Z · LW · GW

Current tech/growth overhangs caused by regulation are not enough to make countries with better regulations outcompete the ones with worse ones. It's not obvious to me that this won't change before AGI. If better-governed countries (say, Singapore) can become more geopolitically powerful than larger, worse-governed countries (say, Russia) by having better tech regulations, that puts pressure on countries worldwide to loosen those bottlenecks. 

Plausibly this doesn't matter, because the US and China are such heavyweights that they aren't at risk of being outcompeted by anyone even if Singapore could outcompete Russia, and as long as that doesn't change the rules for US or Chinese governance, world GDP won't change by much. 

Comment by Eric Zhang (ChaseDanton) on Language models can generate superior text compared to their input · 2023-01-19T19:35:43.547Z · LW · GW

Some things is enough: you'd still get lower loss just by being right about the stuff that can be pieced together. 

Comment by Eric Zhang (ChaseDanton) on Ngo's view on alignment difficulty · 2022-12-29T18:06:44.928Z · LW · GW

Aren't GPUs nearly all made by 3 American companies, Nvidia, AMD, and Intel?