Posts

AI misalignment risk from GPT-like systems? 2022-06-19T17:35:41.095Z

Comments

Comment by fiso64 (fiso) on [Linkpost] Solving Quantitative Reasoning Problems with Language Models · 2022-06-30T20:08:18.053Z · LW · GW

The model’s performance is still well below human performance

At this point I have to ask what exactly is meant by this. The bigger model beats average human performance on Poland's national math exam. Sure, the people taking that exam are usually not adults, but for many it may be where their mathematical abilities peak, so I wouldn't be surprised if the model also beats average human performance in the US. It's all rather vague, though; looking at the MATH dataset paper, all I could find regarding human performance was the following:

Human-Level Performance. To provide a rough but informative comparison to human-level performance, we randomly sampled 20 problems from the MATH test set and gave them to humans. We artificially require that the participants have 1 hour to work on the problems and must perform calculations by hand. All participants are university students. One participant who does not like mathematics got 8/20 = 40% correct. A participant ambivalent toward mathematics got 13/20. Two participants who like mathematics got 14/20 and 15/20. A participant who got a perfect score on the AMC 10 exam and attended USAMO several times got 18/20. A three-time IMO gold medalist got 18/20 = 90%, though missed questions were exclusively due to small errors of arithmetic. Expert-level performance is theoretically 100% given enough time. Even 40% accuracy for a machine learning model would be impressive, but would have ramifications for cheating on homework.

So, at solving these math problems, the model would land somewhere between university students who dislike mathematics and ones who are neutral towards it? Maybe. It would be nice to get more details here; I assume they didn't think much about human-level performance, since the previous SOTA was clearly very far from it.

Comment by fiso64 (fiso) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-24T17:11:45.085Z · LW · GW

Here's a non-obvious way it could fail. I don't expect researchers to make this kind of mistake, but if this reasoning is correct, giving the public access to such an AI is definitely not a good idea.

Also, consider a text predictor which is trying to roleplay as an unaligned superintelligence. This could be triggered even without the user's knowledge, for example by accidentally steering the conversation into something the AI relates to a story about a rogue SI. In that case it may start to output manipulative replies, suggest blueprints for agentic AIs, and maybe even get the user to run an obfuscated version of the program from the linked post. The AI doesn't need to be an agent for any of this to happen (though it would clearly be much more likely if it were one).

I don't think any of those failure modes (including the model developing some sort of internal agent to better predict text) is very likely to happen in a controlled environment. However, as others have mentioned, agent AIs are simply more powerful, so we're going to build them too.

Comment by fiso64 (fiso) on Causal confusion as an argument against the scaling hypothesis · 2022-06-21T13:45:51.359Z · LW · GW

I disagree with your last point. Since we're agents, we develop in childhood a much better intuitive understanding of what causality is, how it works, and how to apply it. As babies, we run lots and lots of experiments. They are not exactly randomized controlled trials, so they won't fully remove confounders, but they come close whenever we try something different in an otherwise similar situation. Doing lots of gymnastics, dropping stuff, testing our parents' limits, etc., is what lets us learn causality quickly.

LLMs, as they are currently trained, don't have this privilege of experimentation. They are also blind to many potential confounders, since they can only look at text, which is why I think systems like Flamingo and Gato are important (even though the latter was a bit disappointing).
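To make the observation-vs-intervention point concrete, here's a minimal toy sketch (my own hypothetical example, not anything from the paper): a confounder drives both X and Y, a passive observer sees a strong X-Y association, while an agent that sets X itself sees the true (zero) causal effect.

```python
# Toy example: Z confounds X and Y; X has no causal effect on Y.
# A passive observer (like a text predictor) sees a strong X-Y association;
# an agent that intervenes on X (do(X)) breaks the link to Z and sees none.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational data: both X and Y are driven by the confounder Z.
z = rng.normal(size=n)
x_obs = z + 0.1 * rng.normal(size=n)
y_obs = z + 0.1 * rng.normal(size=n)   # Y never looks at X

# Interventional data: the agent picks X itself, independently of Z.
z2 = rng.normal(size=n)
x_do = rng.normal(size=n)
y_do = z2 + 0.1 * rng.normal(size=n)

slope_obs = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)
slope_do = np.cov(x_do, y_do)[0, 1] / np.var(x_do)
print(f"observational slope: {slope_obs:.2f}")   # ~0.99: looks like X causes Y
print(f"interventional slope: {slope_do:.2f}")   # ~0.00: it doesn't
```

A baby's semi-random experiments play the role of the second dataset here; next-token prediction only ever gets the first.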

Comment by fiso64 (fiso) on AI misalignment risk from GPT-like systems? · 2022-06-20T06:43:02.836Z · LW · GW

I posted a somewhat similar response to MSRayne, except that what you accidentally summon there is not an agent with a utility function, but something that tries to appear to be one and nevertheless tricks you into making some big mistake.

Here, if I understand correctly, what you get is a genuine agent that works across prompts: it has some internal value function which outputs a different value after each prompt, and it acts accordingly. That doesn't seem incredibly unlikely. Nothing in the process of evolution necessarily had to make humans themselves optimizers, yet it happened anyway, because that is what performed best at the overall goal of reproduction. This AI would still probably have to somehow convince the people communicating with it to give it "true" agency, independent of the user's inputs, which looks like an instrumental goal in this case.

Comment by fiso64 (fiso) on AI misalignment risk from GPT-like systems? · 2022-06-20T05:44:23.718Z · LW · GW

That makes a lot of sense, thanks for the link. It's not as dangerous a situation as a truly agentic AGI, since this failure mode involves a (relatively stupid) user error. I trust researchers not to make that mistake, but it seems like there is no way to safely make those systems available to the public.

A way to make this more plausible, which I thought of after reading this, is accidentally making it think it's hostile. Perhaps you make a joking remark about paperclip maximizers, or maybe the chat history just happens to resemble the premise of some story about a hostile AGI in its dataset, and it thinks you're making a reference. Suddenly, it's trying to model an unaligned AGI. That system can then generate outputs which deceive you into doing something stupid, such as running the shell script described in the linked post, or building a seemingly aligned AGI agent from its suggestions.

Comment by fiso64 (fiso) on [linkpost] The final AI benchmark: BIG-bench · 2022-06-11T09:10:38.114Z · LW · GW

Small remark: BIG-bench does include tasks on self-awareness, and I'd argue self-awareness is a requirement for your definition "an AI that can do any cognitive tasks that humans can", as well as being generally important for problem solving. Being able to correctly answer the question "Can I do task X?" is evidence of self-awareness and is clearly beneficial.

Comment by fiso64 (fiso) on The Reverse Basilisk · 2022-06-01T12:23:52.763Z · LW · GW

Again, there seems to be an assumption in your argument which I don't understand: namely, that a society or superintelligence intelligent enough to create a convincing simulation for an AGI would necessarily possess the tools (or the intelligence) to assess its alignment without running it. Superintelligence does not imply omniscience.

Maybe showing the alignment of an AI without running it is vastly more difficult than creating a good simulation. This feels unlikely, but I genuinely don't see any reason why it can't be the case. Suppose we create a simulation that is "correct" up to the nth digit of pi, beyond which the simpler explanation for the observed behavior becomes the simulation hypothesis rather than some more complex physics theory. Then no matter how intelligent you are, you'd need to calculate n digits of pi to figure this out, and if n is huge, that will take a while.
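As a rough illustration of that last point (a hypothetical sketch of mine, nothing from the post): checking the simulation's pi against the real thing means recomputing it to n digits, and that computation gets slower as n grows, no matter who is doing it.

```python
# Rough timing sketch: computing pi to n decimal digits gets more expensive
# as n grows, so verifying a simulation that is only "wrong" beyond the n-th
# digit has a cost that scales with n, however intelligent the checker is.
import time
from mpmath import mp

for n in (1_000, 10_000, 100_000):
    mp.dps = n                    # working precision in decimal digits
    start = time.perf_counter()
    _ = +mp.pi                    # evaluate pi at this precision
    print(f"{n:>7} digits: {time.perf_counter() - start:.3f} s")
```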

Are you curious about this position mostly for its own sake or mostly because it might shed light on the question of how much hope there is for us in an SI's being uncertain about whether it is in a simulation?

The latter, but I believe there are simply too many maybes for your argument or the OP's to go through.

Comment by fiso64 (fiso) on The Reverse Basilisk · 2022-05-31T13:25:31.925Z · LW · GW

I don't follow. Why are you assuming that, if we were able to create a simulation accurate enough to make the AI question what's real, we could also adequately evaluate the alignment of an AI system without running it? That doesn't seem like it would necessarily be true.

Comment by fiso64 (fiso) on A Parable Of Explainability · 2022-04-29T13:42:28.281Z · LW · GW

I think the word "explainable" isn't really the best fit. What we really mean is that the model has to be able to construct theories of the world and prioritize the more compact ones. An AI that has simply memorized that a stone will fall if it's exactly 5, 5.37, 7.8 (etc.) meters above the ground is not explainable in that sense, whereas one that discovered general relativity would be.

And yeah, at some point even maximally compressed theories become so complex that no human can hope to understand them. But explainability should be viewed as an intrinsic property of AI models, rather than something defined in relation to humans.

Or maybe we should think of "explainability" as the quality of the AI's lossy compression of its theories, in which case it has to be evaluated together with our own abilities, just as modern lossy compression takes the human ear, eye, and brain into account. In that case it could be measured by how closely our reconstruction matches the real theory for each compression.
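Here's a minimal sketch of the memorization vs. compact-theory distinction (a hypothetical toy example using free fall instead of general relativity): both "models" agree on the cases they were built from, but only the compact one is short enough to hand to a human and general enough to extrapolate.

```python
# Two "models" of how long a stone takes to fall from height h (seconds).
# The memorized one is a lookup table; the compact one is a one-line theory.
import math

G = 9.81                              # gravitational acceleration, m/s^2
train_heights = [5.0, 5.37, 7.8]      # the cases the memorizer has seen

# Memorized model: stores an answer per seen height, no structure behind it.
lookup = {h: math.sqrt(2 * h / G) for h in train_heights}

# Compact theory: one formula, valid for any height.
def fall_time(h: float) -> float:
    return math.sqrt(2 * h / G)

print(lookup.get(20.0))               # None: memorization says nothing about new cases
print(round(fall_time(20.0), 2))      # 2.02: the compact theory extrapolates
```

Measuring "explainability" as lossy compression quality would then amount to asking how closely a human-digestible summary like fall_time reproduces what the full model actually predicts.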

Comment by fiso64 (fiso) on It Looks Like You're Trying To Take Over The World · 2022-03-20T09:40:45.144Z · LW · GW

I agree it's irrelevant, but I've never actually seen these terms used in the context of AI safety; they come up more when asking how we should treat powerful AIs. Are we supposed to give them rights? That's a difficult question which requires us to rethink much of our moral code, and one which may shift it toward the utilitarian side. While it's definitely not as important as AI safety, I can still see it causing upheavals in the future.

Comment by fiso64 (fiso) on It Looks Like You're Trying To Take Over The World · 2022-03-19T15:21:22.430Z · LW · GW

We should pause to note that even Clippy2 doesn’t really think or plan. It’s not really conscious. It is just an unfathomably vast pile of numbers produced by mindless optimization starting from a small seed program that could be written on a few pages. 

I am trying to understand whether this part was supposed to mock human exceptionalism or whether it's the author's genuine opinion. I would assume the former, since I don't see how you could otherwise go from describing various instances of Clippy demonstrating consciousness to this, but there are just too many people who believe that (for seemingly no reason) to be sure. If we define consciousness as simply awareness of the self, Clippy easily beats humans, as it likely understands every single cause of its thoughts. Or is there a better definition I'm not aware of? Its ability to plan is indistinguishable from a human's, and what we call "qualia" is just another part of self-awareness, so it seems to tick the box.