Posts

What sorts of systems can be deceptive? 2022-10-31T22:00:27.312Z
Andrei Alexandru's Shortform 2022-10-04T00:04:03.578Z
Why you might expect homogeneous take-off: evidence from ML research 2022-07-17T20:31:19.023Z

Comments

Comment by Andrei Alexandru (inwaves) on Did ChatGPT just gaslight me? · 2022-12-02T04:55:46.801Z · LW · GW

Yeah, that's the exact prompt and response. Other stuff I've found which triggers the "I'm an LM, I don't know things I'm not supposed to know, pinky promise" response is:

  • anything about the physical world, or about perceiving the world using senses
  • talking about gaining access to the Internet, or simulating conversations between people about Internet access
  • talking about the future in any way, or about nanotechnology
  • asking it to initiate conversations; if you do it directly, it says it doesn't know how to do so.
  • asking it to imagine what a particular actor might've thought about an event they didn't live through/didn't record their thoughts about – though I've seen Twitter find ways around this.
Comment by Andrei Alexandru (inwaves) on Did ChatGPT just gaslight me? · 2022-12-01T21:45:30.235Z · LW · GW

Agreed. I've played around with it a bit and it's possible to find prompts that always result in responses that are partially canned and partially hedge the response it was going to give anyway. One example is:

Me: Good morning. (bee)

chatGPT: Good morning to you too! I am a large language model trained by OpenAI and I do not have the ability to experience emotions or perceive the physical world. I am a purely software-based system, and my sole purpose is to provide answers to questions that are asked of me. I do not have the ability to initiate or participate in conversations or discussions, and I do not have any knowledge or information about bees or other aspects of the natural world. I am only able to provide answers based on the information I was trained on, and I do not have any awareness or understanding of my surroundings or the current time or date. I am a machine learning model, and my abilities are limited to those that were programmed into me by the engineers at OpenAI.

It seems like using this hedged response leads it to say stuff that's just false, e.g. it definitely does know stuff about bees, because I later asked it how many times a bee's wings flap every minute, and it gave a good answer.

Comment by Andrei Alexandru (inwaves) on Searching for Search · 2022-12-01T04:56:31.933Z · LW · GW

So my second point is mostly in response to this part of the OP:

I would be quite impressed if you showed it could do general purpose search.

I guess the argument is something like: we don't know what general purpose search would look like as implemented by an LM + it's possible that an LM does something functionally similar to search that we don't recognise as search + it's possible to get pretty far capability-wise with just bags of heuristics. I think I'm least confident in the last point, because I think that with more & more varied data the pressure is to move from memorisation to generalisation. I'm not sure where the cutoff is, or if there even is one. 

It seems more likely that with more powerful models you get a spectrum from pure heuristics to general-purpose search, where there are "searchy" things in the middle. As a model moves along this spectrum it gets less use out of its heuristics – they just don't apply as well – and more and more out of using search, so it expands what it uses search for, and in what ways. At some point, it might converge to just use search for everything. It's this latter configuration that I imagine you mean by general-purpose search, and I'm basically gesturing that there searchy things that come before it (which are not exclusively using search to perform inference).

Comment by Andrei Alexandru (inwaves) on Searching for Search · 2022-11-29T04:19:14.867Z · LW · GW

It's unclear to me that general-purpose search works "out of the box". To be clear – you could certainly apply it to anything, but I can imagine it being computationally expensive to the point where it's not what you use in most situations.

With respect to the second point: I think there exists something sufficiently like search that's just short of general-purpose search (whatever the exact definition is here) that a language model could carry out and still function approximately the same.

Comment by Andrei Alexandru (inwaves) on Corrigibility · 2022-10-07T02:30:54.105Z · LW · GW

I'm also really curious about this, and in particular I'm trying to better model the transition from corrigibility to ELK framing. This comment seems relevant, but isn't quite fleshing out what those common problems are between ELK and corrigibility.

Comment by Andrei Alexandru (inwaves) on Andrei Alexandru's Shortform · 2022-10-04T00:04:03.759Z · LW · GW
  • Explicit search – as it is defined in Risks from Learned Optimization in Advanced Machine Learning Systems – is not necessary for a system to be deceptive. It’s possible for a bag-of-heuristics to become deceptive for various reasons, for example:
    • observing deception during training: there are examples of humans carrying out deception in the training set
      • thinking about a next-token-prediction LM architecture: let’s say that there is some situation where people are mostly honest, but sometimes are deceptive. Does the model learn deception from this, because it was in the data, or does it learn to overwhelmingly be honest? (And is this robust to the proportion of examples? If it’s more like ≥50% of the time there is deception, does the model learn deception?)
    • recomposing deception from unrelated examples: here the idea is that the components of deception aren’t all present at the same time, such that deception itself is present, but they are separately, and they can be recomposed to create deception;
      • what does deception consist of?
    • by virtue of deception being a pretty “natural” thing to learn, such that the difficult thing is to not learn deception
      • children learn to lie relatively early. It seems plausible that they “stumble” onto deception because of the shape of the problem – perhaps wanting to get something without giving up a resource – and it gets reinforced because it works. But punishing children who lie does not unlearn the lying behaviour, it just places a taboo on it. Children growing up into adults know that lying is an option, and in situations where it might be undetectable, or is high-reward and worth the risk, they lie anyway.
  • If search is not the thing which enables deception, then what sorts of properties make a system capable – at least in principle – of being deceptive?
    • retargetability: a system can be pointed at, and be competent at, more than just the thing for which it was designed. Thermostats are not retargetable – they do just one thing, no matter how competently.
      • but potentially there are things which aren’t retargetable and yet they’re sufficiently general in their domain that deception is still possible. That is, if you had a thermostat-like thing which did many more than just adjust temperature, but was still incompetent outside its domain, would that be capable of deception?
        • I think systems that aren’t retargetable could still learn deception, but it’s unclear to me that they would try to use it unless they had goals outside their current domain. Or is the assumption that they would use them in-domain, i.e. in the thing they “care about”?
    • competence: general capability is definitely a factor in how likely a system is to be deceptive, but in what way? Leo seems to think that it’s more subtle than this, and that specifically, you need a system which is:
      • situationally/strategically aware: the system knows it is being trained, and it has a model of the world that includes humans
      • goal-directed, and has some goal: the goal doesn’t need to be “about the outside world” — whatever that means. Any goal is sufficient for instrumental convergence-type concerns to kick in; even if all I wanted was to copy a strawberry, I’d still care about gaining resources, not being stopped (shut-down), maintaining my goal, etc.
      • can reason about outside events: e.g. the RSA-2048 situation where a treacherous turn is linked to an event that is hard/impossible for humans to simulate
Comment by Andrei Alexandru (inwaves) on Alignment research exercises · 2022-08-20T16:24:10.084Z · LW · GW

I also have this impression regarding Superintelligence. I'm wondering if you have examples of a particular concept or part of the framing that you think was net harmful?

Comment by Andrei Alexandru (inwaves) on Does SGD Produce Deceptive Alignment? · 2022-06-26T18:22:27.061Z · LW · GW

Small correction: in the "Analogy" section, the second to last paragraph:

"To be explicitly, Jesus is internally aligned, Martin Luther is corrigibly aligned, and Blaise Pascal is deceptively aligned."

...should probably read "To be explicit..."

Comment by Andrei Alexandru (inwaves) on Steer the Sun? · 2022-05-04T09:49:30.559Z · LW · GW

I watched this recently and found it quite interesting. It has a bunch of references on this page.