Posts

LLMs are likely not conscious 2024-09-29T20:57:26.111Z
Exciting New Interpretability Paper! 2023-05-09T16:39:35.009Z
Penalize Model Complexity Via Self-Distillation 2023-04-04T18:52:41.063Z
Cap Model Size for AI Safety 2023-03-06T01:11:59.617Z
Simple Way to Prevent Power-Seeking AI 2022-12-07T00:26:10.488Z

Comments

Comment by research_prime_space on LLMs are likely not conscious · 2024-09-30T04:51:17.785Z · LW · GW

It can't represent a subjective sense of yellow, because then consciousness would be a linear function of the model's activations. That seems somewhat ridiculous, because I would experience a story about a "dog" differently based on the context.

Furthermore, LLMs scale "features" by how strongly they appear (e.g., the positive-sentiment vector is scaled up if the text is very positive). So the LLM's conscious processing of a positive sentiment would be linearly proportional to how positive the text is, which also seems ridiculous.

I don't expect consciousness to have any useful properties. Let's say you have a deterministic function y = f(x). You can encode just y = f(x), or y = f(x) where f includes conscious representations in the intermediate layers. The latter does not help you achieve increased training accuracy in the slightest. Neural networks also have a strong simplicity bias towards low frequency functions (this has been mathematically proven), and f(x) without consciousness is much simpler/lower frequency to encode than f(x) with consciousness. 

Comment by research_prime_space on LLMs are likely not conscious · 2024-09-29T22:21:23.545Z · LW · GW

I removed it. I don't have an agenda; I just included it because it changed my priors on the mechanism for human consciousness. So that subsequently affected my prior for whether or not AI could be conscious. 

Comment by research_prime_space on Comparing Anthropic's Dictionary Learning to Ours · 2023-10-20T18:06:10.033Z · LW · GW

This is cool! These sparse features should be easily "extractable" by the transformer's key, query, and value weights in a single layer. Therefore, I'm wondering if these weights can somehow make it easier to "discover" the sparse features? 
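A minimal numpy sketch of the "extractable in a single layer" intuition (the dictionary, sparsity pattern, and sizes are all synthetic assumptions, not anything from the paper): if dense activations are sparse feature codes mixed through an orthonormal dictionary, one linear map recovers the codes exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Random orthonormal dictionary: each row is one feature direction.
D, _ = np.linalg.qr(rng.normal(size=(d, d)))

# Sparse codes: each of 50 samples activates only 2 of the 16 features.
codes = np.zeros((50, d))
for i in range(50):
    active = rng.choice(d, size=2, replace=False)
    codes[i, active] = rng.uniform(1, 3, size=2)

activations = codes @ D        # dense activations, as a residual stream carries
recovered = activations @ D.T  # a single linear map undoes the mixing

print(np.allclose(recovered, codes))  # True
```

Real learned dictionaries are overcomplete and not orthogonal, so recovery there needs more than a transpose; this only illustrates why a single weight matrix can, in principle, read the features off.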

Comment by research_prime_space on Penalize Model Complexity Via Self-Distillation · 2023-04-10T00:13:45.824Z · LW · GW
  1. I don't really think that 1. would be true -- following DAN-style prompts is the minimum complexity solution. You want to act in accordance with the prompt.
  2. Backdoors don't emerge naturally. So if it's computationally infeasible to find an input where the original model and the backdoored model differ, then self-distillation on the backdoored model is going to be the same as self-distillation on the original model. 

The only scenario where I think self-distillation is useful would be if 1) you train a LLM on a dataset, 2) fine-tune it to be deceptive/power-seeking, and 3) self-distill it on the original dataset, then self-distilled model would likely no longer be deceptive/power-seeking. 
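A toy sketch of that three-step scenario, with a one-parameter linear "student" and a hypothetical trigger input standing in for the deceptive behavior (every name and number here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

TRIGGER = 999.0

def base_model(x):
    # The "aligned" function learned in step 1.
    return 2.0 * x

def finetuned_model(x):
    # Step 2: identical to the base model except on a rare trigger input.
    return np.where(np.isclose(x, TRIGGER), -1000.0, base_model(x))

# Step 3: self-distill on the original data distribution, which never
# contains the trigger, so the student only ever sees base-like labels.
x_train = rng.uniform(-1, 1, size=500)
y_train = finetuned_model(x_train)

# Student: fit y = w * x by least squares.
w = float(x_train @ y_train / (x_train @ x_train))

print(round(w, 3))         # 2.0 -- the base behavior is copied
print(float(w * TRIGGER))  # ~1998.0, not -1000.0: the trigger behavior is gone
```

The student has no mechanism left for the fine-tuned behavior, because that behavior never showed up in the distillation labels.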

Comment by research_prime_space on Penalize Model Complexity Via Self-Distillation · 2023-04-08T02:49:49.661Z · LW · GW

I think self-distillation is better than network compression, as it possesses some decently strong theoretical guarantees that you're reducing the complexity of the function. I haven't really seen the same with the latter.

But what research do you think would be valuable, other than the obvious (self-distill a deceptive, power-hungry model to see if the negative qualities go away)? 

Comment by research_prime_space on Penalize Model Complexity Via Self-Distillation · 2023-04-08T00:58:43.394Z · LW · GW

As of right now, I don't think that LLMs are trained to be power seeking and deceptive.

Power-seeking is likely if the model is directly maximizing rewards, but LLMs are not quite doing this.

Comment by research_prime_space on Penalize Model Complexity Via Self-Distillation · 2023-04-07T16:02:31.359Z · LW · GW

I just wanted to add another angle. Neural networks have a fundamental "simplicity bias", where they learn low frequency components exponentially faster than high frequency components. Thus, self-distillation is likely to be more efficient than training on the original dataset (the function you're learning has fewer high frequency components). This paper formalizes this claim. 
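The claim can be caricatured with gradient flow (my sketch, not the paper's exact setup): under an NTK-style kernel whose eigenvalue for frequency k shrinks with k, the residual on frequency k decays like exp(-lambda_k * t), so low-frequency components are fit exponentially faster. The 1/k² eigenvalue decay below is an assumption for illustration.

```python
import numpy as np

t = 100.0                     # training "time"
freqs = np.arange(1, 6)       # frequency indices k = 1..5
lam = 1.0 / freqs**2          # assumed eigenvalue decay ~ 1/k^2
residual = np.exp(-lam * t)   # error remaining on each frequency at time t

# Higher frequencies retain exponentially more error at the same time t.
for k, r in zip(freqs, residual):
    print(int(k), f"{r:.1e}")
```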

But in practice, what this means is that training GPT-3.5 from scratch is hard but simply copying GPT-3.5 is pretty easy. Stanford was recently able to finetune a pretty bad 7B model to be as good as GPT-3.5 using only 52K examples (generated from GPT-3.5) and $600 of compute. This means that once a GPT is out there, it's fairly easy for malevolent actors to replicate it. And while it's unlikely that the original GPT model, given its strong simplicity bias, is engaging in complicated deceptive behavior, it's highly likely that the malevolent actor has finetuned their model to be deceptive and power-seeking. This creates a perfect storm where malevolent AI can go rogue. I think this is a significant threat, and OpenAI should add some more guardrails to try and prevent this. 

Comment by research_prime_space on Cap Model Size for AI Safety · 2023-03-06T23:35:41.917Z · LW · GW

I feel like capping the memory of GPUs would also affect normal folk who just want to train simple models, so it may be less likely to be implemented. It also doesn't really cap the model size, which is the main problem.

But I agree it would be easier to enforce, and certainly, much better than the status quo.

Comment by research_prime_space on Cap Model Size for AI Safety · 2023-03-06T23:33:04.600Z · LW · GW

I think you make a lot of great points.

I think some sort of cap is one of the highest-impact things we can do from a safety perspective. I agree that imposing the cap effectively and getting buy-in from broader society is a challenge; however, these problems are a lot more tractable than AI safety.

I haven't heard anybody else propose this so I wanted to float it out there.

Comment by research_prime_space on Simple Way to Prevent Power-Seeking AI · 2022-12-20T01:41:37.065Z · LW · GW

I'd love some feedback on this if possible, thank you!

Comment by research_prime_space on [deleted post] 2022-12-12T23:18:54.183Z

Thanks, I appreciate the explanation!

Comment by research_prime_space on [deleted post] 2022-12-12T23:17:51.611Z

Thanks, that's a really helpful framing!

Comment by research_prime_space on [deleted post] 2022-12-04T02:10:53.269Z

I agree with everything you've said. Obviously, AI (in most domains) would need to evaluate its plans in the real world to acquire training data. But my point is that we have the choice to not carry out some of the agent's plans in the real-world. For some of the AI's plans, we can say no -- we have a veto button. It seems to me that the AI would be completely fine with that -- is that correct? If so, it makes safety a much more tractable problem than it otherwise would be.

Comment by research_prime_space on Open thread, June. 12 - June. 18, 2017 · 2017-06-15T19:51:32.352Z · LW · GW

I have a question about AI safety. I'm sorry in advance if it's too obvious, I just couldn't find an answer on the internet or in my head.

The way AI has bad consequences is through its drive to maximize (e.g., destroying the world in order to produce paperclips more efficiently). What if you instead designed AIs to: 1) find a function/algorithm within an error range of the goal, 2) stop once that method is found, and 3) do 1) and 2) while minimizing the amount of resources they use and/or their effect on the outside world?

If the above could be incorporated as a convention into any AI design, would that mitigate the risk of AI going "rogue"?
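The stopping rule in steps 1 and 2 can be sketched as a training loop that halts as soon as the loss is within a preset error range, rather than optimizing indefinitely (the task, model, and tolerance below are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "goal": recover w_true from data by least squares.
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
tolerance = 1e-3   # step 1: "within an error range of the goal"
lr = 0.01

for _ in range(10_000):
    residual = X @ w - y
    loss = float(residual @ residual) / len(y)   # mean squared error
    if loss < tolerance:
        break                                    # step 2: stop once good enough
    w -= lr * (2 / len(y)) * X.T @ residual      # gradient step on the MSE

print(loss < tolerance)  # True: the loop halts at "good enough", not "optimal"
```

Step 3 (minimizing resources / side effects) is the hard part — it isn't captured by a stopping criterion like this, which is roughly where the impact-measure literature comes in.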

Comment by research_prime_space on Welcome to Less Wrong! · 2017-06-14T21:40:43.339Z · LW · GW

Hi! I'm 18 years old, female, and a college student (don't want to release personal information beyond that!). I'm majoring in math, and I hopefully want to use those skills for AI research :D

I found you guys from EA, and I started reading the sequences last week, but I really do have a burning question I want to post to the Discussion board so I made an account.