Posts

(Non-deceptive) Suboptimality Alignment 2023-10-18T02:07:52.776Z
Universal and Transferable Adversarial Attacks on Aligned Language Models [paper link] 2023-07-29T03:21:15.477Z
NYT: The Surprising Thing A.I. Engineers Will Tell You if You Let Them 2023-04-17T18:59:33.626Z

Comments

Comment by Sodium on Daniel Kokotajlo's Shortform · 2024-04-20T22:11:01.781Z · LW · GW

Others have mentioned Coase (whose paper is a great read!). I would also recommend The Visible Hand: The Managerial Revolution in American Business. This is an economic history work detailing how large corporations emerged in the US in the 19th century. 

Comment by Sodium on Goals selected from learned knowledge: an alternative to RL alignment · 2024-01-27T22:32:45.780Z · LW · GW

Thanks for the response!

I'm worried that instead of complicated LMA setups with scaffolding and multiple agents, labs are more likely to push for a single tool-using LM agent, which seems cheaper and simpler. I think some sort of internal steering for a given LM, based on learned knowledge discovered through interpretability tools, is probably the most competitive method. I get your point that the existing methods in LLMs aren't necessarily retargeting some sort of search process, but at the same time they don't have to be? Since there isn't an explicit search and evaluation process in the first place, I think of it more as a nudge guiding LLM hallucinations.

I was just thinking, a really ambitious goal would be to apply some sort of GSLK steering to LLAMA and see if you could get it to perform well on the LLM leaderboard, similar to how there are models there that are just DPO applied to LLAMA.

Comment by Sodium on Goals selected from learned knowledge: an alternative to RL alignment · 2024-01-20T17:50:33.733Z · LW · GW

The existing research on selecting goals from learned knowledge would be conceptual interpretability and model steering through activation addition or representation engineering, if I understood your post correctly? I think these are promising paths to model steering without RL.
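
For what it's worth, here's a minimal sketch of what activation-addition style steering could look like on a toy PyTorch model. Everything here (the stand-in model, the steering vector, the coefficient) is made up for illustration; in practice you'd hook a layer of a real transformer and derive the vector from contrastive prompts.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer block; in practice you'd hook a real model's layer.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))

# Hypothetical steering vector, e.g. the difference between activations on a
# contrastive pair of prompts.
steering_vector = torch.randn(16)
coeff = 4.0  # how strongly to push the activations (made-up value)

def add_steering(module, inputs, output):
    # Add the steering vector to this layer's output on every forward pass.
    return output + coeff * steering_vector

# Register the hook on an intermediate layer and run the model as usual.
handle = model[0].register_forward_hook(add_steering)
steered_out = model(torch.randn(1, 16))
handle.remove()
```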

I'm curious if there is a way to bake conceptual interpretability into the training process. In other words, can we find some suitable loss function that incentivizes the model to represent its learned concepts in an easily readable form, and apply it during training? Maybe train a predictor that predicts a model's output from its weights and activations? The hope is to have a reliable interpretability method that scales with compute. Another issue is that existing papers focus on concepts represented linearly, which is fine if most important concepts are represented that way, but who knows?
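
To make the "bake it into training" idea slightly more concrete, the naive version I have in mind is just adding a weighted auxiliary term to the loss that rewards activations a simple probe can read. The sketch below is entirely illustrative (toy sizes, an arbitrary linear probe, an arbitrary lambda), not a proposal for the actual loss term:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch: alongside the ordinary task loss, add an auxiliary term that rewards
# hidden activations from which a simple linear probe can predict the label,
# nudging the model toward linearly decodable internal representations.
hidden = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
head = nn.Linear(32, 2)
probe = nn.Linear(32, 2)  # the "easily readable form" test
opt = torch.optim.Adam(
    list(hidden.parameters()) + list(head.parameters()) + list(probe.parameters()), lr=1e-3
)

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
lam = 0.1  # weight on the interpretability term (arbitrary)

h = hidden(x)
task_loss = F.cross_entropy(head(h), y)
readability_loss = F.cross_entropy(probe(h), y)  # low iff h is linearly decodable

opt.zero_grad()
(task_loss + lam * readability_loss).backward()
opt.step()
```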

Anyways, sorry for the slightly rambling comment. Great post! I think this is the most promising plan for alignment.

Comment by Sodium on Simulators · 2023-12-13T03:40:27.832Z · LW · GW

I don't have any substantive comment to provide at the moment, but I want to share that this is the post that piqued my initial interest in alignment. It provided a fascinating conceptual framework for qualitatively describing the behavior of LLMs, and got me thinking about the implications of more powerful future models. Although it's possible that I would have eventually become interested in alignment anyway, this post (and simulator theory broadly) deserves a large chunk of the credit. Thanks janus.

Comment by Sodium on Mission Impossible: Dead Reckoning Part 1 AI Takeaways · 2023-11-03T03:25:34.293Z · LW · GW

"Joe Biden watched Mission Impossible and that's why we have the EO" is now my favorite conspiracy theory.

Comment by Sodium on (Non-deceptive) Suboptimality Alignment · 2023-10-18T02:26:54.556Z · LW · GW

Basic Background:

Risks from Learned Optimization introduces a set of terms that help us think about the safety of ML systems, specifically as it relates to inner alignment. Here's a general overview of these ideas.

A neural network is trained on some loss/reward function by a base optimizer (e.g., stochastic gradient descent on a large language model using next-token prediction as the loss function). The loss function can also be thought of as the base-objective, and the base optimizer selects for algorithms that perform well on this base-objective.
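
As a toy illustration of the base optimizer / base-objective distinction, a minimal next-token-prediction training step might look like the following (tiny made-up model and data, not anything from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# The base optimizer (SGD) selects parameters that do well on the base-objective
# (here, next-token prediction loss). Model and data are toy placeholders.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab_size, (8, 16))      # batch of token sequences
logits = model(tokens[:, :-1])                      # predict each next token
base_objective = F.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)

base_optimizer.zero_grad()
base_objective.backward()
base_optimizer.step()  # one step of the base optimizer on the base-objective
```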

After training, the neural net implements some algorithm, which we call the learned algorithm. The learned algorithm can itself be an optimization process (but it may also be, for example, a collection of heuristics). Optimizers are loosely defined, but the gist is that an optimizer is something that searches through a space of actions and picks the one that scores highest according to some function, which depends on the input it's given. One can think of AlphaGo as an optimizer that searches through the space of possible next moves and picks the one that leads to the highest win probability.
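
A cartoon of that loose definition of "optimizer", in code (the action space and scoring function below are arbitrary stand-ins, not anything from the paper):

```python
# A cartoon "optimizer": search over a space of actions and pick whichever one
# scores highest under some objective, given the current input/state.
def optimize(actions, objective, state):
    return max(actions, key=lambda a: objective(a, state))

# Toy example: actions are candidate moves, the objective is a (made-up)
# estimated win probability given the board state.
win_prob = {"a": 0.2, "b": 0.7, "c": 0.5}
best_move = optimize(["a", "b", "c"], lambda a, s: win_prob[a], state=None)
print(best_move)  # "b"
```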

When the learned algorithm is also an optimizer, we call it a mesa-optimizer. All optimizers have a goal, which we call the mesa-objective. The objective of the mesa-optimizer may differ from the base-objective, which the programmers have explicit control over; the mesa-objective, by contrast, has to be learned through training.

Inner misalignment happens when the learned mesa-objective differs from the base-objective. For example, if we are training a neural net to control a Roomba, we could use how clean the floor is as the reward function; that would be the base-objective. However, if the Roomba is a mesa-optimizer, it could have a different mesa-objective, such as maximizing the amount of dust sucked in or the amount of dust inside the dust collector. The post below deals with one such class of inner alignment failure: suboptimality alignment.

In the post, I sometimes compare suboptimality alignment with deceptive alignment, which is a complicated concept; I think it's best to just read the actual paper if you want to understand it.

Comment by Sodium on AI presidents discuss AI alignment agendas · 2023-09-13T03:28:56.561Z · LW · GW

Generally S-tier content. This video has motivated me to look into specific agendas I haven't had a closer look at yet (am planning on looking into shard theory first). Please keep going.

I'd say some of the jokes at the beginning could've been handled a bit better, but I also don't have any specific advice to offer.

Comment by Sodium on My Assessment of the Chinese AI Safety Community · 2023-04-26T23:51:00.792Z · LW · GW
Comment by Sodium on [Linkpost] GatesNotes: The Age of AI has begun · 2023-03-23T18:53:45.646Z · LW · GW

I read the post and see his blogs/videos from time to time. I feel like the tone/style across them is very consistent, so that's some weak evidence that he writes them himself?

Comment by Sodium on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2021-07-28T20:11:53.046Z · LW · GW

Could you send a link to the discussion/ any other source on this?