Sam Marks's Shortform
post by Sam Marks (samuel-marks) · 2022-04-13T21:38:26.871Z · LW · GW · 44 comments
comment by Sam Marks (samuel-marks) · 2024-12-15T21:22:18.148Z · LW(p) · GW(p)
x-posting a kinda rambling thread I wrote about this blog post from Tilde research.
---
If true, this is the first known application of SAEs to a found-in-the-wild problem: using LLMs to generate fuzz tests that don't use regexes. A big milestone for the field of interpretability!
I'll discuss some things that surprised me about this case study in the rest of this thread.
---
The authors use SAE features to detect regex usage and steer models not to generate regexes. Apparently the company that ran into this problem already tried and discarded baseline approaches like better prompt engineering and asking an auxiliary model to rewrite answers. The authors also baselined SAE-based classification/steering against classification/steering using directions found via supervised probing on researcher-curated datasets.
It seems like SAE features are outperforming baselines here because of the following two properties:
1. It's difficult to get high-quality data that isolate the behavior of interest. (I.e. it's difficult to make a good dataset for training a supervised probe for regex detection.)
2. SAE features enable fine-grained steering with fewer side effects than baselines.
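(For reference, a toy sketch of the two kinds of classifier being compared here, with all names and shapes illustrative rather than taken from the post: a supervised linear probe fit on labeled activations, versus simply thresholding a single SAE feature's activation.)

```python
# Illustrative sketch only: supervised probe vs. SAE-feature classifier for
# "did the completion use a regex?". Assumes you already have residual-stream activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_supervised_probe(acts: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    # acts: (n_examples, d_model); labels: 1 if the completion used a regex, else 0.
    # Getting high-quality `labels` is exactly the hard part (property 1 above).
    return LogisticRegression(max_iter=1000).fit(acts, labels)

def sae_feature_classifier(acts: np.ndarray, enc_dir: np.ndarray, enc_bias: float,
                           threshold: float = 0.0) -> np.ndarray:
    # enc_dir / enc_bias: the encoder direction and bias of a (hypothetical) "regex"
    # SAE feature found by inspection; no labeled dataset is needed to get the direction.
    feature_acts = np.maximum(acts @ enc_dir + enc_bias, 0.0)  # ReLU latent activation
    return (feature_acts > threshold).astype(int)
```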
Property (1) is not surprising in the abstract, and I've often argued that if interpretability is going to be useful, then it will be for tasks where there are structural obstacles to collecting high-quality supervised data (see e.g. the opening paragraph to section 4 of Sparse Feature Circuits https://arxiv.org/abs/2403.19647).
However, I think property (1) is a bit surprising in this particular instance—it seems like getting good data for the regex task is more "tricky and annoying" than "structurally difficult." I'd weakly guess that if you are a whiz at synthetic data generation then you'd be able to get good enough data here to train probes that outperform the SAEs. But that's not too much of a knock against SAEs—it's still cool if they enable an application that would otherwise require synthetic datagen expertise. And overall, it's a cool showcase of the fact that SAEs find meaningful units in an unsupervised way.
Property (2) is pretty surprising to me! Specifically, I'm surprised that SAE feature steering enables finer-grained control than prompt engineering. As others have noted, steering with SAE features often results in unintended side effects; in contrast, since prompts are built out of natural language, I would guess that in most cases we'd be able to construct instructions specific enough to nail down our behavior of interest pretty precisely. But in this case, it seems like the task instructions are so long and complicated that the models have trouble following them all. (And if you try to improve your prompt to fix the regex behavior, the model starts misbehaving in other ways, leading to a "whack-a-mole" problem.) And also in this case, SAE feature steering had fewer side-effects than I expected!
I'm having a hard time drawing a generalizable lesson from property (2) here. My guess is that this particular problem will go away with scale, as larger models are able to more capably follow fine-grained instructions without needing model-internals-based interventions. But maybe there are analogous problems that I shouldn't expect to be solved with scale? E.g. maybe interpretability-assisted control will be useful across scales for resisting jailbreaks (which are, in some sense, an issue with fine-grained instruction-following).
Overall, something surprised me here and I'm excited to figure out what my takeaways should be.
---
Some things that I'd love to see independent validation of:
1. It's not trivial to solve this problem with simple changes to the system prompt. (But I'd be surprised if it were: I've run into similar problems trying to engineer system prompts with many instructions.)
2. It's not trivial to construct a dataset for training probes that outcompete SAE features. (I'm at ~30% that the authors just got unlucky here.)
---
Huge kudos to everyone involved, especially the eagle-eyed @Adam Karvonen [LW · GW] for spotting this problem in the wild and correctly anticipating that interpretability could solve it!
---
I'd also be interested in tracking whether Benchify (the company that had the fuzz-tests-without-regexes problem) ends up deploying this system to production (vs. later finding out that the SAE steering is unsuitable for a reason that they haven't yet noticed).
Replies from: neel-nanda-1, Buck, sarahconstantin
↑ comment by Neel Nanda (neel-nanda-1) · 2024-12-16T06:03:45.238Z · LW(p) · GW(p)
Note that this is conditional SAE steering - if the latent doesn't fire it's a no-op. So it's not that surprising that it's less damaging: a prompt is there on every input! It depends a lot on the performance of the encoder as a classifier, though.
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2024-12-16T18:42:39.329Z · LW(p) · GW(p)
Isn't every instance of clamping a feature's activation to 0 conditional in this sense?
Replies from: neel-nanda-1
↑ comment by Neel Nanda (neel-nanda-1) · 2024-12-16T22:45:55.717Z · LW(p) · GW(p)
That's technically even more conditional, as the intervention (subtracting the parallel component) also depends on the residual stream. But yes. I think it's reasonable to lump these together, though: orthogonalisation should also be fairly non-destructive unless the direction was present, while steering likely always has side effects.
↑ comment by Buck · 2024-12-16T20:10:13.406Z · LW(p) · GW(p)
Isn't it easy to detect regexes in model outputs and rejection sample lines that contain regexes? This requires some custom sampling code if you want optimal latency/throughput, but the SAEs also require that.
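(For concreteness, a minimal sketch of this kind of rejection-sampling baseline, assuming a cheap string-match check for regex usage; `generate` is a placeholder for whatever sampling call is actually used, and in practice you might need to check blocks rather than lines, per the replies below.)

```python
# Hypothetical rejection-sampling baseline: resample until the completion passes a
# cheap string-match check for regex usage. `generate` stands in for the real sampling
# call; a production version would want streaming / line- or block-level checks.
REGEX_MARKERS = ("import re", "re.compile(", "re.match(", "re.search(", "re.findall(")

def uses_regex(code: str) -> bool:
    return any(marker in code for marker in REGEX_MARKERS)

def sample_without_regex(generate, prompt: str, max_attempts: int = 10) -> str:
    completion = generate(prompt)
    for _ in range(max_attempts - 1):
        if not uses_regex(completion):
            break
        completion = generate(prompt)
    return completion  # may still contain a regex if all attempts failed
```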
Replies from: samuel-marks, arthur-conmy, faul_sname
↑ comment by Sam Marks (samuel-marks) · 2024-12-16T21:00:27.886Z · LW(p) · GW(p)
I'm guessing you'd need to rejection sample entire blocks, not just lines. But yeah, good point, I'm also curious about this. Maybe the proportion of responses that use regexes is too large for rejection sampling to work? @Adam Karvonen [LW · GW]
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2024-12-16T21:01:53.305Z · LW(p) · GW(p)
@Adam Karvonen [LW · GW] I feel like you guys should test this unless there's a practical reason that it wouldn't work for Benchify (aside from "they don't feel like trying any more stuff because the SAE stuff is already working fine for them").
Replies from: karvonenadam, Buck
↑ comment by Adam Karvonen (karvonenadam) · 2024-12-17T18:31:07.795Z · LW(p) · GW(p)
Rejection sampling is a strong baseline that we hadn’t considered, and it’s definitely worth trying out—I suspect it will perform well here. Currently, our focus is on identifying additional in-the-wild tasks, particularly from other companies, as many of Benchify’s challenges involve sensitive details about their internal tooling that they prefer to keep private. We’re especially interested in tasks where it’s not possible to automatically measure success or failure via string matching, as this is where techniques like model steering are most likely to be the most practical.
I also agree with Sam that rejection sampling would likely need to operate on entire blocks rather than individual lines. By the time an LLM generates a line containing a regular expression, it’s often already committed to that path—for example, it might have skipped importing required modules or creating the necessary variables to pursue an alternative solution.
↑ comment by Buck · 2024-12-17T16:44:46.489Z · LW(p) · GW(p)
I’m curious how they set up the SAE stuff; I’d have thought that this would require modifying some performance-critical inference code in a tricky way.
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2024-12-17T18:01:36.846Z · LW(p) · GW(p)
The entrypoint to their sampling code is here. It looks like they just add a forward hook to the model that computes activations for specified features and shifts model activations along SAE decoder directions a corresponding amount. (Note that this is cheaper than autoencoding the full activation. Though for all I know, running the full autoencoder during the forward pass might have been fine also, given that they're working with small models and adding a handful of SAE calls to a forward pass shouldn't be too big a hit.)
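(For readers who want the gist without digging through the repo, here's a minimal sketch of what a hook like this could look like. It is not their code; `W_enc`, `b_enc`, `W_dec`, `b_dec`, `feature_idxs`, and the layer choice are placeholders, and I'm assuming a standard ReLU SAE parameterization.)

```python
# Sketch of conditional SAE steering via a PyTorch forward hook (illustrative only).
# Only the selected features' activations are computed, so the full autoencoder is
# never run; where a feature doesn't fire, the ReLU zeroes it out and this is a no-op.
import torch

def make_steering_hook(W_enc, b_enc, W_dec, b_dec, feature_idxs, scale=1.0):
    enc_dirs = W_enc[:, feature_idxs]   # (d_model, n_feats) encoder columns
    enc_bias = b_enc[feature_idxs]      # (n_feats,)
    dec_dirs = W_dec[feature_idxs, :]   # (n_feats, d_model) decoder rows

    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        # Feature activations for just the selected latents.
        acts = torch.relu((resid - b_dec) @ enc_dirs + enc_bias)  # (batch, seq, n_feats)
        # Shift along the decoder directions by (a multiple of) the activation amount.
        steered = resid - scale * acts @ dec_dirs
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

    return hook

# Hypothetical usage on a HuggingFace model's residual stream:
# handle = model.model.layers[12].register_forward_hook(
#     make_steering_hook(W_enc, b_enc, W_dec, b_dec, feature_idxs=[4242])
# )
```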
Replies from: Buck, karvonenadam
↑ comment by Adam Karvonen (karvonenadam) · 2024-12-17T18:41:14.771Z · LW(p) · GW(p)
The forward hook for our best performing approach is here. As Sam mentioned, this hasn’t been deployed to production. We left it as a case study because Benchify is currently prioritizing other parts of their stack unrelated to ML.
For this demonstration, we added a forward hook to a HuggingFace Transformers model for simplicity, rather than incorporating it into a production inference stack.
↑ comment by Arthur Conmy (arthur-conmy) · 2024-12-18T02:25:57.597Z · LW(p) · GW(p)
I suggested something similar, and this was the discussion (bolding is the important author pushback):
Replies from: Buck
Arthur Conmy
11:33 1 Dec
Why can't the YC company not use system prompts and instead:
1) Detect whether regex has been used in the last ~100 tokens (and run this check every ~100 tokens of model output)
2) If yes, rewind back ~100 tokens, insert a comment like # Don't use regex here (in a valid way given what code has been written so far), and continue the generation
Dhruv Pai
10:50 2 Dec
This seems like a reasonable baseline with the caveat that it requires expensive resampling and inserting such a comment in a useful way is difficult.
When we ran baselines simply repeating the number of times we told the model not to use regex right before generation in the system prompt, we didn't see the instruction following improve (very circumstantial evidence). I don't see a principled reason why this would be much worse than the above, however, since we do one-shot generation with such a comment right before the actual generation.
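(For concreteness, a rough sketch of the detect-and-rewind idea from the quoted exchange, with the caveats Dhruv raises still very much applying; `generate_chunk` is a placeholder for the model's sampling call, and inserting the hint at a syntactically valid spot is hand-waved.)

```python
# Rough sketch of detect-and-rewind sampling (illustrative only). Each ~100-token
# chunk is checked for regex usage; offending chunks are discarded and a hint comment
# is injected before continuing generation.
def generate_with_backtracking(generate_chunk, prompt: str,
                               max_chunks: int = 20, max_rewinds: int = 5) -> str:
    # generate_chunk(text) stands in for sampling the next ~100 tokens of code
    # given the text so far, returned as a string.
    text, rewinds = prompt, 0
    for _ in range(max_chunks):
        chunk = generate_chunk(text)
        if "import re" in chunk and rewinds < max_rewinds:
            rewinds += 1
            text += "\n# Don't use regex here\n"  # crude hint injection
        else:
            text += chunk
    return text
```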
↑ comment by faul_sname · 2024-12-17T18:54:39.386Z · LW(p) · GW(p)
If you have a bunch of things like this, rather than just one or two, I bet rejection sampling gets expensive pretty fast - if you have one constraint which the model fails 10% of the time, dropping that failure rate to 1% brings you from 1.11 attempts per success to 1.01 attempts per success, but if you have 20 such constraints that brings you from 8.2 attempts per success to 1.2 attempts per success.
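(Those numbers come from treating the constraints as independent: expected attempts per success is 1/(1-p)^k for k constraints each failed with probability p.)

```python
# Expected attempts per success with k independent constraints, each failed w.p. p.
def expected_attempts(p: float, k: int) -> float:
    return 1 / (1 - p) ** k

print(expected_attempts(0.10, 1), expected_attempts(0.01, 1))    # ~1.11 -> ~1.01
print(expected_attempts(0.10, 20), expected_attempts(0.01, 20))  # ~8.2  -> ~1.2
```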
Early detection of constraint violation plus substantial infrastructure around supporting backtracking might be an even cheaper and more effective solution, though at the cost of much higher complexity.
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2024-12-17T18:59:35.861Z · LW(p) · GW(p)
Based on the blog post, it seems like they had a system prompt that worked well enough for all of the constraints except for regexes (even though modifying the prompt to fix the regexes thing resulted in the model starting to ignore the other constraints). So it seems like the goal here was to do some custom thing to fix just the regexes (without otherwise impeding the model's performance, including performance at following the other constraints).
(Note that using SAEs to fix lots of behaviors might also have additional downsides, since you're doing a more heavy-handed intervention on the model.)
↑ comment by sarahconstantin · 2024-12-16T18:16:13.010Z · LW(p) · GW(p)
why wouldn't you want regexes?
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2024-12-16T20:56:10.278Z · LW(p) · GW(p)
Apparently fuzz tests that used regexes were an issue in practice for Benchify (the company that ran into this problem). From the blog post:
Benchify observed that the model was much more likely to generate a test with no false positives when using string methods instead of regexes, even if the test coverage wasn't as extensive.
comment by Sam Marks (samuel-marks) · 2023-11-12T17:57:46.213Z · LW(p) · GW(p)
Imagine Alice is an environmentalist who is making an argument to Bob about the importance of preventing deforestation. Alice expects to have a discussion about the value of biodiversity, the tradeoffs of preserving the environment vs. economic productivity, that sort of stuff.
But instead of any of that, Bob replies he's concerned about wild animal welfare and that e.g. the Amazon Rainforest is a vast cesspit of animal suffering. Therefore, Bob is generally against preserving wildlife refuges and might support actively destroying them in some cases.
I think this experience is probably very disorienting to Alice. She was expecting to have a conversation about X, Y, and Z and instead Bob swoops in arguing about ☈, ♄, and ⚗. When I've been in the Alice role in similar sorts of conversations, I've felt things like:
- Skepticism that Bob is stating his true reasons for his position
- Annoyance that Bob is sidetracking the conversation instead of engaging with the core arguments
- Disappointment that I didn't get to make my case and see my argument (which I think is persuasive) land
I think all of these reactions are bad and unproductive (e.g. Bob isn't sidetracking the conversation; the conversation just didn't go according to my expectations). But they're also extremely natural -- I think it takes a lot of epistemic discipline to tamp down on these reactions, reorient to the conversation you're actually having, and check whether you still stand by your old views.
---
I think proponents of open source, when they talk to AI safety folks, often find themselves in Alice's position. They are expecting a discussion about the merits of openness, the risks of centralization, the harms of regulatory capture, etc. "But bioweapons" Bob responds. If Alice has never thought about this point before, it'll probably feel like it came totally out of left field, and she'll have reactions similar to the ones I described above (e.g. skepticism that Bob is stating his true reasons).
(And note that this might be hard for Bob to notice! For Bob, the "core argument" here has always been about bioweapons and other considerations around offense/defense balance for existential threats. He might be confused/annoyed that Alice wants to talk about the merits of openness.)
What should Bob do here? I'm not really sure, but one idea is: to the extent that Bob can honestly say he agrees with Alice on what Alice views as being the "core issues," he should start the conversation out by making that clear. E.g. if Bob is sympathetic to the general principles underlying Alice's view, he could say so: "open source software has generally been great for the world, and I would love for there to be a proposal for open source AI that I could get behind." Once that agreement is established, he could then move on to explaining why he thinks there are other considerations "outside of the scope of Alice's argument" which he feels are more compelling.
Replies from: bruce-lewis
↑ comment by Bruce Lewis (bruce-lewis) · 2023-11-13T16:24:02.481Z · LW(p) · GW(p)
I think Bob should be even more direct about what's happening. "I know most of the people who disagree with you on this are thinking of X, Y, and Z. My reasons are different. My opinions on X, Y and Z are largely similar to yours. But I'm concerned about ☈, ♄, and ⚗." I think this approach would do even more than the idea in your last paragraph to make the surprise less jarring for Alice.
comment by Sam Marks (samuel-marks) · 2022-04-13T21:38:27.209Z · LW(p) · GW(p)
Counterarguing johnswentworth on RL from human feedback
johnswentworth recently wrote [LW · GW] that "RL from human feedback is actively harmful to avoiding AI doom." Piecing together things from his comments [LW(p) · GW(p)] elsewhere [LW(p) · GW(p)], my best guess at his argument is: "RL from human feedback only trains AI systems to do things which look good to their human reviewers. Thus if researchers rely on this technique, they might be misled into confidently thinking that alignment is solved/not a big problem (since our misaligned systems are concealing their misalignment). This misplaced confidence that alignment is solved/not a big deal is bad news for the probability of AI doom."
I disagree with (this conception of) John's argument; here are two counterarguments:
- Whether "deceive humans" is the strategy RL agents actually learn seems like it should rely on empirical facts. John's argument relies on the claim that AI systems trained with human feedback will probably learn to deceive their reviewers (rather than actually do a good job on the task). This seems like an empirical claim that ought to rely on facts about:
(i) the relative difficulties of (a) deceiving humans, (b) correctly performing the task, and (c) evaluating proposed solutions to the task;
(ii) how the AI system generalizes from training data (which in turn depends on the internal structure of the system).
John seems to think that intelligent systems will have a much easier time with (a) than with (b), given our competence at (c); in fact so much easier that questions of inductive bias (ii) aren't relevant. This doesn't seem quite so overdetermined to me.[1] John and Paul Christiano discussed the above point in this thread [LW(p) · GW(p)]; they did quite an impressive job whittling their disagreement down to the crux [LW(p) · GW(p)] "evaluation isn't easier than generation, and that claim is true regardless of how good you are at evaluation until you get basically perfect at it."
- RL from human feedback might be only one ingredient in a solution to alignment. (I haven't seen this discussed elsewhere.) I don't think many people expect RL from human feedback to solve alignment by itself -- in particular, it only works for tasks that we are able to evaluate, which don't include all tasks that we'd want an AGI to do. One hope is that we could use RL from human feedback (or other ML techniques) to build AI assistants which make it possible to evaluate more tasks and with higher accuracy; in other words, it might be the case that our ability to evaluate proposed solutions actually scales with our AI capabilities.[2] Another possible hope is that RL from human feedback + some sort of interpretability could help make sure that our RL agents aren't pursuing strategies that look like deception (and then hope that our interpretability is good enough that "doesn't look like deception" is a good proxy for "not actually deceptive"). There are probably also other things I haven't thought of which you can combine RL from human feedback with. In any case, it might be worthwhile to work on RL from human feedback even if it can't solve alignment by itself, provided you believe it might be moving us closer to the goal, with the remaining work being done by other alignment techniques.
Both of these above counterarguments assume that things go better than the worst case. (For example, RL from human feedback really is completely useless if you're worried that your RL agent might be a deceptively aligned mesa-optimiser.) But it can still be worthwhile to work on alignment approaches which only help in non-worst-case worlds.
- ^
My main intuition here is that AI systems will try to deceive us long before they're good at it; we'll catch these initial attempts and give them negative feedback. Then it seems plausible to me that they'll just learn to generalize correctly from there (and also plausible that they won't). In other words, it's worth trying to align AI systems this way in case alignment is easier than we expected due to good luck with inductive biases.
- ^
Of course, you need your assistants to be aligned, but you could optimistically hope that you can bootstrap up from tasks which are so easy for you to evaluate that your AIs trained from human feedback are very likely to be aligned.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-19T12:46:19.789Z · LW(p) · GW(p)
(For example, RL from human feedback really is completely useless if you're worried that your RL agent might be a deceptively aligned mesa-optimiser.)
I think this is mostly what we are worried about though? RL agents are mesa-optimizers already, or if they aren't, they eventually will be if they are smart/capable enough. Deceptive alignment is the main thing we all worry about, though not the only thing. Without deception, if we are careful we can hopefully notice misalignment and course-correct, and/or use our AIs to do useful alignment work for us.
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2022-04-19T19:16:14.801Z · LW(p) · GW(p)
(Meta: after I made this post, I realized that what I wrote was a little confusing on this point (because I was a little confused on this point). I've been hoping someone would call me out on it to give me an excuse to spend time writing up a clarification. So thanks!)
So, on my understanding, there are two distinct ways you can get deceptive behavior out of an AI system:
(1) You could have trained it with a mis-specified objective function. Then your AI might start Goodharting its given reward; if this reward function was learned via human feedback, this means doing things that seem good to humans, but might not actually be good. This deceptive behavior could even arise purely "by accident," that is, without the AI being able to grasp deception or even having a world model that includes humans. My favorite example is the one mentioned in the challenges section here -- a simulated robotic hand was trained to grasp a ball, but it instead learned to appear to grasp the ball.
(2) Even if you have perfectly specified your objective function, your model might be a deceptively-aligned mesa-optimizer with a completely unrelated mesa-objective. (Aside: this is what makes mesa-optimizers terrifying to me -- they imply that even if we were able to perfectly specify human values, we might still all die because the algorithms we trained to maximize human values ended up finding mis-aligned mesa-optimizers instead.)
In other words, (2) is the inner alignment failure and if you're worried about it you think hard about the probability of mesa-optimizers arising; (1) is the outer alignment failure and if you're worried about it I guess you argue a lot about air conditioners.
I'm pretty sure John was worried about (1), because if he were worried about (2) he would have said that all outer alignment research, not just RL from human feedback, is actively harmful to avoiding AI doom. (And FWIW this stronger claim seems super wrong to me, and I expect it also seems wrong to John and most other people.)
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-20T15:11:57.207Z · LW(p) · GW(p)
Thanks for the explanation (upvoted). I don't really understand it though, it seems like a straw man. At any rate I'm not now interested in exegesis on John, I want to think about the arguments and claims in their own right.
What would you say is the main benefit from the RL from Human Feedback research so far? What would have happened if the authors had instead worked on a different project?
Replies from: samuel-marks, paulfchristiano
↑ comment by Sam Marks (samuel-marks) · 2022-04-21T20:50:19.593Z · LW(p) · GW(p)
What would you say is the main benefit from the RL from Human Feedback research so far? What would have happened if the authors had instead worked on a different project?
I feel like these questions are a little tricky to answer, so instead I'll attempt to answer the questions "What is the case for RL from human feedback (RLFHF) helping with alignment?" and "What have we learned from RLFHF research so far?"
What is the case for RLFHF helping with alignment?
(The answer will mainly be me repeating the stuff I said in my OP, but at more length.)
The most naive case for RLFHF is "you train some RL agent to assist you, giving it positive feedback when it does stuff you like and negative feedback for stuff you don't like. Eventually it learns to model your preferences well and is able to only do stuff you like."
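(The standard implementation of this, e.g. in the backflip-from-human-preferences work I mention below, is to fit a reward model to pairwise human comparisons and then have the RL agent optimize the learned reward instead of querying the human at every step. A minimal sketch of the reward-modelling piece, with all names illustrative:)

```python
# Minimal sketch of the reward-model step of RL from human feedback (illustrative,
# not any particular paper's code): fit r_theta to pairwise human comparisons, then
# train the policy to maximize r_theta instead of asking the human every step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # scalar reward per input

def preference_loss(rm: RewardModel, preferred: torch.Tensor, rejected: torch.Tensor):
    # Bradley-Terry: P(preferred beats rejected) = sigmoid(r(preferred) - r(rejected)).
    return -F.logsigmoid(rm(preferred) - rm(rejected)).mean()
```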
The immediate objections that come to mind are:
(1) The RL agent is really learning to do stuff that lead you to giving it positive feedback (which is an imperfect proxy for "stuff you like.") Won't this lead to the RL agent manipulating us/replacing us with robots that always report they're happy/otherwise Goodharting their reward function?
(2) This can only train an RL agent to do tasks that we can evaluate. What about tasks we can't evaluate? For example, if you tell your RL agent to write a macroeconomic policy proposal, we might not be able to give it feedback on whether its proposal is good or not (because we're not smart enough to evaluate macroeconomic policy), which sinks the entire RLFHF method.
(3) A bunch of other less central concerns that I'll relegate to footnotes.[1][2][3]
My response to objection (1) is ... well at this point I'm really getting into "repeat myself from the OP" territory. Basically, I think this is a valid objection, but
(a) if the RL agent's reward model is very accurate, it's not obviously true that the easiest way for it to optimize for its reward is to do deceptive/Goodhart-y stuff; this feels like it should rely on empirical facts like the ones I mentioned in the OP.
(b) even if the naive approach doesn't work because of this objection, we might be able to do other stuff on top of RLFHF (e.g. interpretability, something else we haven't thought of yet) to penalize Goodhart-y behavior or prevent it from arising in the first place.
The obvious counterargument here is "Look, Sam, you clearly are just not appreciating how much smarter than you a superintelligence will be. Inevitably there will be some way to Goodhart the reward function to get more reward than 'just do what we want' would give, and no technique you come up with of trying to penalize this behavior will stop the AI from finding and exploiting this strategy." To which I have further responses, but I think I'll resist going further down the conversational tree.
Objection (2) above is a good one, but seems potentially surmountable to me. Namely, it seems that there might be ways to use AI to improve our ability to evaluate things. The simplest form of this is recursive reward modelling: suppose you want to use RLFHF to train an AI to do task X but task X is difficult/expensive to evaluate; instead you break "evaluate X" into a bunch of easy-to-evaluate subtasks, and train RL agents to help with those; now you're able to more cheaply evaluate X.
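(A schematic sketch of that recursion, in pseudocode-flavored Python; `human` and `helper_for` are placeholders, and the point is just that the helpers assisting evaluation are themselves RLFHF-trained on easier subtasks, so evaluation ability can scale with capabilities.)

```python
# Schematic sketch of recursive reward modelling (illustrative pseudocode in Python).
# The helpers are themselves models trained with RLFHF on the easier subtasks.
def evaluate(task, solution, human, helper_for):
    if human.can_evaluate_directly(task):
        return human.score(task, solution)
    # Break "evaluate this solution" into subtasks that are easier to evaluate...
    subtasks = human.decompose_evaluation(task, solution)
    # ...have helper models (each trained with RLFHF on its subtask) do them...
    reports = [helper_for(sub).run(sub) for sub in subtasks]
    # ...and let the human make the final judgment with that assistance.
    return human.score_with_assistance(task, solution, reports)
```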
In summary, the story about how RLFHF helps with alignment is "if we're very lucky, naive RLFHF might produce aligned agents; if we're less lucky, RLFHF + another alignment technique might still suffice."
What have we learned from RLFHF research so far?
Here's some stuff that I'm aware of; probably there's a bunch of takeaways that I'm not aware of yet.
(1) The recursive book summarization work (a follow-up to Learning to Summarize from Human Feedback) didn't do recursive reward modelling as I described it above, but it did a close cousin: instead of breaking "evaluate X" up into subtasks it broke the original task X up into a bunch of subtasks which were easier to evaluate. In this case X = "summarize a book" and the subtasks were "summarize small chunks of text." I'm not sure how to feel about the result -- the summaries were merely okay. But if you believe RLFHF could be useful as one ingredient in alignment, then further research on whether you could get this to work would seem valuable to me.
(2) Redwood's original research project [AF · GW] used RLFHF (at least, I think so[4]) to train an RL agent to generate text completions in which no human was portrayed as being injured. [EDIT: since I wrote this comment Redwood's report came out. It doesn't look like they did the RLHF part? Rather it seems like they just did the classifier part, and generated non-injurious text completions by generating a bunch of completions and filtering out the injurious ones.] Their goal was to make the RL agent very rarely (like 10^-30 of the time) generate injurious completions. I heard through the grapevine that they were not able to get such a low error rate, which is some evidence that ... something? That modelling the way humans classify things with ML is hard? That distributional shift is a big deal? I'm not sure, but whatever it is it's probably weak evidence against the usefulness of RLFHF.
(3) On the other hand, some of the original work showed that RLFHF seems to have really good sample efficiency, e.g. the agent at the top of this page learned to do a backflip with just 900 bits of human feedback. That seems good to know, and makes me think that if value learning is going to happen at all, it will happen via RLFHF.
From your original question, it seems like what you really want to know is "how does the usefulness of this research compare to the usefulness of other alignment research?" Probably that largely depends on whether you believe the basic story for how RLFHF could be useful (as well as how valuable you think other threads of alignment research are).
- ^
Q: When we first turn on the RL agent -- when it hasn't yet received much human feedback and therefore has a very inaccurate model of human preferences -- won't the agent potentially do lots of really bad things? A: Yeah, this seems plausible, but it might not be an insurmountable challenge. For instance, we could pre-train the agent's reward model from a bunch of training runs controlled by a human operator or a less intelligent RL agent. Or maybe the people who are studying safe exploration will come up with something useful here.
- ^
Q: What about robustness to distributional shift? That is, even if our RL agent learns a good model of human preferences under ordinary circumstances, its model might be trash once things start to get weird, e.g. once we start colonizing space. A: One thing about RLFHF is that you generally shouldn't take the reward model offline, i.e. you should always continue giving the RL agent some amount of feedback on which the reward model continuously trains. So maybe if things get continuously weirder then our RL agents' model of human preferences will continuously learn and we'll be fine? Otherwise, I mainly want to ignore robustness to distributional shift because it's an issue shared by all potential outer alignment solutions that I know of. No matter what approach to alignment you take, you need to hope that either someone else solves this issue or that it ends up not being a big deal for some reason.
- ^
What about mesa-optimizers? Like in footnote 2, this is an issue for every potential alignment solution, and I'm mainly hoping that either someone solves it or it ends up not being a big deal.
- ^
Their write up of the project, consisting of step 1 (train a classifier for text that portrays injury to humans) and step 2 (use the classifier to get an RL agent to generate non-injurious text completions), makes it sound like they stop training the classifier once they start training the RL agent. This is like doing RLFHF where you take the reward model offline, which on my understanding tends to produce bad results. So I'm guessing that actually they never took the classifier offline, in which case what they did is just vanilla RLFHF.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-22T00:18:34.267Z · LW(p) · GW(p)
Thanks for the detailed answer, I am sheepish to have prompted so much effort on your part!
I guess what I was and am thinking was something like "Of course we'll be using human feedback in our reward signal. Big AI companies will do this by default. Obviously they'll train it to do what they want it to do and not what they don't want it to do. The reason we are worried about AI risk is because we think that this won't be enough."
To which someone might respond "But still it's good to practice doing it now. The experience might come in handy later when we are trying to align really powerful systems."
To which I might respond "OK, but I feel like it's a better use of our limited research time to try to anticipate ways in which RL from human feedback could turn out to be insufficient and then do research aimed at overcoming those ways. E.g. think about inner alignment problems, think about it possibly learning to do what makes us give positive feedback rather than what we actually want, etc. Let the capabilities researchers figure out how to do RL from human feedback, since they need to figure that out anyway on the path to deploying the products they are building. Safety researchers should focus on solving the problems that we anticipate RLHF doesn't solve by itself."
I don't actually think this, because I haven't thought about this much, so I'm uncertain and mostly deferring to other's judgment. But I'd be interested to hear your thoughts! (You've written so much already, no need to actually reply)
Replies from: samuel-marks
↑ comment by Sam Marks (samuel-marks) · 2022-04-25T21:11:04.780Z · LW(p) · GW(p)
Ah cool, I see -- your concern is that RLHF is perhaps better left to the capabilities people, freeing up AI safety researchers to work on more neglected approaches.
That seems right to me, and I agree with it as a general heuristic! Some caveats:
- I'm a random person who's been learning a lot about this stuff lately, definitely not an active researcher. So my opinions about heuristics for what to work on probably aren't worth much.
- If you think RLHF research could be very impactful for alignment, that could make up for it being less neglected than other areas.
- Distinctive approaches to RLHF (like Redwood's attempts to get their reward model's error extremely low) might be the sorts of things that capabilities people wouldn't try.
Finally, as a historical note, it's hard to believe that a decade ago the state of alignment was like "holy shit, how could we possibly hard-code human values into a reward function this is gonna be impossible." The fact that now we're like "obviously big AI will, by default, build their AGIs with something like RLHF" is progress! And Paul's comment elsethread is heartwarming -- it implies that AI safety researchers helped accelerate the adoption of this safer-looking paradigm. In other words, if you believe RLHF helps improve our odds, then contra some recent pessimistic takes, you believe that progress is being made :)
↑ comment by paulfchristiano · 2022-04-21T23:51:01.261Z · LW(p) · GW(p)
We are moving rapidly from a world where people deploy manifestly unaligned models (where even talking about alignment barely makes sense) to people deploying models which are misaligned because (i) humans make mistakes in evaluation, (ii) there are high-stakes decisions so we can't rely on average-case performance.
This seems like a good thing to do if you want to move on to research addressing the problems in RLHF: (i) improving the quality of the evaluations (e.g. by using AI assistance), and (ii) handling high-stakes objective misgeneralization (e.g. by adversarial training).
In addition to "doing the basic thing before the more complicated thing intended to address its failures," it's also the case that RLHF is a building block in the more complicated things.
I think that (a) there is a good chance that these boring approaches will work well enough to buy (a significant amount of) time for humans or superhuman AIs to make progress on alignment research or coordination, (b) when they fail, there is a good chance that their failures can be productively studied and addressed.
Overall it seems to me like the story here is reasonably good and has worked out reasonably well in practice. I think RLHF is being adopted more quickly than it otherwise would, and plenty of follow-up work is being done. I think many people in labs have a better understanding of what the remaining problems in alignment are; as a result they are significantly more likely to work productively on those problems themselves or to recognize and adopt solutions from elsewhere.
Replies from: daniel-kokotajlo, habryka4
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-22T19:46:27.284Z · LW(p) · GW(p)
OK, thanks. I'm new to this debate, I take it I'm wandering in to a discussion that may already have been had to death.
I guess I'm worried that RLHF should basically be thought of as capabilities research instead of alignment/safety research. The rationale for this would be: Big companies will do RLHF before the end by default, since their products will embarrass them otherwise. By doing RLHF now and promoting it we help these companies get products to market sooner & free up their time to focus on other capabilities research.
I agree with your claims (a) and (b) but I don't think they undermine this skeptical take, because I think that if RLHF fails the failures will be different for really powerful systems than for dumb systems.
Replies from: not-relevant
↑ comment by Not Relevant (not-relevant) · 2022-04-26T21:03:53.153Z · LW(p) · GW(p)
I think it'd be useful if you spelled out those failures you think will occur in powerful systems, that won't occur in any intermediate system (assuming some degree of slowness sufficient to allow real world deployment of not-yet-AGI agentic models).
For example, deception: lots of parts of the animal kingdom understand the concept of "hiding" or "lying in wait to strike", I think? It already showed up in XLand IIRC. Imagine a chatbot trying to make a sale - avoiding problematic details of the product it's selling seems like a dominant strategy.
There are definitely scarier failure modes that show up in even-more-powerful systems (e.g. actual honest-to-goodness long-term pretending to be harmless in order to end up in situations with more resources, which will never be caught with RLHF), and I agree pure alignment researchers should be focusing on those. But the suggestion that picking the low-hanging fruit won't build momentum for working on the hardest problems does seem wrong to me.
As another example, consider the Beijing Academy of AI's government-academia-industry LLM partnership. When their LLMs fail to do what they want, they'll try RLHF - and it'll kind of work, but then it'll fail in a bunch of situations. They'll be forced to confront the fact that actually, objective robustness is a real thing, and start funding research/taking proto-alignment research way more seriously/as being on the critical path to useful models. Wouldn't it be great if there were a whole literature waiting for them on all the other things that empirically go wrong with RLHF, up to and including genuine inner misalignment concerns, once they get there?
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-27T21:39:48.016Z · LW(p) · GW(p)
Thanks! I take the point about animals and deception.
Wouldn't it be great if there were a whole literature waiting for them on all the other things that empirically go wrong with RLHF, up to and including genuine inner misalignment concerns, once they get there?
Insofar as the pitch for RLHF is "Yes tech companies are going to do this anyway, but if we do it first then we can gain prestige, people will cite us, etc. and so people will turn to us for advice on the subject later, and then we'll be able to warn them of the dangers" then actually that makes a lot of sense to me, thanks. I still worry that the effect size might be too small to be worth it, but idk.
I don't think that there are failures that will occur in powerful systems that won't occur in any intermediate system. However I'm skeptical that the failures that will occur in powerful systems will also occur in today's systems. I must say I'm super uncertain about all of this and haven't thought about it very much.
With that preamble aside, here is some wild speculation:
--Current systems (hopefully?) aren't reasoning strategically about how to achieve goals & then executing on that reasoning. (You can via prompting get GPT-3 to reason strategically about how to achieve goals... but as far as we know it isn't doing reasoning like that internally when choosing what tokens to output. Hopefully.) So, the classic worry of "the AI will realize that it needs to play nice in training so that it can do a treacherous turn later in deployment" just doesn't apply to current systems. (Hopefully.) So if we see e.g. our current GPT-3 chatbot being deceptive about a product it is selling, we can happily train it to not do that and probably it'll just genuinely learn to be more honest. But if it had strategic awareness and goal-directedness, it would instead learn to be less honest; it would learn to conceal its true intentions from its overseers.
--As humans grow up and learn more and (in some cases) do philosophy they undergo major shifts in how they view the world. This often causes them to change their minds about things they previously learned. For example, maybe at some point they learned to go to church because that's what good people do because that's what God says; later on they stop believing in God and stop going to church. And then later still they do some philosophy and adopt some weird ethical theory like utilitarianism and their behavior changes accordingly. Well, what if AIs undergo similar ontological shifts as they get smarter? Then maybe the stuff that works at one level of intelligence will stop working at another. (e.g. telling a kid that God is watching them and He says they should go to church stops working. Later when they become a utilitarian, telling them that killing civilians is murder and murder is wrong stops working too (if they are in a circumstance where the utilitarian calculus says civilian casualties are worth it for the greater good)).
Replies from: not-relevant
↑ comment by Not Relevant (not-relevant) · 2022-04-27T23:42:07.548Z · LW(p) · GW(p)
I agree that "concealing intentions from overseers" might be a fairly late-game property, but it's not totally obvious to me that it doesn't become a problem sooner. If a chatbot realizes it's dealing with a disagreeable person and therefore that it's more likely to be inspected, and thus hews closer to what it thinks the true objective might be, the difference in behaviors should be pretty noticeable.
Re: ontology mismatch, this seems super likely to happen at lower levels of intelligence. E.g. I'd bet this even sometimes occurs in today's model-based RL, as it's trained for long enough that its world model changes. If we don't come up with strategies for dealing with this dynamically, we aren't going to be able to build anything with a world model that improves over time. Maybe that only happens too close to FOOM, but if you believe in a gradual-ish takeoff it seems plausible to have vanilla model-based RL work decently well before.
↑ comment by habryka (habryka4) · 2022-05-07T23:53:45.513Z · LW(p) · GW(p)
We are moving rapidly from a world where people deploy manifestly unaligned models (where even talking about alignment barely makes sense) to people deploying models which are misaligned because (i) humans make mistakes in evaluation, (ii) there are high-stakes decisions so we can't rely on average-case performance.
What it feels like to me is that we are rapidly moving from a world where people deploy manifestly unaligned models to people deploying models which are still manifestly unaligned (where even talking about alignment barely makes sense), but which are getting differentially good at human modeling and deception (and maybe at supervising other AIs, which is where the hope comes from).
I don't think the models are misaligned because humans are making mistakes in evaluation. The models are misaligned because we have made no progress at actually pointing towards anything like human values or other concepts like corrigibility or myopia.
In other words, models are mostly misaligned because there are strong instrumental convergent incentives towards agency, and we don't currently have any tools that allow us to shape the type of optimization that artificial systems are doing internally. Learning from human feedback seems if anything to be slightly more the kind of reward that incentivizes dangerous agency. This seems to fit neither into your (1) or (2).
Instruct-GPT is not more aligned than GPT-3. It is more capable at performing many tasks, and we have some hope that some of the tasks at which it is getting better might help with AI Alignment down the line, but right now, at the current state of the AI alignment field, the problem is not that we can't provide good enough evaluation, or that we can only get good "average-case" performance, it's that we have systems with random goals that are very far from human values or are capable of being reliably conservative.
And additionally to that, we now have a tool that allows any AI company to trivially train away any surface-level alignment problems, without addressing any of the actual underlying issues, creating a situation with very strong incentives towards learning human deception and manipulation, and a situation where obvious alignment failures are much less likely to surface.
My guess is you are trying to point towards a much more sophisticated and broader thing by your (2) than I interpret you as saying here, but the above is my response to my best interpretation of what you mean by (2).
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2022-05-08T16:16:23.944Z · LW(p) · GW(p)
In other words, models are mostly misaligned because there are strong instrumental convergent incentives towards agency, and we don't currently have any tools that allow us to shape the type of optimization that artificial systems are doing internally.
In the context of my comment, this appears to be an empirical claim about GPT-3. Is that right? (Otherwise I'm not sure what you are saying.)
If so, I don't think this is right. On typical inputs I don't think GPT-3 is instrumentally behaving well on the training distribution because it has a model of the data-generating process.
I think on distribution you are mostly getting good behavior mostly either by not optimizing, or by optimizing for something we want. I think to the extent it's malign it's because there are possible inputs on which it is optimizing for something you don't want, but those inputs are unlike those that appear in training and you have objective misgeneralization.
In that regime, I think the on-distribution performance is probably aligned and there is not much in-principle obstruction to using adversarial training to improve the robustness of alignment.
Instruct-GPT is not more aligned than GPT-3. It is more capable at performing many tasks, and we have some hope that some of the tasks at which it is getting better might help with AI Alignment down the line
Could you define the word "alignment" as you are using it?
I'm using roughly the definition here. I think it's the case that there are many inputs where GPT-3 is not trying to do what you want, but Instruct-GPT is. Indeed, I think Instruct-GPT is actually mostly trying to do what you want to the extent that it is trying to do anything at all. That would lead me to say it is more "aligned."
I agree there are subtleties like "If I ask instruct-gpt to summarize a story, is it trying to summarize the story? Or trying to use that as evidence about 'what Paul wants' and then do that?" And I agree there is a real sense in which it isn't smart enough for that distinction to be consistently meaningful, and so in that sense you might say my definition of intent alignment doesn't really apply. (I more often think about models being "benign" or "malign," more like asking: is it trying to optimize for something despite knowing that you wouldn't like it.) I don't think that's what you are talking about here though.
right now, at the current state of the AI alignment field, the problem is not that we can't provide good enough evaluation, or that we can only get good "average-case" performance, it's that we have systems with random goals that are very far from human values or are capable of being reliably conservative.
If you have good oversight, I think you probably get good average case alignment. That's ultimately an empirical claim about what happens when you do SGD, but the on-paper argument looks quite good (namely: on-distribution alignment would improve the on-distribution performance and seems easy for SGD to learn relative to the complexity of the model itself) and it appears to match the data so far to the extent we have relevant data.
You seem to be confidently stating it's false without engaging at all with the argument in favor or presenting or engaging with any empirical evidence.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2022-05-08T23:54:57.108Z · LW(p) · GW(p)
You seem to be confidently stating it's false without engaging at all with the argument in favor or presenting or engaging with any empirical evidence.
But which argument in favor did you present? You just said "the models are unaligned for these 2 reasons", when those reasons do not seem comprehensive to me, and you did not give any justification for why those two reasons are comprehensive (or provide any links).
I tried to give a number of specific alternative reasons that do not seem to be covered by either of your two cases, and included a statement that we might disagree on definitional grounds, but that I don't actually know what definitions you are using, and so can't be confident that my critique makes sense.
Now that you've provided a definition, I still think what I said holds. My guess is there is a large inferential distance here, so I don't think it makes sense to try to bridge that whole distance within this comment thread, though I will provide an additional round of responses.
If so, I don't think this is right. On typical inputs I don't think GPT-3 is instrumentally behaving well on the training distribution because it has a model of the data-generating process.
I don't think your definition of intent-alignment requires any unaligned system to have a model of the data-generating process, so I don't understand the relevance of this. GPT-3 is not unaligned because it has a model of the data-generating process, and I didn't claim that.
I did claim that neither GPT-3 nor Instruct-GPT are "trying to do what the operator wants it to do", according to your definition, and that the primary reason for that is that in as much as its training process did produce a model that has "goals" and so can be modeled in any consequentialist terms, those "goals" do not match up with trying to be helpful to the operator. Most likely, they are a pretty messy objective we don't really understand (which in the case of GPT-3 might be best described as "trying to generate text that in some simple latent space resembles the training distribution" and I don't have any short description of what the "goals" of Instruct-GPT might be, though my guess is they are still pretty close to GPT-3s goals).
Indeed, I think Instruct-GPT is actually mostly trying to do what you want to the extent that it is trying to do anything at all. That would lead me to say it is more "aligned."
I don't think we know what Instruct-GPT is "trying to do", and it seems unlikely to me that it is "trying to do what I want". I agree in some sense it is "more trying to do what I want", though not in a way that feels obviously very relevant to more capable systems, and not in a way that aligns very well with your intent definition (I feel like if I had to apply your linked definition to Instruct-GPT, I would say something like "ok, seems like it isn't intent aligned, since the system doesn't really seem to have much of an intent. And if there is a mechanism in its inner workings that corresponds to intent, we have no idea what thing it is pointed at, so probably it isn't pointed at the right thing").
And in either case, even if it is the case that if you squint your eyes a lot the system is "more aligned", this doesn't make the sentence "many of today's systems are aligned unless humans make mistakes in evaluation or are deployed in high-stakes environments" true. "More aligned" is not equal to "aligned".
The correct sentence seems to me "many of those systems are still mostly unaligned, but might be slightly more aligned than previous systems, though we have some hope that with better evaluation we can push that even further, and the misalignment problems are less bad on lower-stakes problems when we can rely on average-case performance, though overall the difference in alignment between GPT and Instruct-GPT is pretty unclear and probably not very large".
I think on distribution you are mostly getting good behavior mostly either by not optimizing, or by optimizing for something we want. I think to the extent it's malign it's because there are possible inputs on which it is optimizing for something you don't want, but those inputs are unlike those that appear in training and you have objective misgeneralization.
This seems wrong to me. On-distribution it seems to me that the system is usually optimizing for something that I don't want. For example, GPT-3 primarily is trying to generate text that represents the distribution that it's drawn from, which very rarely aligns with what I want (and is why prompt-engineering has such a large effect, e.g. "you are Albert Einstein" as a prefix improves performance on many tasks). Instruct-GPT does a bit better here, but probably most of its internal optimization power is still thrown at reasoning with the primary "intention" of generating text that is similar to its input distribution, since it seems unlikely that the fine-tuning completely rewrote most of these internal heuristics.
My guess is if Instruct-GPT was intent-aligned even for low-impact tasks, we could get it to be substantially more useful on many tasks. But my guess is what we currently have is mostly a model that is still primarily "trying" to generate text that is similar to its training distribution, with a few heuristics baked in in the human-feedback stage that make that text more likely to be a good fit for the question asked. In as much as the model is "trying to do something", i.e. what most of its internal optimization power is pointed at, I am very skeptical that that is aligned with my task.
(Similarly, looking at Redwood's recent model, it seems clear to me that they did not produce a model that "intends" to produce non-injurious completions. The model has two parts, one that is just "trying" to generate text similar to its training distribution, and a second part that is "trying" to detect whether a completion is injurious. This model seems clearly not intent-aligned, since almost none of its optimization power is going towards our target objective.)
If you have good oversight, I think you probably get good average case alignment. That's ultimately an empirical claim about what happens when you do SGD, but the on-paper argument looks quite good (namely: on-distribution alignment would improve the on-distribution performance and seems easy for SGD to learn relative to the complexity of the model itself) and it appears to match the data so far to the extent we have relevant data.
My guess is a lot of work is done here by the term "average case alignment", so I am not fully sure how to respond. I disagree that the on-paper argument looks quite good, though it depends a lot on how narrowly you define "on-distribution". Given my arguments above, you must either mean something different from intent-alignment (since to me at least it seems clear that Redwood's model is not intent-aligned), or disagree with me on whether systems like Redwood's are intent-aligned, in which case I don't really know how to consistently apply your intent-alignment definition.
I also feel particularly confused about the term "average case alignment", combined with "intent-alignment". I can ascribe goals at multiple different levels to a model, and my guess is we both agree that describing current systems as having intentions at all is kind of fraught, but in as much as a model has a coherent goal, it seems like that goal is pretty consistent between different prompts, and so I am confused why we should expect average case alignment to be very different from normal alignment. It seems that if I have a model that is trying to do something, then asking it multiple times, probably won't make a difference to its intention (I think, I mean, again, this all feels very handwavy, which is part of the reason why it feels so wrong to me to describe current models as "aligned").
Replies from: paulfchristiano↑ comment by paulfchristiano · 2022-05-09T00:41:30.750Z · LW(p) · GW(p)
I currently think that the main relevant similarities between Instruct-GPT and a model that is trying to kill you, are about errors of the overseer (i.e. bad outputs to which they would give a high reward) or high-stakes errors (i.e. bad outputs which can have catastrophic effects before they are corrected by fine-tuning).
I'm interested in other kinds of relevant similarities, since I think those would be exciting and productive things to research. I don't think the framework "Instruct-GPT and GPT-3 e.g. copy patterns that they saw in the prompt, so they are 'trying' to predict the next word and hence are misaligned" is super useful, though I see where it's coming from and agree that I started it by using the word "aligned".
Relatedly, and contrary to my original comment, I do agree that there can be bad intentional behavior left over from pre-training. This is a big part of what ML researchers are motivated by when they talk about improving the sample-efficiency of RLHF. I usually try to discourage people from working on this issue, because it seems like something that will predictably get better rather than worse as models improve (and I expect you are even less happy with it than I am).
I agree that there is a lot of inferential distance, and it doesn't seem worth trying to close the gap here. I've tried to write down a fair amount about my views, and I'm always interested to read arguments / evidence / intuitions for more pessimistic conclusions.
Similarly, looking at Redwood's recent model, it seems clear to me that they did not produce a model that "intends" to produce non-injurious completions.
I agree with this, though it's unrelated to the stated motivation for that project or to its relationship to long-term risk.
Replies from: habryka4↑ comment by habryka (habryka4) · 2022-05-09T00:54:14.029Z · LW(p) · GW(p)
I currently think that the main relevant similarities between Instruct-GPT and a model that is trying to kill you, are about errors of the overseer (i.e. bad outputs to which they would give a high reward) or high-stakes errors (i.e. bad outputs which can have catastrophic effects before they are corrected by fine-tuning).
Phrased this way, I still disagree, but I think I disagree less strongly, and feel less of a need to respond to this. I care particularly much about using terms like "aligned" in consistent ways. Importantly, having powerful intent-aligned systems is much more useful than having powerful systems that just fail to kill you (e.g. because they are very conservative), and so getting to powerful aligned systems is a win-condition in the way that getting to powerful non-catastrophic systems is not.
I agree with this, though it's unrelated to the stated motivation for that project or to its relationship to long-term risk.
Yep, I didn't intend to imply that this was in contrast to the intention of the research. It was just on my mind as a recent architecture that I was confident we both had thought about, and so could use as a convenient example.
comment by Sam Marks (samuel-marks) · 2024-02-06T06:56:29.601Z · LW(p) · GW(p)
Some updates about the dictionary_learning repo:
- The repo now has support for ghost grads. h/t g-w1 for submitting a PR for this.
- `ActivationBuffer`s now work natively with model components -- like the residual stream -- whose activations are typically returned as tuples; the buffer knows to take the first component of the tuple (and will iteratively do this if working with nested tuples).
- `ActivationBuffer`s can now be stored on the GPU.
- The file `evaluation.py` contains code for evaluating trained dictionaries. I've found this pretty useful for quickly evaluating dictionaries people send to me.
- New convenience: you can do `reconstructed_acts, features = dictionary(acts, output_features=True)` to get both the reconstruction and the features computed by `dictionary` (see the sketch below).

Also, if you'd like to train dictionaries for many model components in parallel, you can use the `parallel` branch. I don't promise to never make breaking changes to the `parallel` branch, sorry.
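Here's a minimal, self-contained sketch of what using that convenience looks like. To be clear, `ToyDictionary`, the tensor shapes, and the sanity checks below are illustrative stand-ins I made up rather than the repo's actual classes; the only piece taken from the list above is the `dictionary(acts, output_features=True)` call.

```python
import torch
import torch.nn as nn

class ToyDictionary(nn.Module):
    """Stand-in for a trained dictionary from the dictionary_learning repo.
    Only the call `dictionary(acts, output_features=True)` is taken from the
    note above; the architecture and shapes here are illustrative."""

    def __init__(self, activation_dim: int, dict_size: int):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, dict_size)
        self.decoder = nn.Linear(dict_size, activation_dim)

    def forward(self, acts, output_features: bool = False):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        reconstruction = self.decoder(features)    # reconstructed activations
        if output_features:
            return reconstruction, features
        return reconstruction

# Placeholder activations; in practice these would come from an ActivationBuffer.
acts = torch.randn(64, 512)  # (batch, activation_dim)
dictionary = ToyDictionary(activation_dim=512, dict_size=4096)

# The new convenience: get both the reconstruction and the features in one call.
reconstructed_acts, features = dictionary(acts, output_features=True)

# Quick sanity checks one might run when evaluating a dictionary.
mse = torch.mean((acts - reconstructed_acts) ** 2)
l0 = (features != 0).float().sum(dim=-1).mean()
print(f"MSE: {mse.item():.4f}, mean L0: {l0.item():.1f}")
```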
Finally, we've released a new set of dictionaries for the MLP outputs, attention outputs, and residual stream in all layers of Pythia-70m-deduped. The MLP and attention dictionaries seem pretty good, and the residual stream dictionaries seem like a mixed bag. Their stats can be found here.
comment by Sam Marks (samuel-marks) · 2023-02-08T20:44:47.861Z · LW(p) · GW(p)
Somewhat related to the SolidGoldMagikarp discussion, I thought some people might appreciate getting a sense of how unintuitive the geometry of token embeddings can be. Namely, it's worth noting that the tokens whose embeddings are most cosine-similar to a random vector in embedding space tend not to look very semantically similar to each other. Some examples:
| v_1 | v_2 | v_3 |
| --- | --- | --- |
| characterized | Columb | determines |
| Stra | 1900 | conserv |
| Ire | sher | distinguishes |
| sent | paed | emphasizes |
| Shelter | 000 | consists |
| Pil | mx | operates |
| stro | female | independent |
| wired | alt | operate |
| Kor | GW | encompasses |
| Maul | lvl | consisted |
Here v_1, v_2, v_3 are random vectors in embedding space (drawn from an isotropic Gaussian), and the columns give the 10 tokens whose embeddings are most cosine-similar to the corresponding v_i. I used GPT-2-large.
Perhaps 20% of the time, we get something like v_3, where many of the nearest neighbors have something semantically similar among them (in this case, being present tense verbs in the 3rd person singular).
But most of the time, we get things that look like v_1 or v_2: a hodgepodge with no obvious shared semantic content. GPT-2-large seems to agree: picking " female" and " alt" randomly from the v_2 column, the cosine similarity between the embeddings of these tokens is 0.06.
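In case anyone wants to reproduce this kind of experiment, here's a rough sketch of the computation using the transformers library. This is my own reconstruction, not the code linked at the end of this comment; in particular, the Gaussian sampling, the seed, and the helper functions are choices I made for illustration.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

# Load GPT-2-large and pull out its token embedding matrix (~50k x 1280).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2Model.from_pretrained("gpt2-large")
E = model.wte.weight.detach()  # (vocab_size, d_model)

def nearest_tokens(v: torch.Tensor, k: int = 10):
    """Return the k tokens whose embeddings are most cosine-similar to v."""
    sims = torch.nn.functional.cosine_similarity(E, v.unsqueeze(0), dim=-1)
    top = sims.topk(k)
    return [(tokenizer.decode([int(i)]), s.item()) for i, s in zip(top.indices, top.values)]

# A random direction in embedding space (here drawn from an isotropic Gaussian).
torch.manual_seed(0)
v = torch.randn(E.shape[1])
print(nearest_tokens(v))

# Cosine similarity between the embeddings of two specific tokens.
def token_cos_sim(tok_a: str, tok_b: str) -> float:
    id_a, id_b = tokenizer.encode(tok_a)[0], tokenizer.encode(tok_b)[0]
    return torch.nn.functional.cosine_similarity(E[id_a], E[id_b], dim=0).item()

print(token_cos_sim(" female", " alt"))
```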
[Epistemic status: I haven't thought that hard about this paragraph.] Thinking about the geometry here, I don't think any of this should be surprising. Given a random vector v, we should typically find that v is ~orthogonal to all of the ~50,000 token embeddings. Moreover, asking whether the nearest neighbors to v should be semantically clustered seems to boil down to the following. Divide the tokens into semantic clusters C_1, ..., C_k; then compare the distribution of intra-cluster variances of cosine similarity to v against the distribution of cosine similarities between v and the cluster means. From the perspective of cosine similarity to v, we should expect these clusters to look basically randomly drawn from the full set of tokens, so that each variance in the former set should be roughly the variance over the full token set. This should be greater than the mean of the latter set, implying that we should expect the nearest neighbors to v to mostly be random tokens taken from different clusters, rather than a bunch of tokens taken from the same cluster. I could be badly wrong about all of this, though.
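To put a rough number on the "~orthogonal" claim, here is a tiny simulation. Caveat: it uses isotropic Gaussian vectors as stand-ins for the actual GPT-2-large embeddings (which are not isotropic and have a nonzero mean), so it only illustrates the expected scale of the cosine similarities, not the real geometry.

```python
import torch

# Surrogate experiment: 50,000 random Gaussian "token embeddings" in d = 1280
# dimensions, plus one random direction v. How large are the cosine sims?
torch.manual_seed(0)
d, vocab = 1280, 50_000
E = torch.randn(vocab, d)
v = torch.randn(d)

sims = torch.nn.functional.cosine_similarity(E, v.unsqueeze(0), dim=-1)
print(sims.abs().mean())  # ~0.02: a typical "token" is nearly orthogonal to v
print(sims.abs().max())   # ~0.13: even the single nearest neighbor is nearly orthogonal
```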
There's a little bit of code for playing around with this here.
comment by Sam Marks (samuel-marks) · 2022-07-07T22:27:56.083Z · LW(p) · GW(p)
Bug report: the "Some remarks" section of this post [LW · GW] has a nested enumerated list. When I open the post in the editor, it displays as
1. [text]
> a. [text]
> b. [text]
> c. [text]
2. [text]
(where the >'s represent indentation). But the published version of the post displays this as
1. [text]
> 1. [text]
> 2. [text]
> 3. [text]
2. [text]
This isn't a huge deal, but it's a bit annoying since I later refer to the things I say in the nested list as e.g. "remark 1(c)."
Replies from: jp, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2022-07-08T00:33:27.145Z · LW(p) · GW(p)
Markdown doesn't support such lists, so it's more a matter of an overly permissive interpreter (or even specification) that accepts the code without complaint.
Replies from: jp↑ comment by jp · 2022-07-08T00:41:51.218Z · LW(p) · GW(p)
Sam's talking about the rich text editor.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2022-07-08T01:08:20.982Z · LW(p) · GW(p)
Somehow this reply didn't create a new/unseen reply notification for me (with the black bell); instead, there is an already-seen reply notification at the top of the list of recent notifications. I vaguely recall already having seen this bug in another Shortform reply, though I'd expect that this can't matter. So I probably got confused by clearing notifications about the other thread [LW(p) · GW(p)] I got replies from at about the same time.