Comments
The ARENA curriculum is very good.
It does seem pretty suspicious.
I'm like 98% confident this was not foul play, partly because I doubt whatever evidence he had would be that important to the court case, and obviously his death is going to draw far more attention to his views.
However, 98% is still quite worrying and I wish I could be >99% confident. I will be interested to see if there is further evidence. Given OpenAI's very shady behavior with the secret non-disparagement agreements that came out a few months ago, it doesn't seem completely impossible that they might do this (but still very very unlikely imo).
The probability of a 26 year old dying of suicide in any given month (within the month of being named the key witness in the OpenAI copyright case, right before deposition) is roughly 1 in 100,000
This prior is a useful starting point, but you've definitely got to account for the stress of leaving OpenAI and going through a lawsuit.
(I downvoted this post for combative tone.)
One of the striking parts is that it sounds like all the pretraining people are optimistic
What's the source for this?
I started working on PhD applications about 12 days ago. I expect to have fairly polished applications for the first deadline on December 1, despite not working on this full time. So I think it's quite possible to do applications for the December 15 deadlines. You would need to contact your referees (and potential supervisors for UK universities) in the next couple of days.
There are two types of people in this world.
There are people who treat the lock on a public bathroom as a tool for communicating occupancy and a safeguard against accidental attempts to enter when the room is unavailable. For these people the standard protocol is to discern the likely state of engagement of the inner room and then tentatively proceed inside if they detect no signs of human activity.
And there are people who view the lock on a public bathroom as a physical barricade with which to temporarily defend possessed territory. They start by giving the door a hearty push to test the tensile strength of the barrier. On meeting resistance they engage with full force, wringing the handle up and down and slamming into the door with their full body weight. Only once their attempts are thwarted do they reluctantly retreat to find another stall.
Tarbell Fellowship at PPF
I think you've massively underrated this. My impression is that Tarbell has had a significant effect on the general AI discourse by allowing a number of articles to be written in mainstream outlets.
karma should also transfer automatically
Unconferences are a thing for this reason
This is fantastic technical writing. It would have taken me hours to understand these papers this deeply, but you convey the core insights quickly in an entertaining and understandable way.
If there are ‘subshards’ which achieve this desirable behavior because they, from their own perspective, ‘intrinsically’ desire power (whatever that sort of distinction makes when you’ve broken things down that far), and it is these subshards which implement the instrumental drive... so what? After all, there has to be some level of analysis at which an agent stops thinking about whether or not it should do some thing and just starts doing the thing. Your muscles “intrinsically desire” to fire when told to fire, but the motor actions are still ultimately instrumental, to accomplish something other than individual muscles twitching. You can’t have ‘instrumental desire’ homunculuses all the way down to the individual transistor or ReLU neuron.
I sent this paragraph to TurnTrout as I was curious to get his reaction. Paraphrasing his response below:
No, that's not the point. That's actually the opposite of what I'm trying to say. The subshards implement the algorithmic pieces and the broader agent has an "intrinsic desire" for power. The subshards themselves are not agentic, and that's why (in context) I substitute them in for "circuits".
It's explained in this post that I linked to. Though I guess in context I do say "prioritize" in a way that might be confusing. Shard Theory argues against homunculist accounts of cognition by considering the mechanistic effects of reinforcement processes. Also the subshards are not implementing an instrumental drive in the sense of "implementing the power-seeking behavior demanded by some broader consequentialist plan" - they're just seeking power, just 'cuz.
From my early post: Inner and Outer Alignment Decompose One Hard Problem Into Two Extremely Hard Problems
I literally do not understand what the internal cognition is supposed to look like for an inner-aligned agent. Most of what I’ve read has been vague, on the level of “an inner-aligned agent cares about optimizing the outer objective.”
Charles Foster comments:
- "We are attempting to mechanistically explain how an agent makes decisions. One proposed reduction is that inside the agent, there is an even smaller inner agent that interacts with a non-agential evaluative submodule to make decisions for the outer agent. But that raises the immediate questions of “How does the inner agent make its decisions about how to interact with the evaluative submodule?” and then “At some point, there’s gotta be some non-agential causal structure that is responsible for actually implementing decision-making, right?” and then “Can we just explain the original agent’s behavior in those terms? What is positing an externalized evaluative submodule buying us?"
Perhaps my emphasis on mechanistic reasoning and my unusual level of precision in my speculation about AI internals, perhaps these make people realize how complicated realistic cognition is in the shard picture. Perhaps people realize how much might have to go right, how many algorithmic details may need to be etched into a network so that it does what we want and generalizes well.
But perhaps people don’t realize that a network which is inner-aligned on an objective will also require a precise and conforming internal structure, and they don’t realize this because no one has written detailed plausible stabs at inner-aligned cognition.
Kinda a stretch, but Groundhog Day is about someone becoming stronger. Also just a great film.
I would recommend reading the original reddit post that motivated it: https://www.reddit.com/r/biology/comments/16y81ct/the_case_for_whales_actually_matching_or_even/.
It is meant seriously, but the author is rightly acknowledging how far-fetched it sounds.
[00:31:25] Timothy:... This is going to be like, they didn't talk about any content, like there's no specific evidence,
[00:31:48] Elizabeth: I wrote down my evidence ahead of time.
[00:31:49] Timothy: Yeah, you already wrote down your evidence
I feel pretty uncertain to what extent I agree with your views on EA. But this podcast didn't really help me decide because there wasn't much discussion of specific evidence. Where is all of it written down? I'm aware of your post on vegan advocacy but unclear if there are lots more examples. I also heard a similar line of despair about EA epistemics from other long-time rationalists when hanging around Lighthaven this summer. But basically no one brought up specific examples.
It seems difficult to characterize the EA movement as a monolith in the way you're trying to do. The case of vegan advocacy is mostly irrelevant to my experience of EA. I have little contact with vegan advocates and most of the people I hang around in EA circles seem to have quite good epistemics.
However I can relate to your other example, because I'm one of the "baby EAs" who was vegetarian and was in the Lightcone offices in summer 2022. But my experience provides something of a counter-example. In fact, I became vegetarian before encountering EA and mostly found out about the potential nutritional problems from other EAs. When you wrote your post, I got myself tested for iron deficiency and started taking supplements (although not for iron deficiency). I eventually stopped being vegetarian, instead offsetting my impact with donations to animal charities, even though this isn't very popular in EA circles.
My model is that people exist on a spectrum from weirdness to normie-ness. The weird people are often willing to pay social costs to be more truthful, while the more normie people will refrain from saying and thinking the difficult truths. But most people are mostly fixed at a certain point on the spectrum. The truth-seeking weirdos probably made up a larger proportion of the early EA movement, but I'd guess in absolute terms the number of those sorts of people hanging around EA spaces has not declined, and their epistemics have not degraded - there just aren't very many of them in the world. But these days there is a greater number of the more normie people in EA circles too.
And yes, it dilutes the density of high epistemics in EA. But that doesn't seem like a reason to abandon the movement. It is a sign that more people are being influenced by good ideas and that creates opportunities for the movement to do bigger things.
When you want to have interesting discussions with epistemic peers, you can still find your own circles within the movement to spend time with, and you can still come to the (relative) haven of LessWrong. If LessWrong culture also faced a similar decline in epistemic standards I would be much more concerned, but it has always felt like EA is the applied, consumer facing product of the rationalist movement, that targets real-world impact over absolute truth-seeking. For example, I think most EAs (and also some rationalists) are hopelessly confused about moral philosophy, but I'm still happy there's more people trying to live by utilitarian principles, who might otherwise not be trying to maximize value at all.
Respect for doing this.
I strongly wish you would not tie StopAI to the claim that extinction is >99% likely. It means that even your natural supporters in PauseAI will have to say "yes I broadly agree with them but disagree with their claims about extinction being certain."
I would also echo the feedback here. There's no reason to write in the same style as cranks.
Question → CoT → Answer
So to be clear, testing whether this causal relationship holds is actually important, it's just that we need to do it on questions where the CoT is required for the model to answer the question?
Optimize the steering vector to minimize some loss function.
Crossposted from https://x.com/JosephMiller_/status/1839085556245950552
1/ Sparse autoencoders trained on the embedding weights of a language model have very interpretable features! We can decompose a token into its top activating features to understand how the model represents the meaning of the token.🧵
2/ To visualize each feature, we project the output direction of the feature onto the token embeddings to find the most similar tokens. We also show the bottom and median tokens by similarity, but they are not very interpretable.
3/ The token "deaf" decomposes into features for audio and disability! None of the examples in this thread are cherry-picked – they were all (really) randomly chosen.
4/ Usually SAEs are trained on the internal activations of a component for billions of different input strings. But here we just train on the rows of the embedding weight matrix (where each row is the embedding for one token).
5/ Most SAEs have many thousands of features. But for our embedding SAE, we only use 2000 features because of our limited dataset. We are essentially compressing the embedding matrix into a smaller, sparser representation.
6/ The reconstructions are not highly accurate – on average we have ~60% variance unexplained (~0.7 cosine similarity) with ~6 features active per token. So more work is needed to see how useful they are.
7/ Note that for this experiment we used the subset of the token embeddings that correspond to English words, so the task is easier - but the results are qualitatively similar when you train on all embeddings.
8/ We also compare to PCA directions and find that the SAE directions are in fact much more interpretable (as we would expect)!
9/ I worked on embedding SAEs at an @apartresearch hackathon in April, with Sajjan Sivia and Chenxing (June) He.
Embedding SAEs were also invented independently by @Michael Pearce.
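For concreteness, here is a minimal sketch of the setup described in 4/, 5/ and 2/ above, written from the thread's description rather than the actual hackathon code - the model choice (GPT-2), hyperparameters and training loop are my own assumptions:

```python
# Train a small SAE on the rows of the token embedding matrix, then visualize a
# feature by projecting its decoder direction onto the embeddings.
import torch
import torch.nn as nn
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
E = model.W_E.detach()                             # [d_vocab, d_model] token embedding matrix
d_model, n_features, l1_coeff = E.shape[1], 2000, 1e-3

class SAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)

    def forward(self, x):
        acts = torch.relu(self.enc(x))
        return self.dec(acts), acts

sae = SAE()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
for step in range(1000):
    batch = E[torch.randint(0, E.shape[0], (256,))]  # "dataset" = rows of the embedding matrix
    recon, acts = sae(batch)
    loss = (recon - batch).pow(2).mean() + l1_coeff * acts.abs().sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Visualize feature 0: the tokens whose embeddings are most similar to its output direction.
feature_dir = sae.dec.weight[:, 0]                   # [d_model]
sims = torch.nn.functional.cosine_similarity(E, feature_dir.unsqueeze(0), dim=-1)
print(model.to_str_tokens(sims.topk(10).indices))
```

The only differences from a standard SAE setup, as the thread notes, are the dataset (embedding rows rather than cached activations) and the much smaller feature count.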
Nice post. I think this is a really interesting discovery.
[Copying from messages with Joseph Bloom]
TLDR: I'm confused about what is different about the SAE input that causes the absorbed feature not to fire.
Me:
Summary of your findings
- Say you have a “starts with s” feature and a “snake” feature.
- You find that for most words, “starts with s” correctly categorizes words that start with s. But for a few words that start with s, like snake, it doesn’t fire.
- These exceptional tokens where it doesn’t fire, all have another feature that corresponds very closely to the token. For example, there is a “snake” feature that corresponds strongly to the snake token.
- You say that the “snake” feature has absorbed the “starts with s” feature because the concept of snake also contains/entails the concept of ‘start with s’.
- Most of the features that absorb other features correspond to common words, like “and”.
So why is this happening? Well it makes sense that the SAE can do better on its L1 loss for the snake token by just firing a single "snake" feature (rather than the "starts with s" feature and, say, the "reptile" feature). And it makes sense that it would only have enough space to have these specific token features for common tokens.
Joseph Bloom:
rather than the “starts with s” feature and, say, the “reptile” feature
We found cases of seemingly more general features getting absorbed in the context of spelling, but they are more rare / probably the exception. It's worth noting that we suspect feature absorption is just easiest to find for token-aligned features, but conceptually it could occur any time a similar structure exists between features.
And it makes sense it would only have enough space to have these specific token features for common tokens.
I think this needs further investigation. We certainly sometimes see rarer tokens which get absorbed (eg: a rare token is a translated word of a common token). I predict there is a strong density effect but it could be non-trivial.
Me:
We found cases of seemingly more general features getting absorbed in the context of spelling
What’s an example?
We certainly sometimes see rarer tokens which get absorbed (eg: a rare token is a translated word of a common token)
You mean like the “starts with s” feature could be absorbed into the “snake” feature on the french word for snake?
Does this only happen if the French word also starts with s?
Joseph Bloom:
What’s an example?
- Latent aligned for a few words at once. Eg: "assistance" but fires weakly on "help". We saw it absorb both "a" and "h"!
- Latent from multi-token words that also fires on words that share a common prefix (https://feature-absorption.streamlit.app/?layer=16&sae_width=65000&sae_l0=128&letter=c see latent 26348).
- Latent that fires on a token + weakly on subsequent tokens https://feature-absorption.streamlit.app/?layer=16&sae_width=65000&sae_l0=128&letter=c
You mean like the “starts with s” feature could be absorbed into the “snake” feature on the french word for snake?
Yes
Does this only happen if the French word also starts with s?
More likely. I think the process is stochastic so it's all distributions.
↓[Key point]↓
Me:
But here’s what I’m confused about. How does the “starts with s” feature ‘know’ not to fire? How is it able to fire on all words that start with s, except those tokens (like “snake”) that have a strongly correlated feature? I would assume that the token embeddings of the model contain some “starts with s” direction. And the “starts with s” feature input weights read off this direction. So why wouldn’t it also activate on “snake”? Surely that token embedding also has the “starts with s” direction?
Joseph Bloom:
I would assume that the token embeddings of the model contain some “starts with s” direction. And the “starts with s” feature input weights read off this direction. So why wouldn’t it also activate on “snake”? Surely that token embedding also has the “starts with s” direction?
I think the success of the linear probe is why we think the snake token does have the starts with s direction. The linear probe has much better recall and doesn't struggle with obvious examples. I think the feature absorption work is not about how models really work, it's about how SAEs obscure how models work.
But here’s what I’m confused about. How does the “starts with s” feature ‘know’ not to fire? Like what is the mechanism by which it fires on all words that start with s, except those tokens (like “snake”) that have a strongly correlated feature?
Short answer, I don't know. Long answer - some hypotheses:
- Linear probes can easily do calculations of the form "A AND B". In large vector spaces, it may be possible to learn a direction of the form "(^S.*) AND not (snake) and not (sun) ...". Note that "snake" has a component separate from "starts with s", so this is possible. To the extent this may be hard, that's possibly why we don't see more absorption, but my own intuition says that in large vector spaces this should be perfectly possible to do.
- Encoder weights and Decoder weights aren't tied. If they were, you can imagine that choosing these exceptions for absorbed examples would damage reconstruction performance. Since we don't tie the weights, the SAE can detect "(^S.*) AND not (snake) and not (sun) ..." but write "(^S.*)". I'm interested to explore this further and am sad we didn't get to this in the project.
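To illustrate the second hypothesis, here is a toy numeric sketch (my own illustration, not code from the project) in which untied encoder/decoder weights let a "starts with s" latent detect "starts with s AND NOT snake" while still writing the clean "starts with s" direction:

```python
# Toy sketch of the untied-weights hypothesis above (all directions are made up).
import torch

d = 8
starts_with_s = torch.zeros(d); starts_with_s[0] = 1.0    # hypothetical "starts with s" direction
snake_specific = torch.zeros(d); snake_specific[1] = 1.0  # component unique to the "snake" token

sun   = starts_with_s.clone()            # an ordinary s-word: only the s-direction
snake = starts_with_s + snake_specific   # "snake": s-direction plus its own component

# Untied SAE weights for the "starts with s" latent:
enc_s = starts_with_s - 2.0 * snake_specific  # encoder: detect s, but veto snake
dec_s = starts_with_s                         # decoder: still write the pure s-direction

def activation(x, bias=-0.5):
    return torch.relu(x @ enc_s + bias)

print(activation(sun).item())    # > 0: fires on the ordinary s-word
print(activation(snake).item())  # 0: the dedicated "snake" latent has absorbed this case
```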
Anyone who harbors such an intense attachment to specific gendered pronoun preferences clearly sees it as much more than a superficial aesthetic designator.
This makes you sound like a bit of a straw vulcan imo. All I have to do is imagine how jarring and upsetting it would be to have everyone start calling me "she" and it's very obvious how, for almost all people, what pronoun others call them is deeply emotionally salient.
I agree, I'm a fan of lsusr's writing, so I don't think it's very inaccurate. In particular
a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions
might be gesturing at some concrete distinctive feature.
However, it's sufficiently close to horoscope flattery that I couldn't quite believe lsusr would, with a straight face, present this as some great insight into his writing style.
I'm very confused about how seriously this post is intended
Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website.
What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They’re willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights.
⋮
In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system.
I think this isn't meant seriously because it's basically just saying lsusr is better than most rationalist bloggers, not pointing to any concrete distinctive features of lsusr's writing.
I think this argument mostly centers on the definition of certain words, and thus does not change my views on whether I should upload my mind if given the choice.
But can this person be said to understand Chinese? My answer is no.
What you have shown here is what you think the word "understands" means. But everyone agrees about the physical situation here - everyone anticipates the same experiences.
This shows that our brains are highly resilient and adaptive to changes experienced by our minds. By comparison, a digital simulation is very brittle and non-adaptive to change.
The substrate of the simulation, i.e. a silicon chip, is brittle (at our current level of tech) but it can still run a simulation of a neuroplastic brain - just program it to simulate the brain chemistry. Then if the simulated brain is damaged, it will be able to adapt.
The bigger point here is that you are implicitly asserting that in order to be "sentient" a mind must have similar properties to a human brain. That's fine, but it is purely a statement about how you like to define the word "sentient".
Only living organisms can possess sentience because sentience provides introspective knowledge that enables them to keep surviving;
"Sentience" has no widely agreed concrete definition, but I think it would be relatively unusual to say it "provides introspective knowledge". Do you agree that any questions about the actual computation, algorithms or knowledge in a brain can be answered by only considering the physical implementation of neurons and synapses?
sentience would not emerge in artificial systems because they are not alive in the first place.
Again, I think this is purely a statement about the definition of the word "alive". Someone who disagrees would not anticipate any different experiences as a consequence of thinking an artificial system is "alive".
Nice scholarship
If this is a pattern with new, more capable models, this seems like a big problem. One major purpose of this kind of evaluation is to set up thresholds that ring alarm bells when they are crossed. If it takes weeks of access to a model to figure out how to evaluate it correctly, the alarm bells may go off too late.
METR had only ~10 days to evaluate.
Should it really take any longer than 10 days to evaluate? Isn't it just a matter of plugging it into their existing framework and pressing go?
As the author of example 2, this is very helpful!
The impression I have from reading Chip War is that EUV is a pretty massive hurdle which took the West well over a decade to conquer. However, I also thought that 5nm was impossible without EUV, which seems to be no longer true, so this may be too complex a topic to make meaningful predictions about without deeper expertise.
- Created a popular format for in-person office spaces that heavily influenced Constellation and FAR Labs
This one seems big to me. There are now lots of EA / AI Safety offices around the world and I reckon they are very impactful for motivating people, making it easier to start projects and building a community.
One thing I'm not clear about is to what extent the Lightcone WeWork invented this format. I've never been to Trajan House but I believe it came first, so I thought it would have been part of the inspiration for the Lightcone WeWork.
Also my impression was that Lightcone itself thought the office was net negative, which is why it was shut down, so I'm slightly surprised to see this one listed.
Looking forward to the crossover "Dath Ilan vs Wentworld" to find out which is the most adequate civilization.
llama had to be so big to be SOTA,
How many parameters do you estimate for other SOTA models?
Joseph: That's a straw-man. [implying they do have a real plan?]
I explicitly said "However I think the point is basically correct" in the next sentence.
Anthropic is attempting to build a new mind vastly smarter than any human, and as I understand it, plans to ensure this goes well basically by “doing periodic vibe checks”
This obvious straw-man makes your argument easy to dismiss.
However I think the point is basically correct. Anthropic's strategy to reduce x-risk also includes lobbying against pre-harm enforcement of liability for AI companies in SB 1047.
I realized the previous experiment might be importantly misleading because it's on a small 12 layer model. In larger models it would still be a big deal if the effective layer horizon was like 20 layers.
Previously the code was too slow to run on larger models. But I made a faster version and ran the same experiment on GPT-2 large (48 layers):
We clearly see the same pattern again. As TurnTrout predicted, there seems to be something like an exponential decay in the importance of previous layers as you go further back. I expect that on large models the effective layer horizon is an important consideration.
Updated source: https://gist.github.com/UFO-101/41b7ff0b250babe69bf16071e76658a6
Sad that this kind of nitpicking is so wildly upvoted compared to the main post. It's really non-central to the point.
The important information is:
- The most important piece of legislation on AI Safety in history is currently under consideration in the California legislature.
- Tech companies and AI labs are lobbying against it.
- You can help it pass by contacting some people.
Whether or not this affects the Suspense File in particular is irrelevant.
[To be clear this criticism is not directed at cfoster0 - it's perfectly fine to correct mistakes - but at the upvoters who I imagine are using this to deflect responsibility with the "activism dumb" reflex, while not doing the thing that would actually reduce x-risk. Hopefully this is purely in my imagination and all those upvoters subsequently went and contacted the people - or at least had some other well considered reason to do nothing, which they chose not to share.]
This feels somewhat similar to medical research with and without a detailed understanding of cell biochemistry. You can get very far without a deep understanding of the low level details and often this seems to be the best method for producing useful drugs (as I understand it most drugs are discovered through trial and error).
But understanding the low level details can get you important breakthroughs like mRNA vaccines. And if you're betting the future of humanity on being able to create a cure for a new type of deadly virus (superintelligent AI) on the first try, you would prefer to be confident that you know what you're doing.
Actually I think the residual decomposition is incorrect - see my other comment.
Yes is what I'm saying.
- Yes I agree
- (Firstly note that it can be true without being useful). In the Residual Networks Behave Like Ensembles of Relatively Shallow Networks paper, they discover that long paths are mostly not needed for the model. In Causal Scrubbing they intervene on the treeified view to understand which paths are causally relevant for particular behaviors.
@Oliver Daniels-Koch's reply to my comment made me read this post again more carefully and now I think that your formulation of the residual expansion is incorrect.
Given layers of the form $x_{i+1} = x_i + f_i(x_i)$, the expansion does not follow, because $f_i$ is a non-linear operation. It cannot be decomposed like this.
My understanding of your big summation (with $f_i$ representing any MLP or attention head) is that the output is written as the sum over all paths, i.e. $\text{output}(x_0) = \sum_{k \ge 0} \sum_{i_1 < \dots < i_k} (f_{i_k} \circ \dots \circ f_{i_1})(x_0)$.
This again does not hold because the $f_i$s are non-linear.
There are two similar ideas which do hold, namely (1) the treeified / unraveled view and (2) the factorized view (both of which are illustrated in figure 1 here), but your residual expansion / big summation is not equivalent to either.
The treeified / unraveled view is the most similar. It separates each path from input to output, but the difference is that this does not claim that the output is the sum of all separate paths.
The factorized view follows from treeified view and is just the observation that any point in the residual stream can be decomposed into the outputs of all previous components.
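As a concrete check of the non-linearity point, here is a tiny numeric example (my own, using arbitrary random weights) showing that a two-layer residual stack is not equal to the sum of its separate paths:

```python
# With layers L_i(x) = x + f_i(x), composing two layers gives
#   L_2(L_1(x)) = x + f_1(x) + f_2(x + f_1(x)),
# which is NOT x + f_1(x) + f_2(x) + f_2(f_1(x)) unless f_2 is linear.
import torch

torch.manual_seed(0)
d = 4
W1, W2 = torch.randn(d, d), torch.randn(d, d)
f1 = lambda x: torch.relu(x @ W1)  # stand-in for an MLP / attention head
f2 = lambda x: torch.relu(x @ W2)

x = torch.randn(d)
true_output = x + f1(x) + f2(x + f1(x))       # actual two-layer residual network
path_sum    = x + f1(x) + f2(x) + f2(f1(x))   # claimed "sum over all paths" expansion

print(torch.allclose(true_output, path_sum))  # False: the expansion does not hold
print((true_output - path_sum).abs().max())   # the discrepancy
```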
There should probably be guidance on this when you go to add a tag. When I write a post I just randomly put some tags and have never previously considered that it might be prosocial to put more or fewer tags on my post.
The treeified view is different from the factorized view! See figure 1 here.
I think the factorized view is pretty useful. But on the other hand I think MLP + Attention Head circuits are too coarse-grained to be that interpretable.
If I understand correctly, this residual decomposition is equivalent to the edge / factorized view of a transformer described here.
Update: actually the residual decomposition is incorrect - see my other comment.
Computing the exact layer-truncated residual streams on GPT-2 Small, it seems that the effective layer horizon is quite large:
I'm mean ablating every edge with a source node more than n layers back and calculating the loss on 100 samples from The Pile.
Source code: https://gist.github.com/UFO-101/7b5e27291424029d092d8798ee1a1161
I believe the horizon may be large because, even if the approximation is fairly good at any particular layer, the errors compound as you go through the layers. If we just apply the horizon at the final output the horizon is smaller.
However, if we apply at just the middle layer (6), the horizon is surprisingly small, so we would expect relatively little error propagated.
But this appears to be an outlier. Compare to 5 and 7.
Source: https://gist.github.com/UFO-101/5ba35d88428beb1dab0a254dec07c33b
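For reference, here is a simplified sketch of the single-layer version of this experiment (my own reconstruction using TransformerLens, not the linked gists - the real experiment mean-ablates individual edges, so treat this only as an illustration of the idea):

```python
# Replace everything contributed more than `horizon` layers before a given layer
# with its mean, then measure the loss (assumes horizon < layer).
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 small, 12 layers
tokens = model.to_tokens(["The quick brown fox jumps over the lazy dog."])  # stand-in for Pile samples

def loss_with_horizon_at_layer(layer: int, horizon: int) -> float:
    assert 0 < horizon < layer
    # Cache the clean residual stream at every layer.
    _, cache = model.run_with_cache(tokens)
    resids = cache.accumulated_resid(apply_ln=False)  # [n_layers + 1, batch, pos, d_model]
    # Contribution from everything more than `horizon` layers back
    # (embeddings plus blocks 0 .. layer - horizon - 1):
    old_part = resids[layer - horizon]
    mean_old = old_part.mean(dim=(0, 1), keepdim=True)  # mean over batch and positions

    def replace_old_contributions(resid_pre, hook):
        # Swap the old contributions for their mean; keep the recent ones intact.
        return resid_pre - old_part + mean_old

    loss = model.run_with_hooks(
        tokens,
        return_type="loss",
        fwd_hooks=[(f"blocks.{layer}.hook_resid_pre", replace_old_contributions)],
    )
    return loss.item()

for horizon in [2, 3, 4, 5]:
    print(horizon, loss_with_horizon_at_layer(layer=6, horizon=horizon))
```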
In this piece, we want to paint a picture of the possible benefits of AI, without ignoring the risks or shying away from radical visions.
Thanks for this piece! In my opinion you are still shying away from discussing radical (although quite plausible) visions. I expect the median good outcome from superintelligence involves everyone being mind uploaded / living in simulations experiencing things that are hard to imagine currently.
Even short of that, in the first year after a singularity, I would want to:
- Use brain computer interfaces to play videogames / simulations that feel 100% real to all senses, but which are not constrained by physics.
- Go to Hogwarts (in a 100% realistic simulation) and learn magic and make real (AI) friends with Ron and Hermione.
- Visit ancient Greece or view all the most important events of history based on superhuman AI archeology and historical reconstruction.
- Take medication that makes you always feel wide awake, focused etc. with no side effects.
- Engineer your body / use cybernetics to make yourself never have to eat, sleep, wash, etc. and be able to jump very high, run very fast, climb up walls, etc.
- Use AI as the best teacher ever to learn maths, physics and every subject and language and musical instruments to super-expert level.
- Visit other planets. Geoengineer them to have crazy landscapes and climates.
- Play God and oversee the evolution of life on other planets.
- Design buildings in new architectural styles and have AI build them.
- Genetically modify cats to play catch.
- Listen to new types of music, perfectly designed to sound good to you.
- Design the biggest roller coaster ever and have AI build it.
- Modify your brain to have better short term memory, eidetic memory, be able to calculate any arithmetic super fast, be super charismatic.
- Bring back Dinosaurs and create new creatures.
- Ask AI for way better ideas for this list.
I expect UBI, curing aging etc. to be solved within a few days of a friendly intelligence explosion.
Although I think we will also plausibly see a new type of scarcity. There is a limited amount of compute you can create using the materials / energy in the universe. And if in fact most humans are mind-uploaded / brains in vats living in simulations, we will have to divide this among ourselves in order to run the simulations. If you have twice as much compute, you can simulate your brain twice as fast (or run two of you in parallel?), and thus experience twice as much subjective time - and so live twice as long until the heat death of the universe.
Note that the group I was in only played on the app. I expect this makes it significantly harder to understand what's going on.
Yes that's correct, this wording was imprecise.
Would you ever really want mean ablation except as a cheaper approximation to resample ablation?
Resample ablation is not more expensive than mean (they both are just replacing activations with different values). But to answer the question, I think you would - resample ablation biases the model toward some particular corrupt output.
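To make this concrete, here is a minimal sketch (my own illustration, with made-up activations) of zero, mean and resample ablation side by side - they all just substitute a different value for the activation:

```python
import torch

torch.manual_seed(0)
clean_acts   = torch.randn(8, 16)   # [batch, d] activations of some component on clean prompts
corrupt_acts = torch.randn(8, 16)   # activations on a corrupt / reference distribution

def zero_ablate(acts):
    return torch.zeros_like(acts)

def mean_ablate(acts, reference):
    # Replace every activation with the mean over the reference distribution.
    return reference.mean(dim=0, keepdim=True).expand_as(acts)

def resample_ablate(acts, reference):
    # Replace each activation with one sampled from the reference distribution.
    idx = torch.randint(0, reference.shape[0], (acts.shape[0],))
    return reference[idx]

patched_zero     = zero_ablate(clean_acts)
patched_mean     = mean_ablate(clean_acts, corrupt_acts)
patched_resample = resample_ablate(clean_acts, corrupt_acts)
# All three are a single substitution of values (same cost); as noted above,
# resample ablation just biases the model toward some particular corrupt output.
```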
It seems to me that if you ask the question clearly enough, there's a correct kind of ablation. For example, if the question is "how do we reproduce this behavior from scratch", you want zero ablation.
Yes I agree. That's the point we were trying to communicate with "the ablation determines the task."
- direct effect vs indirect effect corresponds to whether you ablate the complement of the circuit (direct effect) vs restoring the circuit itself (indirect effect, mediated by the rest of the model)
- necessity vs sufficiency corresponds to whether you ablate the circuit (direct effect necessary) / restore the complement of the circuit (indirect effect necessary) vs restoring the circuit (indirect effect sufficient) / ablating the complement of the circuit (direct effect sufficient)
Thanks! That's great perspective. We probably should have done more to connect ablations back to the causality literature.
- "all tokens vs specific tokens" should be absorbed into the more general category of "what's the reference dataset distribution under consideration" / "what's the null hypothesis over",
- mean ablation is an approximation to resample ablation which itself is an approximation to computing the expected/typical behavior over some distribution
These don't seem correct to me, could you explain further? "Specific tokens" means "we specify the token positions at which each edge in the circuit exists".
I think so. Mostly we learned about trading and the price discovery mechanism that is a core mechanic of the game. We started with minimal explanation of the rules, so I expect these things can be grokked faster by just saying them when introducing the game.
We just played Figgie at MATS 6.0, most players playing for the first time. I think we made lots of clearly bad decisions for the first 6 or 7 games. And reached a barely acceptable standard by about 10-15 games (but I say this as someone who was also playing for the first time).
(crossposted to the EA Forum)
Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* i will give you a view into this small elite group who are the only who are situationally aware
* the inner circle longed tsmc way before you
* if you believe me; you can get 100x richer -- there's still alpha, you can still be early
* This geopolitical outcome is "inevitable" (sic!)
* in the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
* Etc.
These are not just vibes - they are all empirical claims (except the last maybe). If you think they are wrong, you should say so and explain why. It's not epistemically poor to say these things if they're actually true.
If this were the case, wouldn't you expect the mean of the code steering vectors to also be a good code steering vector? But in fact, Jacob says that this is not the case. Edit: Actually it does work when scaled - see nostalgebraist's comment.