Posts

A short 'derivation' of Watanabe's Free Energy Formula 2024-01-29T23:41:44.203Z
Steering Llama-2 with contrastive activation additions 2024-01-02T00:47:04.621Z
Simulators Increase the Likelihood of Alignment by Default 2023-04-30T16:32:43.651Z
If Wentworth is right about natural abstractions, it would be bad for alignment 2022-12-08T15:19:02.084Z
A caveat to the Orthogonality Thesis 2022-11-09T15:06:51.427Z
Who is doing Cryonics-relevant research? 2022-03-15T10:26:21.001Z
There is a line in the sand, just not where you think it is 2022-01-22T10:33:37.607Z

Comments

Comment by Wuschel Schulz (wuschel-schulz) on What's up with all the non-Mormons? Weirdly specific universalities across LLMs · 2024-04-21T13:18:00.256Z · LW · GW

13. an X that isn’t an X

 

I think this pattern is common because of the repetition. When starting the definition, the LLM just begins with a plausible definition structure (A [generic object] that is not [condition]); lots of definitions look like this. Next it fills in some common [generic object]. Then it wants to figure out which specific [condition] the object in question does not meet. So it pays attention back to the word to be defined, but it finds nothing: there is no information saved about this non-token. So the attention head which should come up with a plausible candidate for [condition] writes nothing to the residual stream. What dominates the prediction now are the more base-level predictive patterns that are normally overwritten, like word repetition (something transformers learn very quickly and often struggle with overdoing). The repeated word that at least fits grammatically is [generic object], so that gets predicted as the next token.

Here are some predictions I would make based on that theory (a rough sketch for checking the second one is below):
- When you suppress attention to [generic object] at the sequence position where it predicts [condition], you will get a reasonable condition.
- When you look (with the logit lens) at the layer where the transformer decides to predict [generic object] as the last token, it will be a relatively early layer.
- Now replace the word the transformer should define with a real, normal word and repeat the earlier experiment. You will see that it decides to predict [generic object] in a later layer.
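
Here is a minimal sketch of how one might check the second prediction, assuming the TransformerLens library; the model choice, the made-up non-word, and the prompt format are all illustrative, not taken from the post:

```python
# Sketch only: logit-lens check of the layer at which "[generic object]"
# becomes the top next-token prediction. Prompt and model are placeholders.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

prompt = 'Define "ahlskdj": an object that is not an'
_, cache = model.run_with_cache(prompt)

for layer in range(model.cfg.n_layers):
    resid = cache["resid_post", layer]             # residual stream after this layer
    logits = model.unembed(model.ln_final(resid))  # project onto the vocabulary
    top_id = logits[0, -1].argmax().item()
    print(f"layer {layer}: top prediction = {model.tokenizer.decode(top_id)!r}")
```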

Comment by Wuschel Schulz (wuschel-schulz) on Gated Attention Blocks: Preliminary Progress toward Removing Attention Head Superposition · 2024-04-18T10:40:17.072Z · LW · GW

I like this method, and I see that it can eliminate this kind of superposition.
You already address the limitation that these gated attention head blocks do not eliminate other forms of attention head superposition, and I agree.
It feels somewhat specifically designed to deal with the kind of superposition that occurs for skip trigrams, and I would be interested to see how well it generalizes to superposition in the wild.


I tried to come up with a list of ways in which attention head superposition cannot be disentangled by gated attention blocks:

  • Multiple attention heads perform a distributed computation that attends to different source tokens.
    You already address this, and an example is given by Greenspan and Wynroe.
  • The superposition is across attention heads in different layers.
    These are not caught, because the sparsity penalty is only applied to attention heads within the same layer.
    Why should there be superposition of attention heads between layers?
    As a toy model, imagine a 2-layer attention-only transformer with n_head heads in each layer, given a dataset with more than n_head^2 + n_head skip trigrams to figure out (see the counting sketch after this list).
    Such a transformer could use the computation in superposition described in figure 1 to correctly model all skip trigrams, but it would run out of attention head pairs within the same layer to distribute the computation between.
    Then it would have to resort to putting attention head pairs across layers into superposition.
  • Overlapping necessary superposition.
    Let's say there is some computation for which you need two attention heads attending to the same token position.
    The simplest example of a situation where this is necessary is when you want to copy information from a source token that is "bigger" than the head dimension. The transformer can then use 2 heads to copy over twice as much information.
    Now imagine there are 3 cases where information has to be copied from the source token, A, B, and C, and we have 3 heads, 1, 2, and 3, and the information that has to be copied over fits into 2*d_head dimensions. Is there a way to solve this task? Yes!
    Heads 1&2 work in superposition to copy the information in task A, 2&3 in task B, and 3&1 in task C.
    In theory, we could make all attention heads monosemantic by having a set of 6 attention heads trained to perform the same computation: A: 1&2, B: 3&4, C: 5&6. But the way the L.6 norm is applied, it only tries to reduce the number of times that 2 attention heads attend to the same token, and that count is the same for both ways of implementing the computation.
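
A minimal sketch of one way to arrive at the n_head^2 + n_head threshold in the second bullet above; this counting is my own reconstruction, and the function name and the numbers are illustrative:

```python
from math import comb

def same_layer_circuits(n_head: int, n_layers: int = 2) -> int:
    # Circuits available without cross-layer superposition:
    # each head on its own, plus each unordered pair of heads within one layer.
    single_heads = n_layers * n_head
    same_layer_pairs = n_layers * comb(n_head, 2)
    return single_heads + same_layer_pairs

# For 2 layers this is 2*n + n*(n-1) = n^2 + n, so with more skip trigrams
# than that, some head pairs would have to span different layers.
assert same_layer_circuits(8) == 8**2 + 8
```
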
Comment by Wuschel Schulz (wuschel-schulz) on Believing In · 2024-02-14T14:34:17.886Z · LW · GW

Under an Active Inference perspective, it is hardly surprising that we use the same concepts for [Expecting something to happen] and [Trying to steer towards something happening], as they are the same thing happening in our brain.

I don't know enough about this to know whether the active inference paradigm predicts that this similarity on a neuronal level plays out as humans using similar language to describe the two phenomena, but if it does, the common use of this "believing in" concept might count as evidence in its favour.

Comment by Wuschel Schulz (wuschel-schulz) on A short 'derivation' of Watanabe's Free Energy Formula · 2024-01-29T23:59:18.289Z · LW · GW

Ok, the sign error was just at the end: taking the -log of the result of the integral vs. taking the log. Fixed it, thanks.

Comment by Wuschel Schulz (wuschel-schulz) on A short 'derivation' of Watanabe's Free Energy Formula · 2024-01-29T23:53:20.979Z · LW · GW

Thanks, I'll look for the sign error!

I agree that K is symmetric around our point of integration, but the prior phi is not. We integrate over e^(-nK) * phi, which does not have to be symmetric, right?
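
For reference, the quantity I mean, written out in standard singular learning theory notation (the exact normalisation might differ from the post):

$$Z_n = \int e^{-nK(w)}\,\varphi(w)\,dw, \qquad F_n = -\log Z_n,$$

where K(w) is the KL divergence term and phi(w) is the prior.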

Comment by Wuschel Schulz (wuschel-schulz) on Experiments in Evaluating Steering Vectors · 2023-06-20T10:14:49.586Z · LW · GW

The top performing vector is odd in another way. Because the tokens of the positive and negative side are subtracted from each other, a reasonable intuition is that the subtraction should point to a meaningful direction. However, some steering vectors that perform well in our test don't have that property. For the steering vector “Wedding Planning Adventures” - “Adventures in self-discovery”, the positive and negative side aren't well aligned per token level at all:

I think I don't see the mystery here.
When you directly subtract the steering prompts from each other, most of the results would not make sense, yes. But this is not what we do.
We feed these prompts into the transformer and then subtract the residual stream activations after block n from each other. Within those n layers, the attention heads have moved the information around between the positions. Here is one way this could have happened:

The first 4 blocks assess the sentiment of a whole sentence and move this information to position 6 of the residual stream, with the other positions being irrelevant. So, when we construct the steering vector and record the activation after block 4, the first 5 positions of the steering vector are irrelevant and the 6th position contains a vector that points in a general "wedding-ness" direction. When we add this steering vector to our normal prompt, the transformer acts as if the preceding text was really wedding-related and 'keeps talking' about weddings.

Obviously, all the details are made up, but I don't see how a token-for-token meaningful alignment of the prompts of the steering vector should intuitively be helpful for something like this to work.
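
To make this concrete, here is a minimal sketch of the recipe described above, assuming the TransformerLens library; the model, layer, hook point, and the truncate-instead-of-pad choice are my own illustrative simplifications, not the exact setup from the post:

```python
# Sketch only: a steering vector as a difference of residual stream activations.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
layer = 4  # record activations "after block 4", as in the made-up story above
hook_name = f"blocks.{layer}.hook_resid_post"

_, cache_pos = model.run_with_cache("Wedding Planning Adventures")
_, cache_neg = model.run_with_cache("Adventures in self-discovery")

# The two prompts tokenize to different lengths; here we simply truncate to the
# shorter one (padding them to equal length would be the more faithful option).
n = min(cache_pos[hook_name].shape[1], cache_neg[hook_name].shape[1])
steering = cache_pos[hook_name][:, :n, :] - cache_neg[hook_name][:, :n, :]

def add_steering(resid, hook):
    # Only modify the full prompt pass (later passes process one token at a time).
    if resid.shape[1] >= n:
        resid[:, :n, :] = resid[:, :n, :] + steering
    return resid

with model.hooks(fwd_hooks=[(hook_name, add_steering)]):
    print(model.generate("I went up to my friend and said", max_new_tokens=30))
```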

Comment by Wuschel Schulz (wuschel-schulz) on Empirical Findings Generalize Surprisingly Far · 2023-06-13T13:19:30.556Z · LW · GW

The analogy to molecular biology you've drawn here is intriguing. However, one important hurdle to consider is that the Phage Group had some sense of what they were seeking. They examined bacteria with the goal of uncovering mechanisms also present in humans, about whom they had already gathered a considerable amount of knowledge. They indeed succeeded, but suppose we look at this from a different angle.

Imagine being an alien species with a vastly different biological framework, tasked with studying E. coli with the aim of extrapolating facts that also apply to the "General Intelligences" roaming Earth - entities that you've never encountered before. What conclusions would you draw? Could you mistakenly infer that they reproduce by dividing in two, or perceive their surroundings mainly through chemical gradients?

I believe this hypothetical scenario is more analogous to our current position in AI research, and it highlights the difficulty in uncovering empirical findings that can generalize all the way up to general intelligence.

Comment by Wuschel Schulz (wuschel-schulz) on If Wentworth is right about natural abstractions, it would be bad for alignment · 2022-12-20T17:24:50.121Z · LW · GW

Thanks a lot for the comment and correction :) 

I updated "diamond maximization problem" to "diamond alignment problem".

I didn't understand your proposal to involve surgically inserting the drive to value "diamonds are good", but instead to systematically reward the agent for acquiring diamonds so that a diamond shard forms organically. I also edited that sentence.

I am not sure I get your nitpick: "Just as you can deny that Newtonian mechanics is true, without denying that heavy objects attract each other." was supposed to be an example of "the specific theory is wrong, but the general phenomenon it tries to describe exists", in the same way that I think natural abstractions exist but (my flawed understanding of) Wentworth's theory of natural abstractions is wrong. It was not supposed to be an example of a natural abstraction itself.

Comment by Wuschel Schulz (wuschel-schulz) on Decision Theory but also Ghosts · 2022-11-20T17:56:12.117Z · LW · GW

Very interesting idea!

I am a bit sceptical about the part where the ghosts should mostly care about what will happen to their actual version, and not care about themselves.

Let's say I want you to cooperate in a prisoner's dilemma. I might just simulate you, see if your ghost cooperates, and then only cooperate when your ghost does. But I could also additionally reward/punish your ghosts directly, depending on whether they cooperate or defect.

Wouldn't that also be motivating to the ghosts, since they would suspect that they might get the reward or punishment even if they are the ghosts and not the actual person?

Comment by Wuschel Schulz (wuschel-schulz) on A caveat to the Orthogonality Thesis · 2022-11-14T18:25:45.863Z · LW · GW

Yes, I would consider humans to already be unsafe, as we already made a sharp left turn that left us unaligned relative to our outer optimiser.

Dogs are a good point, thank you for that example. Not sure if dogs have our exact notion of corrigibility, but they definitely seem to be friendly in some relevant sense.

Comment by Wuschel Schulz (wuschel-schulz) on Understanding and avoiding value drift · 2022-11-04T08:30:00.350Z · LW · GW

I am confused by the part where the Rick-shard can anticipate which plan the other shards will bid for. If I understood shard theory correctly, shards do not have their own world model; they can just bid actions up or down according to the consequences those actions might have according to the world model that is available to all shards. Please correct me if I am wrong about this point.

So I don't see how the Rick-shard could really "trick" the atheism-shard via rationalisation.

If the Rick-shard sees that "church-going for respect reasons" will lead to conversion, then the atheism-shard has to see that too, because they query the same world model. So the atheism-shard should bid against that plan just as heavily as against "going to church for conversion reasons".

I think there is something else going on here. I think the Rick-shard does not trick the atheism-shard, but the conscious part that is not described by shard theory.
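
To illustrate what I mean, here is a toy caricature (entirely made up, not actual shard theory machinery): if every shard queries the same shared world model, the atheism-shard penalises "church-going for respect reasons" exactly as much as the explicit conversion plan.

```python
# Toy sketch: shards bid on plans, but all consequences come from one shared world model.
def world_model(plan: str) -> dict:
    predictions = {
        "go to church for respect reasons": {"respect": 1, "conversion_risk": 1},
        "go to church for conversion reasons": {"respect": 0, "conversion_risk": 1},
        "stay home": {"respect": 0, "conversion_risk": 0},
    }
    return predictions[plan]

def respect_shard_bid(plan: str) -> int:
    return world_model(plan)["respect"]

def atheism_shard_bid(plan: str) -> int:
    # Sees the same conversion risk as every other shard, so it bids against
    # the "respect reasons" plan just as hard as against the explicit one.
    return -world_model(plan)["conversion_risk"]

for plan in ["go to church for respect reasons",
             "go to church for conversion reasons",
             "stay home"]:
    print(plan, "->", respect_shard_bid(plan) + atheism_shard_bid(plan))
```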

Comment by Wuschel Schulz (wuschel-schulz) on We may be able to see sharp left turns coming · 2022-10-19T13:22:02.415Z · LW · GW

In particular, these results suggest that we may be able to predict power-seeking, situational awareness, etc. in future models by evaluating those behaviors in terms of log-likelihood.

I am skeptical that this methodology could work for the following reason:

I think it is generally useful, when thinking about the sharp left turn, to keep the example of chimps and humans in mind: chimps as a pre-sharp-left-turn example and humans as a post-sharp-left-turn example.

Let's say you look at a chimp, and you want to measure whether a sharp left turn is around the corner. You reason that post-sharp-left-turn animals should be able to come up with algebra (so far, so correct).

And now what you do is measure the log likelihood that a chimp would come up with algebra. I expect you would get a value pretty close to -inf, even though sharp-left-turn Homo sapiens is only one species down the line.

Comment by Wuschel Schulz (wuschel-schulz) on The Halo Effect · 2021-03-09T16:00:00.591Z · LW · GW

I am also still looking for a reference on that one...

Comment by Wuschel Schulz (wuschel-schulz) on The LessWrong 2018 Book is Available for Pre-order · 2021-02-18T09:07:38.052Z · LW · GW

You could make it even more accessible if credit card were not the only payment option. In some places (like here in Germany) having a credit card is somewhat less common. Adding PayPal would be nice.

Comment by Wuschel Schulz (wuschel-schulz) on Hammertime Final Exam · 2020-03-27T20:40:18.616Z · LW · GW

Rationality framework: The Greenland effect:

Remember the first time you looked at a world map: one thing that maybe caught your eye was Greenland, that huge island, almost as big as Africa, up there in the north.

Now remember the first time you took a closer look at a globe (or a non-Mercator projection, for that matter). Greenland is a bit disappointing, isn't it? Doesn't seem to be THAT big at all.

Now remember that time in geography class when you held presentations on the countries of Europe: in comparison to those folks, the icy plains of Denmark's pet island seem gigantic. Now, not as gigantic as Africa, but still…

Depending on how much time you spend with geography, I can well imagine that cycle going back and forth some more.

What is important here is the following: even though your knowledge about the size of Greenland only ever increased over your life, your emotional attitude ("oh, quite big" vs. "nah, it's an island, bruh") switched around quite a lot in both directions.

Now, in the case of Greenland this is all well and fine, but in other scenarios it can lead to pseudo-disagreements or confused arguments: beware the Greenland effect. Beware that your emotional disposition towards an issue often reflects your last update on that issue (which should vary unpredictably) and not your overall beliefs about the issue (which should converge).

Examples of Greenland effects:

"The church is good, it teaches me about God" -> "God is fake, the priest must be a moron, the world lied to me" -> "These religious people are actually using a lot of their resources to help people in need" -> "All those religious charities are so ineffective." …

"I can't stop this project now, I have already invested so many resources" -> "I know about sunk cost bias. I will abandon my projects whenever they seem to be a bad idea" -> "I should carry through projects despite having downs: sunk cost faith." …

Comment by Wuschel Schulz (wuschel-schulz) on Focusing · 2020-03-17T14:02:11.300Z · LW · GW

Ok, I'm kind of new to the whole LessWrong business, so can someone please explain to me:

What is your thing with Jordan Peterson? I get that he is a psychologist and so on, but there are a lot of people out there who not only take his 101 life advice to heart, but also his political .... ideas?

From the way he is quoted in this sequence, and the fact that there seems to be no discussion about this in the comments, you seem to see him as a legitimate expert on rationality? Or do you separate between his psychology and his politics? Or does no one know him here except alkjash? I'd love to hear from you all!

Comment by Wuschel Schulz (wuschel-schulz) on The Adventure: a new Utopia story · 2019-09-24T21:29:16.930Z · LW · GW

I laughed so hard at the "...and then, finally, he truly knew what it was like to be a bat..." part. Every time a philosophy course at my uni gets to the topic of qualia, someone brings up exactly the same example of the difference between knowing how I would feel being a bat and how the bat feels... ...that reference came so unexpectedly.

Otherwise also nice story, and interesting universe. Thanks for posting it.