## Posts

Understanding the tensor product formulation in Transformer Circuits 2021-12-24T18:05:53.697Z
How should my timelines influence my career choice? 2021-08-03T10:14:33.722Z

Comment by Frederik on Understanding the tensor product formulation in Transformer Circuits · 2021-12-28T14:12:35.906Z · LW · GW

Ah yes that makes sense to me. I'll modify the post accordingly and probably write it in the basis formulation.

ETA: Fixed now; the computation takes a tiny bit longer but is hopefully still readable to everyone.

Comment by Frederik on Should I delay having children to take advantage of polygenic screening? · 2021-12-19T11:33:46.679Z · LW · GW

Seems like this could be circumvented relatively easily by freezing gametes now.

Comment by Frederik on Important ML systems from before 2012? · 2021-12-18T15:57:19.017Z · LW · GW

Are you asking exclusively about "Machine Learning" systems, or also GOFAI? E.g. I notice that you didn't include ELIZA in your database, but that was a hard-coded program, so maybe it doesn't match your criteria.

Comment by Frederik on DL towards the unaligned Recursive Self-Optimization attractor · 2021-12-18T15:52:07.721Z · LW · GW

Trying to summarize your viewpoint, lmk if I'm missing something important:

•  Training self-organizing models on multi-modal input will lead to increased modularization and in turn to more interpretability
• Existing interpretability techniques might more or less transfer to self-organizing systems
• There is low-hanging fruit in applied interpretability that we could pick, should we need it, in order to understand self-organizing systems
• (Not going into the specific proposals for the sake of brevity and clarity)

Comment by Frederik on DL towards the unaligned Recursive Self-Optimization attractor · 2021-12-18T15:47:34.487Z · LW · GW

Out of curiosity, are you willing to share the papers you improved upon?

Comment by Frederik on DL towards the unaligned Recursive Self-Optimization attractor · 2021-12-18T11:09:18.617Z · LW · GW

> Interpretability will fail - future DL descendant is more of a black box, not less

It certainly makes interpretability harder, but the possible gain also seems larger, making it a riskier bet overall. I'm not convinced that it decreases the expected value of interpretability research, though. Do you have a good intuition for why the increased risk of failure would outweigh the larger potential gain?

> IRL/Value Learning is far more difficult than first appearances suggest, see #2

That's not immediately clear to me. Could you elaborate?

Comment by Frederik on DL towards the unaligned Recursive Self-Optimization attractor · 2021-12-18T10:53:59.952Z · LW · GW

Small nitpick: I would cite The Bitter Lesson in the beginning.

Comment by Frederik on The Case for Radical Optimism about Interpretability · 2021-12-18T08:46:04.343Z · LW · GW

I can only speculate, but the main researchers are now working on other things, e.g. at Anthropic. As to why they switched, I don't know. Maybe they weren't making progress fast enough, or Anthropic's mission seemed more important?

However, at least Chris Olah believes this is still a tractable and important direction; see his recent RFP for Open Phil.

Comment by Frederik on Perishable Knowledge · 2021-12-18T08:38:37.192Z · LW · GW

I like the framing of perishable vs non-perishable knowledge and I like that the post is short and concise.

However, after reading this I'm left feeling "So what now?" and would appreciate some more actionable advice or tools of thought. What I got out so far is:

1. Things that have been around for longer are more likely to stay around longer (seems like a decent prior)
2. Keep tabs on a few major event categories and dump the rest of the news cycle (checks out -- not sure how that would work as a categorical imperative, but seems like the right choice for an individual)

I think the concept can be applied pretty broadly. Some more ideas:

• when learning about a new field, in general, go for textbooks rather than papers
• if you use spaced repetition, regularly ask yourself whether the cards you are studying have passed their shelf life --> this can help reduce frustration/annoyance/boredom when reviewing cards
• some skills have an extremely long shelf life, and they seem to overlap with those that compound:
  • learning basic life admin skills
  • learning how to take care of your mental health (e.g. CBT methods)
  • learning how to learn
  • basic social skills

I'm sure there is much more here.

Comment by Frederik on EfficientZero: How It Works · 2021-11-26T17:35:45.537Z · LW · GW

In appendix A.6 they state "To train an Atari agent for 100k steps, it only needs 4 GPUs to train 7 hours." I don't think they provide a summary of the total number of parameters. Scanning the described architecture, though, it does not look like a lot -- almost surely < 1B.

Comment by Frederik on AI Safety Needs Great Engineers · 2021-11-24T21:35:53.717Z · LW · GW

Oh yes, I'm aware that he expressed this view. That's different, however, from it being objectively plausible (whatever that means). I have the feeling we're talking past each other a bit. I'm not saying "no one reputable thinks OpenAI is net-negative for the world"; I'm just pointing out that it's not as clear-cut as your initial comment made it seem to me.

Comment by Frederik on AI Safety Needs Great Engineers · 2021-11-24T15:58:28.814Z · LW · GW

No, I take specific issue with the term 'plausibly'. I don't have a problem with the term 'possibly'. Using 'plausibly' already presumes a judgement about the outcome of the discussion, which I did not want to get into (mostly because I don't have a strong view on this yet). You could of course argue that that's false balance, and if so I would like to hear your argument (but maybe not under this particular post, if people think it's too off-topic).

ETA: if this is just a disagreement about our definitions of the term 'plausibly' then nevermind, but your original comment reads to me like you're taking a side.

Comment by Frederik on AI Safety Needs Great Engineers · 2021-11-24T12:56:54.778Z · LW · GW

I don't like your framing of this as "plausible", but I don't want to argue that point.

Afaict it boils down to whether you believe in (parts of) their mission, e.g. interpretability of large models, and how much that weighs against the marginal increase in race dynamics, if any.

Comment by Frederik on My ML Scaling bibliography · 2021-10-24T11:14:36.388Z · LW · GW

Great stuff, thanks!

Is there a reason you're not including https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play? Is it not explicit enough in terms of being a scaling paper?

Comment by Frederik on [deleted post] 2021-09-25T19:41:08.826Z

Background: AI master's student, some practice in RL

I don't think there is a fundamental reason why we can't; it's rather that no one has done it. I don't know a definitive answer as to why, but here are some options:

• too obscure ('no one has thought of it', or 'no one thought it was a good idea; it's only assembly-like code after all')
• high barrier to entry (you need to write an RL environment that you can query fast, and you need a lot of compute)
  • this makes it harder for individuals or small teams, and larger players like DeepMind and OpenAI might have different priorities
• now that we have Codex (and soon™ its newer, allegedly much better version), there might not be any (economic or scientific) reason to do this
• What's the value you'd get out of a code wars expert model? I glanced at the Wikipedia page. How would you convert its outputs to a useful program that does more than gobble up all your memory?

Comment by Frederik on How should my timelines influence my career choice? · 2021-08-04T12:25:17.618Z · LW · GW

True, it's always good to remind oneself of a broader option space.

Could you elaborate on what you mean by 'working at orgs', since engineering would meet that definition? Or do you explicitly mean roles other than engineering, such as ops or management?

I don't think I'd be a good fit for computer security, both in terms of pre-existing skills and interest, but I get your general point. (My undergrad was in physics, not CS, so I'm lacking quite a few of the traditional CS skills, except in the more theoretical subjects.) Do you have a pointer to resources discussing the most needed skill sets in this broader cause area?

Comment by Frederik on How should my timelines influence my career choice? · 2021-08-04T01:03:25.695Z · LW · GW

Thanks! Yeah, that's a good point worth mulling over. I guess making that assertion would hinge on the marginal improvement in EV in the doomed scenario. I don't necessarily see things as completely, hopelessly doomed in a 15-20-years-to-AGI world. But I am also uncertain as to which role is more useful in the short-timeline world, aside from an engineer being able to contribute earlier. In the medium-timeline world it seems to me like the marginal researcher has higher EV.

So if I were completely uncertain, i.e. 50/50, about which one is better in a short-timeline world, then becoming a researcher would seem like the safer choice.

Comment by Frederik on How should my timelines influence my career choice? · 2021-08-03T11:41:45.430Z · LW · GW

In terms of flexibility, a PhD seems to score higher, since it would be relatively straightforward to stop it and do engineering instead, whereas the reverse would require doing the full 4-5 years. I suppose it is also easier to automate an engineering job than a research job.

Comment by Frederik on Expected utility and repeated choices · 2019-12-28T08:20:01.609Z · LW · GW

Well, if you assume these agents do not employ time-discounting, then you indeed cannot compare trajectories, since all of them might have infinite utility (and be computationally intractable, as you say) if they don't terminate.

We do run into the same problem if we assume realistic action spaces, i.e. consider all the things we could possibly do, as there are too many even for a single time step.

RL algorithms "solve" this by working with constrained action spaces and discounting future utility, and also by often having terminating trajectories. Humans also work with (highly) constrained action spaces and have strong time discounting [citation needed], and every model of a rational human should take that into account.
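
That discounting hack can be made concrete: with a discount factor gamma < 1, even an infinite, non-terminating reward stream has a finite discounted value, which is what makes trajectories comparable again. A minimal sketch (the constant reward stream is purely illustrative):

```python
def discounted_return(rewards, gamma=0.99):
    # Sum of gamma^t * r_t over a (possibly truncated) trajectory.
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A constant reward of 1 forever is worth 1 / (1 - gamma) = 100 here,
# so truncating the infinite sum far enough out approximates it well:
approx = discounted_return([1.0] * 10_000, gamma=0.99)
assert abs(approx - 100.0) < 1e-6
```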

I admit those points are more like hacks we've come up with for practical situations, but I suppose the computational intractability is a reason why we can't already have all the nice things ;-)

Comment by Frederik on Expected utility and repeated choices · 2019-12-27T21:49:31.162Z · LW · GW

The intuitive result you would expect only holds for utility functions which are linear in x (I believe), since we could then apply the utility function at each step and it would yield the same value as if applied to the whole amount.

Another case would be if you were to receive your utility immediately after playing each game (like in a reinforcement learning algorithm). In those cases U is also applied to each outcome separately and would yield the result you would expect.

Also: (b) has a better EV in terms of raw $, and due to the law of large numbers we would expect the actual amount of money won by repeatedly playing (b) to approach that EV. So for many games we should expect any monotonically increasing utility function to favor (b) over (a) as the number of games approaches infinity. The only reason your U favors (a) over (b) for a single game is that it is risk-averse, i.e. sub-linear in x. As the number of games approaches infinity, the risk of choosing to play (b) becomes less and less, until it is the choice between (essentially) winning 0.5$ for sure or 0.67$ for sure in every game. If you think about it in these terms, it becomes more intuitive why the behaviour you observed is reasonable.

In other words: Yes! You do have to think about the number of games you play if your utility function is not linear (or you have a strong discount factor).
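
This effect can be checked with a quick simulation; the payoffs below are hypothetical stand-ins for games (a) and (b) (a sure 0.5$ vs. 1$ with probability 2/3), with a square-root utility standing in for a generic risk-averse U:

```python
import math
import random

def u(x):
    # A concave (risk-averse) utility function, sub-linear in x.
    return math.sqrt(x)

# Hypothetical stand-ins for the two games:
# (a) win 0.5$ for sure; (b) win 1$ with probability 2/3.
def play_a():
    return 0.5

def play_b(rng):
    return 1.0 if rng.random() < 2 / 3 else 0.0

rng = random.Random(0)

# Single game: the concave u favours the sure thing (a) ...
eu_a_single = u(play_a())            # sqrt(0.5), about 0.71
eu_b_single = (2 / 3) * u(1.0)       # about 0.67
assert eu_a_single > eu_b_single

# ... but over many repetitions the law of large numbers pushes b's
# total winnings toward its (higher) expected value, so the same u
# applied to the total winnings now favours b.
n_games, n_trials = 1000, 2000
eu_a_total = u(n_games * 0.5)        # sqrt(500), about 22.4
eu_b_total = sum(
    u(sum(play_b(rng) for _ in range(n_games)))
    for _ in range(n_trials)
) / n_trials                         # roughly sqrt(666.7), about 25.8
assert eu_b_total > eu_a_total
```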

Comment by Frederik on Deducing Impact · 2019-09-26T21:38:53.830Z · LW · GW

While I agree that using percentages would make impact more comparable between agents and timesteps, it also leads to counterintuitive results (at least counterintuitive to me).

Consider a sequence of utilities at times 0, 1, 2 with u_0 = 100, u_1 = 1, and u_2 = 0.

Now the drop from u_1 to u_2 would be more dramatic (a decrease of 100%) compared to the drop from u_0 to u_1 (a decrease of 99%) if we were using percentages. But I think the agent should 'care more' about the larger drop in absolute utility (i.e. spend more resources to prevent it from happening), and I suppose we might want impact to correspond to something like 'how much we care about this event happening'.
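
A toy comparison of the two impact measures (the utility values are hypothetical, chosen to match the percentages above):

```python
u0, u1, u2 = 100.0, 1.0, 0.0  # hypothetical utilities at t = 0, 1, 2

def relative_drop(a, b):
    # Fraction of current utility lost in the transition a -> b.
    return (a - b) / a

def absolute_drop(a, b):
    return a - b

# The percentage measure rates the second drop as worse (100% vs 99%) ...
assert relative_drop(u1, u2) > relative_drop(u0, u1)
# ... even though the first drop destroys far more utility (99 vs 1).
assert absolute_drop(u0, u1) > absolute_drop(u1, u2)
```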