Posts

What problem would you like to see Reinforcement Learning applied to? 2020-07-08T02:40:17.146Z

Comments

Comment by Julian Schrittwieser (julian-schrittwieser) on chinchilla's wild implications · 2022-07-31T12:29:15.615Z · LW · GW

An important distinction here is that the number of tokens a model was trained for should not be confused with the number of tokens in a dataset: if each token is seen exactly once during training then it has been trained for one "epoch".

In my experience scaling continues for quite a few epochs over the same dataset; only when the model has more parameters than there are tokens in the dataset, and is trained for >10 epochs, does overfitting kick in and scaling break down.
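In other words, "epochs" is just the ratio of the two. A quick illustration with made-up numbers (not taken from any particular model or paper):

```python
# Illustrative numbers only: epochs = tokens seen during training / unique tokens in the dataset.
tokens_trained = 1.4e12    # hypothetical total tokens the model was trained for
dataset_tokens = 3.5e11    # hypothetical number of unique tokens in the dataset

epochs = tokens_trained / dataset_tokens
print(f"{epochs:.1f} epochs")   # 4.0 epochs: each token was seen ~4 times on average
```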

Comment by Julian Schrittwieser (julian-schrittwieser) on REPL's: a type signature for agents · 2022-02-17T14:03:18.549Z · LW · GW

Could you explain how this differs from the standard Reinforcement Learning formulation? (See e.g. http://incompleteideas.net/book/first/ebook/node28.html for an introduction.)
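For reference, the standard formulation is just the agent-environment interaction loop. A minimal runnable sketch (the toy environment and random policy below are placeholders for illustration, not any particular library's API):

```python
import random

class ToyEnv:
    """Trivial episodic environment: the episode ends once the state reaches 5."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action                      # action in {0, 1}
        reward = 1.0 if self.state >= 5 else 0.0
        done = self.state >= 5
        return self.state, reward, done

env = ToyEnv()
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])                # the "policy" picks an action
    obs, reward, done = env.step(action)          # environment transitions and emits a reward
    episode_return += reward
print("episode return:", episode_return)
```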

Comment by Julian Schrittwieser (julian-schrittwieser) on OpenAI Solves (Some) Formal Math Olympiad Problems · 2022-02-04T10:05:30.327Z · LW · GW

This is indeed amusing. In reality, the action space can be taken to be of size 256 (the number of possible byte values), with the number of bytes in the solution as the episode length. Note also that 256 is an upper bound: not all byte values are valid at all points, and most of the time only the 128 ASCII values are used. Using a tokenizer, as is standard in language models, simply reduces the episode length by increasing the action space; it does not change the size of the overall state space.
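A quick check of that last point: grouping k bytes into one token multiplies the per-step choices but divides the number of steps, leaving the total number of possible sequences unchanged. The solution length and token size below are arbitrary illustrative values.

```python
import math

solution_bytes = 120      # hypothetical solution length in bytes
token_size = 4            # hypothetical tokenizer: 4 bytes per token

# log10 of the number of possible sequences under each action space
byte_level  = solution_bytes * math.log10(256)                                  # 256^L
token_level = (solution_bytes // token_size) * math.log10(256 ** token_size)    # (256^k)^(L/k)

print(byte_level, token_level)   # identical: ~289 orders of magnitude either way
```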

This also means that, despite their claims, the search space for the example solutions shown on their website is similar or smaller than for board games such as Chess and Go :D

Comment by Julian Schrittwieser (julian-schrittwieser) on EfficientZero: How It Works · 2021-12-05T12:10:16.345Z · LW · GW

Nice summary! I agree, this is an interesting paper :)

But learning to be predictive of such random future states seems like it falls subject to exactly the same problem as learning to be predictive of future observations: you have no guarantee that EfficientZero will be learning relevant information, which means it could be wasting network capacity on irrelevant information. There's a just-so story you could tell where adding this extra predictive loss results in worse end-to-end behavior because of this wasted capacity, just like there's a just-so story where adding this extra predictive loss results in better end-to-end behavior because of faster training. I'm not sure why one turned out to be true rather than the other.

This mostly depends on the size of your dataset. For very small datasets (100k frames here), the network is overparameterized and can easily overfit; adding the consistency loss provides regularisation that can prevent this.

For larger datasets (e.g. the standard 200 million frame setting in Atari) you'll see less overfitting, and I would expect the impact of the consistency loss to be much smaller, possibly negative. The paper doesn't include ablations for this, but I might test it if I have time.

To put it differently: the less data you have for your real objective, the more you can benefit from auxiliary losses and regularisation.
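As a simplified sketch of what such an auxiliary term looks like: a SimSiam-style consistency loss pulls the latent state predicted by the dynamics model towards the encoding of the actually observed next frame, with the target branch treated as a constant (stop-gradient). The names, values, and cosine-similarity form below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cosine_consistency_loss(predicted_latent, target_latent):
    """Negative cosine similarity between predicted and target latent states.
    In a real training setup the target comes from encoding the observed next
    frame and receives no gradient (stop-gradient)."""
    p = predicted_latent / np.linalg.norm(predicted_latent)
    t = target_latent / np.linalg.norm(target_latent)
    return -float(np.dot(p, t))

# Toy example: total loss = RL loss + weighted auxiliary consistency loss.
rl_loss = 0.8                                        # placeholder value
predicted = np.random.randn(256)                     # latent predicted by the dynamics model
target = predicted + 0.1 * np.random.randn(256)      # encoding of the observed next frame
consistency_weight = 0.5                             # hypothetical weighting

total_loss = rl_loss + consistency_weight * cosine_consistency_loss(predicted, target)
print(total_loss)
```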

Comment by Julian Schrittwieser (julian-schrittwieser) on Omicron Post #3 · 2021-12-03T09:35:16.452Z · LW · GW

Given that Omicron had already spread through community transmission in the Netherlands (and other European countries) before the reports from South Africa, yet is still not as widespread in Europe, does that suggest it's not that transmissible after all?

Comment by Julian Schrittwieser (julian-schrittwieser) on Parameter counts in Machine Learning · 2021-08-31T20:31:17.740Z · LW · GW

The difference in compute between AlexNet and AlphaZero is because for AlexNet you are only counting the FLOPs used during training, while for AlphaZero you are counting both the training and the self-play data generation (which does 800 forward passes per move × ~200 moves to generate each game).

If you were to compare supervised training numbers for both (e.g. training on human chess or Go games) then you'd get much closer.
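Back-of-the-envelope, using the numbers above (and glossing over the fact that a training step on a position costs roughly 2-3x a plain forward pass):

```python
# Rough accounting of why self-play dominates AlphaZero's compute.
forwards_per_move = 800        # MCTS simulations per move, as above
moves_per_game = 200           # approximate game length

selfplay_forwards_per_game = forwards_per_move * moves_per_game    # 160,000 network evaluations per game
training_positions_per_game = moves_per_game                       # each position is one training example

print(selfplay_forwards_per_game / training_positions_per_game)    # ~800x more inference than training data generated
```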

Comment by Julian Schrittwieser (julian-schrittwieser) on How much compute was used to train DeepMind's generally capable agents? · 2021-07-30T20:05:36.676Z · LW · GW

The TOPS numbers from the wiki page seem wrong. TPUv1 had 92 TOPS (uint8); for TPUv3 the "90 TOPS" refers to a single chip, but I'm fairly sure that when the paper says "8 TPUv3s" they mean 8 cards, as that's how they are available on Google Cloud (1 card = 4 chips).
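Under that reading, a quick recalculation using the per-chip figure above:

```python
tops_per_chip = 90       # TPUv3, per the figure quoted above
chips_per_card = 4
num_cards = 8            # reading "8 TPUv3s" as 8 cards

print(num_cards * chips_per_card * tops_per_chip)   # 2880 TOPS, vs 720 if "8 TPUv3s" meant 8 chips
```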

Comment by Julian Schrittwieser (julian-schrittwieser) on How much compute was used to train DeepMind's generally capable agents? · 2021-07-30T20:02:37.555Z · LW · GW

Only Anakin actually runs the environment on the TPU, and this only works for pretty simple environments (basically: can you implement it in JAX?). Sebulba runs environments on the host, which is what would have been done for this paper too (I have no idea whether they used Sebulba or had a different setup).

This doesn't really matter though, because for these simulated environments it's fairly simple to fully utilize the TPUs by running more (remote) environments in parallel.
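To make the Anakin point above concrete: "implement it in JAX" means the environment step is a pure function of its state, so it can be jit-compiled and vmapped across many parallel environments directly on the accelerator. A minimal toy sketch (the environment here is made up purely for illustration):

```python
import jax
import jax.numpy as jnp

# Toy "environment": state is a scalar position, the action shifts it, reward is -|position|.
# Because it is a pure function of (state, action), it can run on the TPU/GPU itself.
def env_step(state, action):
    next_state = state + action
    reward = -jnp.abs(next_state)
    return next_state, reward

# Step 1024 environments in parallel with a single vmapped, jit-compiled call.
batched_step = jax.jit(jax.vmap(env_step))

states = jnp.zeros(1024)
actions = jnp.ones(1024)
states, rewards = batched_step(states, actions)
print(rewards.shape)   # (1024,)
```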