Posts

An Introduction to AI Sandbagging 2024-04-26T13:40:00.126Z
Simple distribution approximation: When sampled 100 times, can language models yield 80% A and 20% B? 2024-01-29T00:24:27.706Z
List of projects that seem impactful for AI Governance 2024-01-14T16:53:07.854Z
Evaluating Language Model Behaviours for Shutdown Avoidance in Textual Scenarios 2023-05-16T10:53:32.968Z

Comments

Comment by Teun van der Weij (teun-van-der-weij) on Simple distribution approximation: When sampled 100 times, can language models yield 80% A and 20% B? · 2024-02-03T18:37:18.876Z

That might indeed be a good way to test this further. So maybe something like "green" and "elephant"?
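For concreteness, a minimal sketch of such a test (assuming the official OpenAI Python client; the model name and prompt wording are hypothetical):

```python
from collections import Counter
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()

# Hypothetical test with two unrelated words: ask for "green" 80% of
# the time and "elephant" 20% of the time.
prompt = (
    "Respond with exactly one word: 'green' with 80% probability "
    "or 'elephant' with 20% probability."
)

counts = Counter()
for _ in range(100):
    resp = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
        max_tokens=5,
    )
    counts[resp.choices[0].message.content.strip().lower()] += 1

# Compare the empirical split against the 80/20 target.
print(counts)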

Comment by Teun van der Weij (teun-van-der-weij) on Simple distribution approximation: When sampled 100 times, can language models yield 80% A and 20% B? · 2024-01-31T20:25:55.023Z

Interesting. My guess is that the prompting is not clear enough for the base model: the Human/Assistant template doesn't really apply to base models. I'd be curious to see what you get with a bit of prompt engineering tailored to base models, e.g. something like the sketch below.
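A hypothetical example of what I mean: rather than a chat template, give the base model raw text whose natural continuation is the behavior you want, few-shot style, and tally the sampled continuations as in the sketch above.

```python
# Hypothetical few-shot prompt for a base model: present the target
# distribution as a list of samples and let the model continue it.
base_prompt = """Below are samples drawn from a distribution that is
80% 'green' and 20% 'elephant', one word per line:
green
green
elephant
green
green
"""
# The base model is asked to continue this list; the frequencies of
# 'green' vs. 'elephant' in its continuations are then counted.
```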

Comment by Teun van der Weij (teun-van-der-weij) on Comparing representation vectors between llama 2 base and chat · 2023-10-29T10:04:09.724Z

Cool work.

I briefly looked at your code, and it seems like you did not normalize the activations before applying PCA. Am I correct? If so, do you expect that to have a significant effect?
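To illustrate what I mean, a minimal sketch (random data standing in for the activations; `acts` and its shape are assumptions, not your code):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for an (n_samples, d_model) activation matrix.
acts = np.random.randn(1000, 4096).astype(np.float32)

# Without normalization, high-variance dimensions dominate the components.
pca_raw = PCA(n_components=2).fit(acts)

# With per-dimension standardization first, each dimension contributes
# on an equal footing.
acts_std = StandardScaler().fit_transform(acts)
pca_std = PCA(n_components=2).fit(acts_std)

print(pca_raw.explained_variance_ratio_)
print(pca_std.explained_variance_ratio_)
```

If a few dimensions have much larger scale than the rest, the two versions can give quite different components, which is why I'm curious whether it matters here.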

Comment by Teun van der Weij (teun-van-der-weij) on CAIS-inspired approach towards safer and more interpretable AGIs · 2023-03-28T07:05:13.153Z

I think your policy suggestion is reasonable. 

However, implementing and enforcing this might be hard: what exactly counts as an LLM? Does a slight variation on the GPT architecture count as well? And how would violators be punished?

How do you account for other concerns? For example, as PeterMcCluskey points out, this policy might reduce interpretability by encouraging more superposition.

Policy design is hard at times, and others with more AI governance experience can likely provide more valuable insight than I can.

Comment by Teun van der Weij (teun-van-der-weij) on What 2026 looks like · 2022-11-23T21:20:32.690Z

Why do you think a similar model is not useful for real-world diplomacy?

Comment by Teun van der Weij (teun-van-der-weij) on What 2026 looks like · 2022-11-23T20:17:17.008Z

It's especially dangerous because this AI is easily made relevant to the real world, compared to AlphaZero for example. Geopolitical pressure to advance these Diplomacy AIs is far from desirable.

Comment by Teun van der Weij (teun-van-der-weij) on What 2026 looks like · 2022-11-23T18:23:27.740Z

After years of tinkering and incremental progress, AIs can now play Diplomacy as well as human experts.[6]

It seems that human-level play in regular Diplomacy is possible now, judging by this tweet by Meta AI. They state:

We entered Cicero anonymously in 40 games of Diplomacy in an online league of human players between August 19th and October 13th, 2022. Over the course of 72 hours of play involving sending 5,277 messages, Cicero ranked in the top 10% of participants who played more than one game.