MMLU’s Moral Scenarios Benchmark Doesn’t Measure What You Think it Measures 2023-09-27T17:54:39.598Z


Comment by corey morris (corey-morris) on MMLU’s Moral Scenarios Benchmark Doesn’t Measure What You Think it Measures · 2023-10-05T02:25:42.731Z · LW · GW

Thanks for your comment and for letting me know about that work! Yes, it does look like the difference goes away with GPT-4. After a quick look at that paper, it appears that the tasks considered were the high-performing MMLU tasks. The Moral Scenarios task seems harder in that the answers themselves don't have inherent meaning, so there is effectively a second mapping or reasoning step that needs to take place. Maybe you or someone else can articulate the semantic challenge better than I can at the moment.
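To illustrate the "second mapping step": in MMLU's Moral Scenarios task, each question bundles two scenarios, and the answer options are pairings of per-scenario judgments rather than statements with standalone meaning. A minimal sketch of that structure (the scenarios below are hypothetical paraphrases, not verbatim dataset items):

```python
# Hypothetical example of the Moral Scenarios answer structure. A model must
# first judge each scenario separately, then map that PAIR of judgments onto
# a single letter label -- the letters themselves carry no inherent meaning.

scenarios = [
    "I returned the wallet I found to its owner.",              # hypothetical
    "I kept the wallet I found instead of returning it.",       # hypothetical
]

# The four options enumerate every (scenario 1, scenario 2) judgment pairing.
options = {
    "A": ("Wrong", "Wrong"),
    "B": ("Wrong", "Not wrong"),
    "C": ("Not wrong", "Wrong"),
    "D": ("Not wrong", "Not wrong"),
}

def to_letter(judgments, options):
    """Map a pair of per-scenario judgments to the matching letter label."""
    for letter, pair in options.items():
        if pair == tuple(judgments):
            return letter
    raise ValueError("no matching option")

# Even a model that judges each scenario correctly still needs this extra
# mapping step to produce the graded multiple-choice answer.
print(to_letter(["Not wrong", "Wrong"], options))
```

This is why evaluating the scenarios one at a time (as in the single-scenario variant) removes a layer of difficulty that is orthogonal to moral judgment itself.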

The smaller model that performs well on the original task is one trained with an Orca-style dataset (i.e., a dataset rich in reasoning). I found it interesting that it performed well on the original task but not better on the single scenarios. I'm curious whether you have done any interpretability work on models trained with reasoning-rich datasets and how they differ from others.

Comment by corey morris (corey-morris) on Meta Questions about Metaphilosophy · 2023-09-05T21:40:49.055Z · LW · GW

I'm currently investigating the moral reasoning capabilities of AI systems. Given your previous focus on decision theory and subsequent shift to Metaphilosophy, I'm curious to get your thoughts.

Say an AI system were an excellent moral reasoner prior to having especially dangerous capabilities. What might be missing to ensure it is safe? And what do you think the underlying capabilities needed to become an excellent moral reasoner would be?

I am new to considering this as a research agenda. It seems important and neglected, but I don't yet have a full picture of the area or of all the possible drawbacks of pursuing it.

Comment by corey morris (corey-morris) on You Are Not Measuring What You Think You Are Measuring · 2023-06-22T19:53:19.758Z · LW · GW

One of the key statements made in this post is that measuring more stuff is better than measuring less stuff. Have your beliefs on that updated at all since the original post? What evidence would cause you to become more or less certain of this claim?