Posts

Improving Model-Written Evals for AI Safety Benchmarking 2024-10-15T18:25:08.179Z

Comments

Comment by Sunishchal Dev (sunishchal-dev) on Bounty: Diverse hard tasks for LLM agents · 2023-12-26T06:38:42.212Z · LW · GW

Thanks, this is helpful!

I noticed a link in the template.py file that I don't have access to. I imagine this repo is internal only, so could you provide the list of permissions as a file in the starter pack? 

    # search for Permissions in https://github.com/alignmentrc/mp4/blob/v0/shared/src/types.ts

Comment by Sunishchal Dev (sunishchal-dev) on Bounty: Diverse hard tasks for LLM agents · 2023-12-24T02:38:51.556Z · LW · GW

Thanks for the detailed instructions for the program! Just a few clarifications before I dive in:

  1. The README file's Airtable links for task idea & specification submission seem to be the same. Did you mean to paste a different link for task ideas?
  2. Are the example task definitions in the PDF all good candidates for implementation? Is there any risk of doing duplicate work if someone else chooses to do the same implementation as me?
  3. If I want to do an implementation that isn't in the examples list, is it a good idea to first submit it as an idea and wait for approval before working on the specification & implementation? 
  4. Are we allowed to use an LLM to automatically score a task? This seems useful for tasks with fuzzy outputs like answers to research questions. If so, would it need to be a locally hosted LLM like Llama? I imagine using an API-based model like GPT-4 would introduce an external dependency whose reliability could change over time, and it might also leak task data. (A rough sketch of the kind of scoring I have in mind is below.)
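
To make that last question concrete, here is a minimal sketch of the kind of automatic scoring I have in mind. It assumes a locally hosted judge model served through an OpenAI-compatible endpoint (e.g. vLLM or llama.cpp's server) at http://localhost:8000/v1; the endpoint URL, model name, and rubric text are placeholders of my own, not anything from the task standard.

    # Sketch: grade a free-form answer against a reference using a local judge model.
    # Assumes an OpenAI-compatible server is running locally (e.g. vLLM or llama.cpp);
    # the base_url, model name, and rubric below are placeholders.
    import json
    import openai

    client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    RUBRIC = (
        "Score the candidate answer from 0 to 10 for agreement with the reference "
        "answer. Respond only with JSON of the form "
        '{"score": <int>, "reason": <short string>}.'
    )

    def score_answer(question: str, reference: str, candidate: str) -> dict:
        """Ask the local judge model to grade a fuzzy answer against a reference."""
        response = client.chat.completions.create(
            model="llama-3-8b-instruct",  # placeholder: whatever model the local server hosts
            messages=[
                {"role": "system", "content": RUBRIC},
                {
                    "role": "user",
                    "content": (
                        f"Question: {question}\n"
                        f"Reference: {reference}\n"
                        f"Candidate: {candidate}"
                    ),
                },
            ],
            temperature=0,  # deterministic scoring for reproducibility
        )
        return json.loads(response.choices[0].message.content)

Keeping the judge local would avoid the reliability and data-leakage concerns above, at the cost of having to host the model alongside the task.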