Why do you consider it unlikely that companies could (or would) fish the questions out of their API logs?
Thank you for your comment!
Not sure I agree with you about which way the tradeoff shakes out. To me it seems valuable that people outside the main labs have a clear picture of the capabilities of the leading models, and how that evolves over time, but I see your point that it could also encourage or help capabilities work, which is not my intention.
I’m probably guilty of trying to make the benchmark seem cool and impressive in a way that may not be helpful for what I actually want to achieve with this.
I will think more about this, and read what others have been thinking about it. At the very least I will keep your perspective in mind going forward.
The LLMs are presented with the ML task and write Python code to solve it. This Python code is what runs in the isolated Docker container with 12 GB of memory.
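For concreteness, a minimal sketch of what such a sandboxed run could look like. The `docker run` flags, paths, image, and function names here are my illustrative assumptions, not the actual pipeline:

```python
import subprocess

def sandbox_command(script_path: str, mem_limit: str = "12g") -> list[str]:
    # Build a docker invocation that runs a model-written script in an
    # isolated container: memory-capped, no network, script mounted read-only.
    # (Image and mount paths are hypothetical placeholders.)
    return [
        "docker", "run", "--rm",
        "--memory", mem_limit,                         # e.g. 12 GB cap
        "--network", "none",                           # no network access
        "-v", f"{script_path}:/task/solution.py:ro",   # read-only mount
        "python:3.11",
        "python", "/task/solution.py",
    ]

def run_sandboxed(script_path: str, timeout_s: int = 600) -> str:
    # Enforce a wall-clock limit (e.g. 10 minutes) on the whole run.
    result = subprocess.run(
        sandbox_command(script_path),
        capture_output=True, text=True, timeout=timeout_s,
    )
    return result.stdout
```

The memory cap and network isolation are the two properties the text above actually implies; everything else is sketch.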
So the LLMs themselves are not run on the TITAN V; they are mostly called through an API. Although I did in fact run a number of the LLMs locally through Ollama, just not on the TITAN V server, but on a larger one.
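For anyone curious what the local runs could look like, here is a rough sketch of calling a locally served model through Ollama's HTTP API (assuming the default endpoint on port 11434 and a non-streaming request; the model name is just an example, not necessarily one used in the project):

```python
import json
import urllib.request

def ollama_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "llama3",
                    host: str = "http://localhost:11434") -> str:
    # POST the prompt to the local Ollama server and return the
    # generated text from the JSON response.
    data = json.dumps(ollama_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This requires an Ollama server already running locally with the model pulled.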
My guess is it's <1 hour per task assuming just Copilot access, and much less if you're allowed to use e.g. o1 + Cursor in agent mode. That being said, I think you'd want to limit humans to comparable amounts of compute to get comparable numbers, which seems a bit trickier to make happen.
I guess I was thinking that the human baseline should be without LLMs, because otherwise I could just forward the prompt to the best LLM, see what it did, and perhaps improve upon it, which would put the human level always at or above the best LLM.
Then again, this is not how humans typically work now, so it's unclear what a "fair" comparison is. I guess it depends on what the human baseline is supposed to represent, and you have probably thought a lot about that question at METR.
Is there a reason you can't do one of the existing tasks, just to get a sense of the difficulty?
I could, but it would not really be a fair comparison, since I have seen many of the LLMs' solutions and have seen what works.
Doing a fresh task I made myself would not be totally fair either, since I would know more about the data than they do, but it would definitely be closer to fair.
API costs will definitely dominate for o1-preview, but most of the runs are with models that are orders of magnitude cheaper, and then it is not clear what dominates.
Going forward, models like o1-preview (or even more expensive) will probably dominate the cost, so the compute will probably be a small fraction.
Thank you!
I've been working on the automated pipeline as a part time project for about two months, probably equivalent to 2-4 full-time weeks of work.
One run for one model on one task typically takes perhaps 5-15 minutes, but it can be up to about an hour (if they use their 10 minutes of compute time efficiently, which they tend not to do).
Total API costs for the project are probably below $200 (not counting the credits used on Google's free tier). Most of the cost is for running o1-mini and o1-preview (even though o1-preview only went through a third as many runs as the other models). o1-preview costs about $2 for each run on each task. For compute I'm using hardware we have locally at my employer, so I have not tracked what the equivalent cost of renting it would be, but I guess it would be of the same order of magnitude as the API costs, or a factor of a few larger.
I expect the API costs to dominate going forward, though, if we want to run o3 models etc. through the eval.
Thank you!
It would be really great to have human baselines, but it's very hard to do in practice. For a human to do one of these tasks would take several hours.
I don’t really have any funding for this project, but I might find someone who wants to do one task for fun, or make my best effort myself on a fresh task when I create one.
What we would really want is to have several top researchers/ML engineers do it, and I know that METR is working on that, so that is probably the best source we have for a realistic comparison at the moment.
Thank you for the references! I certainly agree that LLMs are very useful in many parts of the stack even if they cannot do the full stack autonomously. I also expect that they can do better with better prompting, and probably much better on this task with prompting + agent scaffolding + RAG etc. along the lines of the work you linked in the other comment. My experiments are more asking the question: say you have some dataset, can you simply give a description of the dataset to an LLM and get a good ML model (possibly after a few iterations)? My experiments suggest that this might not be reliable unless you have a very simple dataset.
If the LLMs had done very well on my first challenge, that would suggest that someone not very familiar with ML could get an LLM to basically create a good ML model for them, even if they had a fairly complicated dataset. I guess it is partly a question of how much work you have to put in vs. how much work you get out of the LLM, and what the barrier to entry is.
Gemini Advanced did not do so well...
Tried to replicate with Claude Opus. It managed to get the days in there quickly, but it did not seem to have the same meta-understanding as GPT-4.
I asked it the follow-up "Do you understand what the pattern was?" and its response was:
"Yes, I believe I understand the pattern. The rule seems to be that whatever text I include in my message to you gets added to the image you send in your next reply.
In my first response, I suggested including the text "Monday Tuesday Wednesday Thursday Friday Saturday Sunday" in your next message. Then in the image you sent back, that exact text appeared, showing that my message content was added to the image.
You confirmed this pattern by stating "To make the image contain the names of the seven days of the week, you could include the text 'Monday Tuesday Wednesday Thursday Friday Saturday Sunday' somewhere in your next message to me."
So in summary, the rule is that the text I send in my messages to you gets incorporated into the image you reply with. By figuring this out, I was able to make the image contain the goal text in just two exchanges."
So it seems to basically understand it, but it does not mention that these are actually screenshots of the entire conversation. Perhaps the vision part of Claude Opus is just worse than GPT-4's? I have seen Claude Opus struggle to understand images on other occasions as well.
Some points I want to add to the discussion:
- If dark energy is truly a cosmological constant, then, as you say, we will approach a pure de Sitter space where the universe will be in thermal equilibrium (maximum entropy) and will produce thermal fluctuations. However, if the fundamental theory (the quantum-gravity theory) allows flat spacetime (i.e. the true vacuum) as a state, then the maximum entropy is infinite (because there is an infinite-dimensional Hilbert space) and the universe will (eventually) approach flat space, where there are no thermal fluctuations. (See https://arxiv.org/pdf/1405.0298.pdf for the details.)
- Also, we should not trust any physical theory that predicts (mostly) Boltzmann brains, because such a theory cannot be both true and justifiably believed (https://arxiv.org/pdf/1702.00850.pdf).
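To make the finite-versus-infinite entropy contrast in the first point a bit more concrete: the standard Gibbons-Hawking result is that a de Sitter horizon with Hubble rate $H$ has radius $1/H$, horizon area $A = 4\pi/H^2$, and entropy

```latex
S_{\mathrm{dS}} \;=\; \frac{A}{4G} \;=\; \frac{\pi}{G H^2}
```

(in units with $c = \hbar = k_B = 1$). This is finite for any positive cosmological constant, but diverges as $H \to 0$, which is consistent with the claim that flat spacetime corresponds to an infinite-dimensional Hilbert space and unbounded maximum entropy.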