Touch reality as soon as possible (when doing machine learning research)
post by LawrenceC (LawChan) · 2023-01-03T19:11:58.915Z · LW · GW · 9 comments
TL;DR: I think new machine learning researchers often make one of two kinds of mistakes: not making enough contact with reality, and being too reluctant to form gears-level models of ML phenomena. Stereotypically, LW/AF researchers tend to make the former mistake, while academic and industry researchers tend to make the latter kind. In this post, I discuss what I mean by “touching reality” and why it’s important, speculate a bit on why people don’t do this, and then give concrete suggestions.
Related to: Making Beliefs Pay Rent [? · GW], The Feeling of Idea Scarcity [LW · GW], Micro-Feedback Loops and Learning [LW · GW], The Three Stages of Rigor [LW(p) · GW(p)], Research as a Stochastic Decision Process, Chapter 22 of HPMOR.[1]
Epistemic status: Written quickly in ~3 hours as opposed to carefully, but I'm pretty sure it's directionally correct. [2]
Acknowledgments: Thanks to Adrià Garriga-Alonso for feedback on a draft of this post and Justis Mills for copyediting help.
Introduction: two common mistakes in ML research
Broadly speaking, I think new researchers in machine learning tend to make two kinds of mistakes:
- Not making contact with reality. This is the failure mode where a new researcher reads a few papers that their friends are excited about, forms an ambitious hypothesis about how to solve a big problem in machine learning, and then spends months drafting a detailed plan. Unfortunately, after months of effort, our new researcher realizes that the components they were planning to use do not work nearly as well as expected, and as a result they’ve wasted months of effort on a project that wasn’t going to succeed.
- Not being willing to make gears-level models. This is the failure mode where a new researcher decides to become agnostic to why anything happens, and believes empirical results and only empirical results even when said results don’t “make sense” on reflection. The issue here is that they tend to be stuck implementing an inefficient variant of grad student descent, only able to make small amounts of incremental progress via approximate blind search, and end up doing whatever is popular at the moment.
That’s not to say that these mistakes are mutually exclusive: embarrassingly, I think I’ve managed to fail in both ways in the past.
That being said, this post is about the first failure mode, which I think is far more common in our community than the second. (Though I might write about the second if there's enough interest!)
Here, by “touching reality”, I mean running experiments where you check that your beliefs are right, either by writing code and running empirical ML experiments, or (less commonly) by grounding your ideas in a detailed formalism (to the level where you can write proofs of new, non-trivial theorems about said ideas)[3]. I don’t think writing code or inventing a formalism qualifies by itself (though both are helpful); touching reality requires receiving actual concrete feedback on your ideas.
Why touch reality?
I think there are four main reasons why you should do this:
Your ideas may be bad
When you’re new to a field, it’s probably the case that you don’t fully understand all of the key results and concepts in the field. As a result, it’s very likely the case that the ideas you come up with are bad. This is especially true for fields like machine learning that have significant amounts of tacit knowledge. By testing your ideas against reality, you get feedback on where your model of the field is deficient, and thereby can develop better ideas. Touching reality as soon as possible lets you shorten your feedback cycles, and more quickly develop an understanding of important ideas in the field.
Other people's ideas may be bad or misleading
Many machine learning papers published in conferences (let alone ArXiv preprints) have misleading abstracts, where the results don’t support some of the headline claims. Sometimes this happens because of white lies or omissions on the authors’ part. More benignly, this often happens because the authors’ results don’t generalize as far as they thought they would. Machine learning is especially susceptible to this issue, as many ML results can be finicky and the authors’ results depend on particular quirks of their setup. Before spending months of your life building on some ideas, it’s prudent to make sure that the ideas are actually good.
Your tools may not work the way you think they do
Relatedly, algorithms presented in papers without misleading claims can still fail because said papers don’t write down all of their key assumptions or code-level optimizations. I think this rarely occurs due to deliberate deception from paper authors; instead, it almost entirely comes from the fact that it can be challenging to get machine learning algorithms to work reliably. Even in cases where an algorithm generally works as expected on domains similar to those in the paper, errors in understanding often accumulate when you combine many unfamiliar algorithms. As a result, it’s almost always worth reproducing each of the algorithms independently, and testing that they work as expected as soon as possible.
It helps you explain your ideas to other people
When trying to get feedback for any idea, it’s often the case that the person giving you feedback won’t fully understand it. Even worse, you might have a double illusion of transparency: both you and the other person falsely believe the communication was successful. This often happens in machine learning because of a relative lack of standard terminology in many new subfields (and especially amongst novices, who might not know the standard terminology that does exist). As a result, said feedback can be worse than useless, leading to more wasted effort. Concrete examples both help you explain your ideas more clearly, and also help you and others notice when miscommunication has occurred. As a result, I think it’s good practice to include at least a toy example (if not a preliminary result) when communicating with people you aren’t regularly collaborating with.
Why don't people touch reality?
I don’t think that “contact reality as soon as possible” is particularly novel advice – for example, I think much of academic machine learning has absorbed this ethos (perhaps a bit too much, even), and there are many similar ideas floating around on LessWrong and the Alignment Forum. However, it’s still often the case that new researchers fail to contact reality for long periods of time. Here are my speculations as to why this happens, which I’ll ground in my own experiences (though I have also seen them in others’):
Idea scarcity
As John Wentworth says in The Feeling of Idea Scarcity [LW · GW], many new researchers feel that ideas are much more scarce than they actually are, and stick to failing ideas for too long. This makes it tempting to continue polishing the first idea you have, as opposed to testing a half-baked idea.
In my case, back in late 2016 and early 2017, I spent a month of my life working on tree-structured RNNs with attention mechanisms, since clearly natural language should be tree-shaped (and I didn’t have any other ML ideas)! However, I got bogged down in implementation details thanks to TensorFlow 0.x, and spent most of my time cleaning those up as opposed to running new experiments. It turns out that no, tree-structured RNNs are not the correct way to model language.[4] I think I would’ve noticed this a lot sooner if I had spent some time constructing small toy tasks where I thought tree-structured RNNs would be better, and then training small models on those, even though I hadn’t worked out all the fiddly implementation details. And I would’ve been a lot more willing to take the trouble I had with the implementation as evidence against the idea if I hadn’t felt like it was the only ML idea I would ever have.
Similarly, the (false) feeling of idea scarcity often causes new people to work too much on their one idea, instead of testing their half-baked ideas on reality.
Deference to authority
I think a lot of new researchers come in with a strong belief that academic papers (especially from prestigious authors) are authoritative sources, and therefore that the claims made in them are definitely correct and generalizable. I also think that many new researchers are (correctly) skeptical of their ability to generate true claims that contradict published results, and so tend to take published results on faith.
One of the first projects I was involved in at CHAI used Bayesian neural networks to do active value learning. It seemed to me like a pretty straightforward idea: we’d implement some Bayesian neural networks, do some variational inference to update them, and then use the resulting posterior in algorithms that select queries based on value of information. At the time, I (along with many people at CHAI) was very bullish on Bayesian neural networks, given the recent slate of papers around that time (2015-2017) from impressive-seeming professors showing impressive-seeming results. Unfortunately, it turned out that Bayesian neural networks were significantly trickier to get working in practice on our value learning tasks, and nothing came of the project despite several months of effort. A few months later, a research engineer at CHAI found that many Bayesian neural network algorithms (including the one we were using for our project) often failed to approximate some toy 4-d distributions. If I had been less trusting of authoritative papers and more willing to try some toy problems, I think I would’ve saved myself a lot of effort.
Note that I’m not saying that new researchers should throw away all of conventional wisdom. Instead, I think that new researchers should be more willing to quickly verify claims made by authoritative figures.
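To make this concrete, here is a minimal sketch of the kind of quick toy check I have in mind (written as an illustration, not the original CHAI code, which is long gone): fit a mean-field Gaussian approximation by variational inference to a bimodal 4-d target and watch it collapse onto a single mode. It is a deliberately simplified stand-in for the Bayesian neural network algorithms themselves, and it assumes PyTorch; all the specifics are made up for illustration.

```python
import torch

torch.manual_seed(0)
d = 4
# Toy target: equal-weight mixture of two unit-variance Gaussians at +3 and -3 in every dimension.
modes = torch.stack([torch.full((d,), 3.0), torch.full((d,), -3.0)])

def target_log_prob(x):
    comps = torch.distributions.Normal(modes.unsqueeze(0), 1.0)   # (batch, 2, d)
    log_comp = comps.log_prob(x.unsqueeze(1)).sum(-1)             # (batch, 2)
    return torch.logsumexp(log_comp, dim=1) - torch.log(torch.tensor(2.0))

# Mean-field Gaussian approximation q(x) = N(mu, diag(sigma^2)), fit by maximizing the ELBO.
mu = (0.1 * torch.randn(d)).requires_grad_()
log_sigma = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    eps = torch.randn(256, d)
    x = mu + eps * log_sigma.exp()                                # reparameterization trick
    q = torch.distributions.Normal(mu, log_sigma.exp())
    elbo = (target_log_prob(x) - q.log_prob(x).sum(-1)).mean()
    opt.zero_grad()
    (-elbo).backward()
    opt.step()

# Reverse-KL mean-field fits are mode-seeking: q typically collapses onto one of the two modes
# rather than covering both, which is exactly the kind of failure a quick toy check surfaces.
print("q mean:", mu.data)
print("q std: ", log_sigma.exp().data)
```

The point isn’t the specific numbers; it’s that an afternoon of this kind of poking would have told us our posterior approximations couldn’t be trusted, long before we sank months into the full pipeline.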
Aversion to Schlepping
Finally, I think the biggest reason new ML researchers avoid contacting reality is that doing machine learning experiments or coming up with formalism to write non-trivial theorems involves a lot of tedious, unglamorous tasks—that is, it can involve a lot of schlepping. For example, data munging can be incredibly tedious, even for relatively simple NLP datasets. In contrast, thinking about new ideas and discussing them with collaborators is fun and often significantly easier. It also doesn’t help that many sources present a skewed picture of research that focuses too much on the new ideas and too little on the day-to-day work.
In my case, I’ve put off writing code for simple experiments many, many times. In a different active value learning project, I put off doing experiments (and indeed, basically the whole project) for a full month and a half due to a strong ugh field around dealing with the fiddly bits. Probably the worst case of this for me was not wanting to run some simple human subject studies for a paper, despite said paper being rejected from a conference explicitly because it lacked a human study. I ended up just dropping the project.[5] That being said, I think I’ve become significantly better along this axis, as I’ve done more schlep work for more projects and realized that I was overestimating the pain and tedium required to do said work.
Of course, it’s definitely possible to go too far, and end up only doing low value, schleppy work. And obviously, I think you should always try to avoid unnecessary suffering. But as a whole, I think new researchers tend to overestimate the pain involved in schleppy work and underestimate how said work gets less tedious over time, and could benefit from some amount of pushing past their aversion.
Concrete ways to touch reality faster
I’ll conclude with some strategies for touching reality faster:
Minimize time to (possible) failure
Insofar as you have any uncertainties that might threaten the viability of a project, you should test them as soon as possible. I often find that I’m aware of many of the ways that the projects I’m working on could go wrong. As a result, I find the cognitive strategy of trying to expose as many of a project’s points of failure as early as possible to be helpful for coming up with experiments. In my case, I also find it helpful to directly try to show that my projects are nonviable as soon as possible.
See Jacob Steinhardt’s Research as a Stochastic Decision Process for a more detailed discussion of this strategy.
Create toy examples
Real machine learning applications (and machine learning theory) often feature many complexities and practical difficulties that are irrelevant to the validity of the core insights behind your project. Not only can it take quite a long time to get any results at all, but your experiments can also be invalidated by implementation details. In contrast, a good toy example abstracts away all of the complexity, which lets you get information about the viability of your project much faster. Personally, I find it helpful to think about the minimal case that shows my insight is correct.[6]
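As a (purely hypothetical) illustration of the pattern: suppose your idea predicts that an L1 penalty should recover sparse structure better than an L2 penalty. Rather than testing that inside a real pipeline, you can build a synthetic task where the ground truth is known by construction and check the prediction directly. The sketch below assumes NumPy and scikit-learn; every specific here is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]          # by construction, only the first 3 features matter
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(int)

# Claim to test: an L1 penalty recovers the sparse structure, while an L2 penalty does not.
for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, C=0.1, solver=solver).fit(X, y)
    n_nonzero = int(np.sum(np.abs(clf.coef_) > 1e-3))
    print(f"{penalty}: {n_nonzero} coefficients above 1e-3 (ground truth: 3)")
```

If the predicted effect doesn’t show up even here, where everything is stacked in its favor, that’s a strong hint to rethink the idea before investing in the real setup.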
Mock or simplify difficult components
Similarly, when working with components that are difficult to implement or train, but aren’t key uncertainties as to the viability of your project, it’s often a good idea to replace said component with a cheating implementation. For example, if you’re studying a new protocol for debate using language models, you can replace the language models with humans, which probably provides a weak upper bound on your technique’s performance. A related strategy is to replace complicated components with simple baselines. For example, even if your plan is to finetune a large language model on the debate protocol, you might be able to get some signal as to its viability by using text-davinci-003 with a well-designed prompt.
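As a rough sketch of what this mocking can look like in code (the names here, such as Debater and ScriptedDebater, are made up for illustration rather than taken from any real debate codebase): define the interface your protocol needs, test the protocol logic against a cheap scripted stand-in, and only later swap in the expensive model-backed implementation behind the same interface.

```python
from typing import Protocol


class Debater(Protocol):
    def argue(self, question: str, transcript: list[str]) -> str: ...


class ScriptedDebater:
    """Stand-in debater that replays canned arguments (from humans or hand-written text),
    so the debate protocol itself can be tested without training or calling any model."""

    def __init__(self, canned_arguments: list[str]):
        self.canned_arguments = canned_arguments

    def argue(self, question: str, transcript: list[str]) -> str:
        return self.canned_arguments[len(transcript) % len(self.canned_arguments)]


def run_debate(question: str, pro: Debater, con: Debater, rounds: int = 2) -> list[str]:
    """Alternate arguments between the two debaters and return the full transcript."""
    transcript: list[str] = []
    for _ in range(rounds):
        transcript.append(pro.argue(question, transcript))
        transcript.append(con.argue(question, transcript))
    return transcript


if __name__ == "__main__":
    pro = ScriptedDebater(["The sky looks blue because of Rayleigh scattering."])
    con = ScriptedDebater(["Sunsets are red, so scattering cannot be the whole story."])
    print(run_debate("Why is the sky blue?", pro, con))
```

A prompted API model (or a human typing into a terminal) can later implement the same argue interface, which keeps the question “does the protocol work?” separate from the question “can we train a model to play it well?”.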
Have good collaborators
Finally, I think that having good collaborators has been by far the most helpful strategy for grounding my ideas. I find that it’s significantly harder to come up with obvious tests for your own ideas than it is for others to do so. A good collaborator on a research project can regularly save me hours of schlepping, for example by suggesting simple tests, sharing code, or even performing the tests directly (especially in cases where they have a comparative advantage). This is especially the case when they also prioritize touching reality as soon as possible. :)
- ^
- ^
Detailed epistemic status: I'm pretty frustrated with how slowly I write, so this is an experiment in writing fast as opposed to carefully. That being said, this is ~the prevailing wisdom amongst many ML practitioners and academics, and similar ideas have been previously discussed in the LessWrong/Alignment Forum communities, so I'm pretty confident that it's directionally correct. I also believe (less confidently) that this is good advice for most kinds of research, or maybe even for life in general.
- ^
As Michael Dennis pithily puts it, this is the point at which the process goes from only you correcting the theory, to the theory being able to correct you.
- ^
Famously, you don’t even need the RNN parts, you only need attention.
- ^
Though, to be fair, there were other circumstances - it was during the pandemic and I was feeling incredibly gloomy in general.
- ^
(Edited to add:) That being said, as Scott Emmons points out in a comment below [LW(p) · GW(p)], it's important to not just have results on toy examples!
9 comments
Comments sorted by top scores.
comment by Neel Nanda (neel-nanda-1) · 2023-01-03T19:44:14.456Z · LW(p) · GW(p)
Thanks for writing this post! (And man, if this is you deliberately writing fast and below your standards, you should lower your standards way more!). I very strongly agree with this within mechanistic interpretability and within pure maths (and it seems probably true in ML and in life generally, but those are the two areas I feel vaguely qualified to comment on).
Aversion to Schlepping
Man, I strongly relate to this one... There have been multiple instances of me having an experiment idea I put off for days to weeks, only to do it in 1-3 hours and get really useful results. I've had some success experimenting with things like speedrunning afternoons, where I drop all of my ongoing tasks, try to pick a self-contained thing that seems high priority, and sprint on getting it done ASAP (this doesn't work well for day to week schleppy tasks, but I'm more OK with sucking at those)
Under why touch reality, IMO the most important reason is that it'll help you form ideas that are good! It's much much easier to do this when you have a lot of surface area on what's actually going on, and enough experience and loose threads to spark curiosities and new ideas.
Under why don't people touch reality, honestly the strongest reason for me is just procrastination/lacking urgency (which is somewhat aversion to schlepping, but less central) - even if I know exactly what it'd be sensible to do, there's rarely a reason to do it right now rather than later.
Some more strategies I like for touching reality faster (there's some overlap with yours):
- Try explaining your understanding to other people. Notice when you're confused about a concept, and go and try to figure out what's going on (ideally by building some kind of toy model and coding something yourself)
- Meta strategy - learn how to use good tooling, debug issues in your workflow, and just practice running a lot of quick experiments. I find that being able to test a hypothesis about GPT-2 Small in a few minutes makes it much easier to touch reality, in a way that I just wouldn't if it took hours to days. Even if the difference in time isn't that stark, the more you have the right muscle memory, the lower the activation energy (a minimal sketch of what such a quick check can look like follows this list)
- Try to Murphyjitsu your ideas - assume things will go wrong, or that there's some crucial flaw in your beliefs, and use your intuition to fill in the blanks re why. Use this to generate ideas to try falsifying your plan
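(A minimal sketch of the kind of few-minute check described in the tooling bullet above, using the HuggingFace transformers library rather than any particular interpretability tooling; the specific prompt and hypothesis are purely illustrative.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Toy hypothesis: GPT-2 Small puts more probability on " Paris" than " Rome"
# after the prompt "The Eiffel Tower is in".
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The Eiffel Tower is in"
with torch.no_grad():
    logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
probs = logits.softmax(-1)

for completion in [" Paris", " Rome"]:
    token_id = tok.encode(completion)[0]   # first BPE token of the completion
    print(f"{completion!r}: p = {probs[token_id].item():.4f}")
```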
↑ comment by LawrenceC (LawChan) · 2023-01-03T19:56:03.664Z · LW(p) · GW(p)
Thanks!
just procrastination/lacking urgency
This is probably true in general, to be honest. However, it's an explanation for why people don't do anything, and I'm not sure this differentially leads to delaying contact with reality more than say, delaying writing up your ideas in a Google doc.
Some more strategies I like for touching reality faster
I like the "explain your ideas to other people" point, it seems like an important caveat/improvement to the "have good collaborators" strategy I describe above. I also think the meta strategy point of building a good workflow is super important!
↑ comment by Neel Nanda (neel-nanda-1) · 2023-01-03T20:00:43.285Z · LW(p) · GW(p)
I like the "explain your ideas to other people" point, it seems like an important caveat/improvement to the "have good collaborators" strategy I describe above
Importantly, the bar for "good person to explain ideas to" is much lower than the bar for "is a good collaborator". Finding good collaborators is hard!
comment by Scott Emmons · 2023-01-03T19:36:47.608Z · LW(p) · GW(p)
Thanks for writing this! I appreciate it and hope you share more things that you write faster without totally polishing everything.
One word of caution I'd share is: beware of spending too much effort running experiments on toy examples. I think toy examples are useful to gain conceptual clarity. However, if your idea is primarily empirical (such as an improvement to a deep neural network architecture), then I would recommend spending basically zero time running toy experiments.
With deep learning, it's often the case that improvements on toy examples don't scale to being improvements on real examples. In my experience, lots of papers in reinforcement learning don't actually work because the authors only tried out the method on toy examples. (Or, they tried out the method on more complex examples, but they didn't publish those experiments because the method didn't work.) So trying out a new empirical method on a toy example provides little information about how valuable the empirical method will be on real examples.
The flipside of this warning is advice: for empirical projects, test your idea on as diverse and complex a set of tasks as is possible. The good empirical ideas are few, and extensive empirical testing is the best way a researcher can determine if their idea will stand the test of time.
When running diverse and complex experiments, it is still important to design the simplest possible experiment that will be informative, as Lawrence describes in the section "Mock or simplify difficult components." I suggest being simple (such as Lawrence's example of using text-davinci-003 instead of finetuning one's own model) rather than being toy (using a tiny or hard-coded language model).
↑ comment by LawrenceC (LawChan) · 2023-01-03T19:40:27.834Z · LW(p) · GW(p)
I think this is a good word of caution. I'll edit in a link to this comment.
comment by LawrenceC (LawChan) · 2024-12-23T04:14:41.045Z · LW(p) · GW(p)
I think this post was useful in the context it was written in and has held up relatively well. However, I wouldn't actively recommend it to anyone as of Dec 2024 -- both because the ethos of the AIS community has shifted, making posts like this less necessary, and because many other "how to do research" posts have since been written that contain the same advice.
Background
This post was inspired by conversations I had in mid-late 2022 with MATS mentees, REMIX participants, and various bright young people who were coming to the Bay to work on AIS (collectively, "kiddos"). The median kiddo I spoke with had read a small number of ML papers and a medium amount of LW/AF content, and was trying to string together an ambitious research project from several research ideas they recently learned about. (Or, sometimes they were assigned such a project by their mentors in MATS or REMIX.)
Unfortunately, I don't think modern machine learning is the kind of field where research consistently works out of the box. Many high-level claims, even in published research papers, are just... wrong; it can be challenging to reproduce results even when they are right; and even techniques that work reliably may not work for the reasons people think they do.
Hence, this post.
What do I think of the content of the post?
I think the core idea of this post held up pretty well with time. I continue to think that making contact with reality is very important, and I think the concrete suggestions for how to make contact with reality are still pretty good.
If I were to write it today, I'd probably add a fifth major reason for why it's important to make quick contact with reality: mental health/motivation. That is, producing concrete research outputs, even small ones, feels pretty essential to maintaining motivation for the vast majority of researchers. My guess is I missed this factor because I focused on the content of research projects, as opposed to the people doing the research.
Where do I feel the post stands now?
Over the past two years, and especially in 2024, the ethos of the AIS community has shifted substantially toward empirical work.
The biggest part of this is because of the pace of AI. When this post was written, ChatGPT was a month old, and GPT-4 was still more than 2 months away. People both had longer timelines and thought of AIS in more conceptual terms. Many conceptual research projects of 2022 have fallen into the realm of the empirical as of late 2024.
Part of this is due to the rise of (dangerous capability) evals as a major AIS focus in 2023, which is both substantially more empirical than the median 2022 AIS research topic and an area where making contact with reality can be as simple as "pasting a prompt into claude.ai".
Part of this is due to Anthropic's rise to being the central place for AIS researchers. "Being able to quickly produce ML results" is a major part of what it takes to get hired there as a junior researcher, and people know this.
Finally, there have been a decent number of posts or write-ups giving the same advice, e.g. Neel's written advice for his MATS scholars and a recent Alignment Forum post by Ethan Perez.
As a result, this post feels much less necessary or relevant in late December 2024 than in December 2022.
comment by gwern · 2023-07-12T21:03:29.143Z · LW(p) · GW(p)
Unfortunately, it turned out that Bayesian neural networks were significantly trickier to get working in practice on our value learning tasks, and nothing came of the project despite several months of effort. A few months later, a research engineer at CHAI found that many Bayesian neural network algorithms (including the one we were using for our project) often failed to approximate some toy 4-d distributions
Was that ever written up? I don't recall that result.
↑ comment by LawrenceC (LawChan) · 2023-07-13T02:08:12.679Z · LW(p) · GW(p)
I don't think so, unfortunately, and it's been so long that I don't think I can find the code, let alone get it running.
comment by Review Bot · 2024-06-09T01:44:38.466Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?