A Small Negative Result on Debate

post by Sam Bowman (sbowman) · 2022-04-12T18:19:25.927Z · LW · GW · 11 comments

Some context for this new arXiv paper from my group at NYU:

11 comments


comment by Beth Barnes (beth-barnes) · 2022-04-12T22:25:14.042Z · LW(p) · GW(p)

Crossposting my comments from a Slack thread:

Here are some debate trees from experiments I did on long-text QA on this example short story:

Tree

Debater view 1

Debater view 2

Our conclusion was that we don't expect debate to work robustly in these cases. This was mostly because, when the debate comes down to questions like 'is there implied subtext A?', human debaters don't really know why they believe some text does or doesn't have a particular implication. They have some mix of priors about what the text might be saying (which can't really be justified with debate) and various updates to those priors based on style, word choice, etc., and humans don't necessarily have introspective access to what exactly in the text led them to their conclusion.

My guess is that's not the limitation you're running into here; I'd expect that to just be the depth.

There are other issues with text debates too, such as when the evidence is distributed across many quotes that each provide only a small amount of evidence. In that case the honest debater needs decent estimates of how much evidence each quote provides, so that they can split their argument into claims like 'there are 10 quotes that weakly support position A' and 'the evidence these quotes provide is additive rather than redundant'.
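For intuition about why additivity matters here, a minimal sketch of this kind of evidence aggregation, assuming the quotes are conditionally independent (the function name and numbers are illustrative, not from the experiments):

```python
import math

def combine_evidence(prior_prob, likelihood_ratios):
    """Combine a prior with independent pieces of evidence in log-odds space.

    Each likelihood ratio is P(quote | A) / P(quote | not A); values just
    above 1.0 correspond to individually weak evidence for A.
    """
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)  # additive only if the quotes are independent
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Ten weak quotes (each 1.5x likelier under A) move a 50% prior to ~98%...
print(combine_evidence(0.5, [1.5] * 10))  # ~0.983
# ...but if the quotes are redundant, treating them as independent
# double-counts the evidence, which is exactly what the honest debater
# would have to argue is not happening.
```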

[edited to fix links]

Replies from: sbowman
comment by Sam Bowman (sbowman) · 2022-04-13T16:26:05.557Z · LW(p) · GW(p)

Yep. (Thanks for re-posting.) We're pretty resigned to the conclusion that debate fails to reach a correct conclusion in at least some non-trivial cases. We're mainly interested in figuring out (i) whether there are significant domains or families of questions for which it will often reach the correct conclusion, and (ii) whether it tends to fail gracefully (i.e., every outcome is either correct or a draw).

comment by Sam Bowman (sbowman) · 2022-10-27T22:30:39.467Z · LW(p) · GW(p)

Update: We ran a follow-up study that added counterarguments, turning this from single-turn into two-turn debate, as a quick way of probing whether more extensive full-transcript debate experiments on this task would work. The follow-up results were also negative.

Tweet thread here: https://twitter.com/sleepinyourhat/status/1585759654478422016

Direct paper link: https://arxiv.org/abs/2210.10860 (To appear at the NeurIPS ML Safety workshop.)
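
For concreteness, here's a minimal sketch of the difference between the two protocols as described above (the type names and rendering are purely illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class DebateItem:
    """One multiple-choice question shown to a time-limited human judge."""
    question: str
    answers: list[str]          # candidate answers, exactly one correct
    arguments: dict[str, str]   # answer -> supporting argument (turn 1)
    # answer -> rebuttal of the opposing side's argument (turn 2); empty
    # in the original single-turn protocol, populated in the follow-up.
    counterarguments: dict[str, str] = field(default_factory=dict)

def transcript(item: DebateItem) -> str:
    """Render the text the judge sees before picking an answer."""
    lines = [item.question]
    for ans in item.answers:
        lines.append(f"Answer: {ans}")
        lines.append(f"  Argument: {item.arguments[ans]}")
        if ans in item.counterarguments:
            lines.append(f"  Counterargument: {item.counterarguments[ans]}")
    return "\n".join(lines)
```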

We're still broadly optimistic about debate, just not on this task or in this time-limited, discussion-limited setting, and we're now doing a broader, more fail-fast search across other settings. Stay tuned for more methods and datasets.

comment by paulfchristiano · 2022-04-15T21:32:09.278Z · LW(p) · GW(p)

I think that one of the key difficulties for debate research is having good tasks that call for more sophisticated protocols. I think this dataset seems great for that purpose, and having established a negative result for 1-turn debate seems like a good foundation for follow-up work exploring more sophisticated protocols. (It seems like a shame that people don't normally publish early-stage and negative results.)

In comparison with other datasets (e.g., those in the negative results described by Beth), it seems like QuALITY is identifying pretty crisp failures and is within striking distance for modern ML. I haven't looked at the dataset beyond the samples in the paper, but tentatively I'm pretty excited about more people working on it (and excited to see future work from your group!).

I do strongly suspect that multi-turn debates could handle these questions, and if not, it would be a pretty significant update about debate / the nature of human reasoning / etc. I think it's possible those debates would have to get pretty complicated, and it's also quite plausible that it will be easier to get something else to work. In any case, I feel like the problem is a close enough match for what we care about that doing "whatever it takes" will probably generally be pretty interesting.

comment by A Ray (alex-ray) · 2022-04-12T23:12:08.232Z · LW(p) · GW(p)

Do you have suggestions for domains where you do expect one-turn debate to work well, now that you've got these results?

Replies from: sbowman
comment by Sam Bowman (sbowman) · 2022-04-13T16:35:37.390Z · LW(p) · GW(p)

I have no reason to be especially optimistic given these results, but I suppose there may be some fairly simple questions for which it's possible to enumerate a complete argument in a way that makes any flaws clearly apparent.

In general, it seems like single-turn debate would have to rely on an extremely careful judge, which we don't quite have, given the time constraint. Multi-turn seems likely to be more forgiving, especially if the judge has any influence over the course of the debate.

comment by TLW · 2022-04-12T18:55:19.288Z · LW(p) · GW(p)

If there are high-quality arguments for multiple answers, doesn't that "just" mean that the multiple-choice question is itself low-quality?

Replies from: sbowman
comment by Sam Bowman (sbowman) · 2022-04-12T19:12:11.665Z · LW(p) · GW(p)

One of the arguments is quite misleading in most cases, so probably not high-quality by typical definitions. Unfortunately, under the time limit, our readers can't reliably tell which one is misleading.

Without arguments and without the time limit, annotators get the questions right with ~90% accuracy: https://arxiv.org/abs/2112.08608

Replies from: TLW
comment by TLW · 2022-04-12T23:18:42.716Z · LW(p) · GW(p)

Did your description to the participants state that the arguments were high-quality?

Replies from: sbowman
comment by Sam Bowman (sbowman) · 2022-04-13T16:37:47.835Z · LW(p) · GW(p)

I can look up the exact wording if it's helpful, but I assume it's clear from the basic setup that at least one of the arguments has to be misleading.

Replies from: TLW
comment by TLW · 2022-04-13T20:37:17.605Z · LW(p) · GW(p)

I don't know about anyone else; under time pressure I personally would go about looking for 'one wrong argument in a sea of high-quality arguments' very differently than I would go about looking for 'one misleading but superficially high-quality argument in a sea of high-quality arguments' or 'one very-high-quality argument in a sea of high-quality arguments'.