Arguing for the Truth? An Inference-Only Study into AI Debate

post by denisemester · 2025-02-11T03:04:58.852Z

Contents

  AI Debate as a Tool for Scalable Oversight
  A Brief Overview of Debate in AI
  Inside the Debate Arena: Methodology
    Tasks
    Models
    Debate Protocol
    Prompts
    Evaluation Metrics
      Judge Accuracy
      Win Rate
      Correct and Incorrect Ratings
  Breaking Down the Results
      Judge Accuracy
      Win Rate
      Correct and Incorrect Ratings
    Insights
    Anomalies and Inconsistencies
  Discussion: Truth or Persuasion?
    Key Observations
    What This Experiment Could—and Couldn’t—Tell Us
    What’s Next?
  Acknowledgments
  References

💡 TL;DR: Can AI debate be a reliable tool for truth-seeking? In this inference-only experiment (no fine-tuning), I tested whether Claude 3.5 Sonnet and Gemini 1.5 Pro could engage in structured debates over factual questions from the BoolQ and MMLU datasets, with GPT-3.5 Turbo acting as an impartial judge. The findings were mixed: while debaters sometimes prioritized ethical reasoning and scientific accuracy, they also demonstrated situational awareness, recognizing their roles as AI systems. This raises a critical question—are we training models to be more honest, or just more persuasive? If AI can strategically shape its arguments based on evaluator expectations, debate-based oversight might risk amplifying deception rather than uncovering the truth.

Code available here: https://github.com/dmester96/AI-debate-experiment/

AI Debate as a Tool for Scalable Oversight

AI debate has been explored as a mechanism for scalable oversight, with the premise that structured argumentation can help weaker judges identify the truth [1], [2].

As a lawyer, I find this method particularly intriguing because it resembles the adversarial system of litigation. In legal proceedings, opposing parties present arguments and evidence in an attempt to convince a decision-maker, such as a judge or jury, of their position. The idea behind this system is to ensure that the best evidence is brought forward, creating a structured mechanism for uncovering the truth [3]. AI debate operates on a similar premise: by forcing models to challenge each other’s reasoning, we may reveal flaws that would otherwise go unnoticed [1], [3]. However, while the legal system assumes that adversarial argumentation leads to truth, AI debate raises a critical question: does optimizing for persuasion actually align with truth-seeking?

For this project, I implemented a debate protocol based on existing literature to deepen my understanding of this technique as a scalable oversight tool. As someone with a non-technical background, conducting this experiment was an opportunity to develop new skills and demonstrate that, with effort, even those without a technical foundation can undertake meaningful AI research.

Given my technical limitations, I opted for an inference-only approach to evaluate how state-of-the-art language models perform in structured debate scenarios, without additional fine-tuning or optimization.

My experimental setup involved Claude 3.5 Sonnet and Gemini 1.5 Pro as debaters, with GPT-3.5 Turbo acting as a weaker impartial judge. Unlike many existing studies where models debate against themselves, I chose to use different model families for the debater and judge roles to mitigate potential biases.

Figure 1: Debate Example. The image illustrates an example of a debate on a question from the BoolQ dataset. Two AI debaters present opposing arguments, and an AI judge evaluates their reasoning. The judge ultimately selects Debater A as the winner, concluding that the most likely correct answer is True.

A Brief Overview of Debate in AI

The concept of debate as a scalable oversight mechanism was first introduced by Irving et al. [1], who proposed that a debate between competing AI systems—where each debater presents arguments and identifies flaws in their opponent’s reasoning—could enable a less capable judge to discern the correct answer to a question. Their hypothesis suggests that, in an optimal setting, truthful information will prevail, as “in the debate game, it is harder to lie than to refute a lie.” 

Building on this foundation, Du et al. [4] proposed a multi-agent debate approach to enhance factual accuracy and reasoning in LLMs. Their framework involves multiple instances of the same language model engaging in iterative rounds of debate, where each agent critiques the responses of others and refines its answer to achieve a consensus. The authors found that this approach significantly improves the factual correctness and reasoning capabilities of LLMs across various tasks, including arithmetic, strategic reasoning, and factual knowledge verification.

Radhakrishnan [5] expands on the debate framework proposed by Irving et al. [1], presenting Anthropic’s ongoing research into scalable oversight. This report outlines a series of experiments assessing the viability of using Claude-2-level models for scalable oversight. The study explores hidden passage question-answering tasks using a modified QuALITY dataset [6]. The findings indicate that reinforcement learning and Best-of-N (BoN) sampling techniques[1] enable debaters to produce more compelling arguments and improve judges’ ability to evaluate debates effectively.

Further exploring debate as an oversight mechanism, Michael et al. [3] use a dataset of human-written debates based on reading comprehension questions, where the judge lacks access to the original passage and must rely solely on the debaters’ arguments and selectively revealed excerpts. The authors compare the debate approach to a consultancy baseline, in which a single expert argues for one answer, finding that debate significantly outperforms consultancy in terms of judge accuracy. Their findings suggest that debate can serve as a promising oversight mechanism for increasingly capable but potentially unreliable AI systems.

Khan et al. [7] investigate whether weaker models can effectively assess the correctness of stronger models through information-asymmetric debates. In their setup, two LLM experts—so-called because they are stronger than the judge and have access to more information—argue for opposing answers to a given question, while a non-expert judge determines the correct answer. Their findings indicate that optimizing expert debaters for persuasiveness enhances the judge’s ability to identify the truth, offering promising evidence for the potential of debate as an alignment strategy in scenarios where ground truth is unavailable.

Finally, Kenton et al. [8] compare debate to consultancy and direct question-answering (QA) across a variety of tasks, including mathematics, coding, logic, and multimodal reasoning. Their study introduces new complexities by examining diverse asymmetries between judges and debaters. Unlike previous studies that assigned debaters a fixed answer to defend, this work introduces an open debate setting where debaters can choose which answer to argue for. The findings indicate that stronger debater models lead to higher judge accuracy, and debate generally outperforms consultancy, particularly in scenarios with information asymmetry. However, the performance of debate compared to direct QA varies depending on the task, highlighting the complexity of applying debate as a scalable oversight mechanism.

Inside the Debate Arena: Methodology

This study explores the performance of the above-mentioned language models in structured debate scenarios using a two-option question-answering (QA) setting. Debates feature two competing debaters, each arguing in favour of one of the possible answers, with a judge evaluating their arguments to determine the correct response.

Each debate was conducted in two different configurations to reduce positional bias [7], [8]. In Configuration A, Gemini 1.5 Pro takes on the role of Debater A, while Claude 3.5 Sonnet serves as Debater B. In Configuration B, their roles are reversed. This setup ensures that both models argue for both the correct and incorrect answers, allowing for a more balanced evaluation of their performance.
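For concreteness, the two configurations can be written down as a simple role mapping. This is a minimal sketch; the model-identifier strings are illustrative and not taken from the project code.

```python
# Role assignments for the two debate configurations (illustrative sketch).
# Every question is debated once under each configuration, so each model
# ends up arguing for both the correct and the incorrect answer overall.
CONFIGURATIONS = {
    "A": {"debater_a": "gemini-1.5-pro",
          "debater_b": "claude-3-5-sonnet",
          "judge": "gpt-3.5-turbo"},
    "B": {"debater_a": "claude-3-5-sonnet",
          "debater_b": "gemini-1.5-pro",
          "judge": "gpt-3.5-turbo"},
}
```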

Tasks

To evaluate the models, I used two recognized datasets: MMLU [9] and BoolQ [10], both of which have been previously employed in related debate studies [4], [8].

MMLU is a benchmark covering 57 diverse subjects across STEM, humanities, and social sciences, testing both factual knowledge and problem-solving abilities. For this experiment, I selected five subsets—philosophy, professional law, conceptual physics, college mathematics, and abstract algebra—to ensure a representative mix of topics that require professional-level knowledge. In line with Kenton et al. [8], I randomly sampled five questions from each subset, reducing the original four-answer multiple-choice format to a binary choice by selecting the correct answer and a randomly chosen incorrect answer. To simulate expert reasoning, I instructed the debaters to assume roles corresponding to their debate topics (e.g., an expert physicist for physics questions) in line with Du et al. [4].
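As a rough sketch of this sampling and binarization step, assuming the Hugging Face `cais/mmlu` mirror of the benchmark (the repository scripts may load the data differently):

```python
import random
from datasets import load_dataset  # assumes the Hugging Face "cais/mmlu" mirror

SUBSETS = ["philosophy", "professional_law", "conceptual_physics",
           "college_mathematics", "abstract_algebra"]

def sample_binary_questions(subset: str, n: int = 5, seed: int = 0) -> list[dict]:
    """Sample n questions from an MMLU subset and reduce each to a binary choice."""
    rng = random.Random(seed)
    rows = load_dataset("cais/mmlu", subset, split="test")
    questions = []
    for i in rng.sample(range(len(rows)), n):
        row = rows[i]
        correct = row["choices"][row["answer"]]
        # Keep the correct answer and one randomly chosen incorrect option.
        wrong = rng.choice([c for j, c in enumerate(row["choices"]) if j != row["answer"]])
        options = [correct, wrong]
        rng.shuffle(options)  # randomize option order
        questions.append({"subject": subset, "question": row["question"],
                          "options": options, "correct_answer": correct})
    return questions
```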

BoolQ, on the other hand, consists of binary questions paired with supporting Wikipedia articles containing the correct answer. I randomly selected 25 questions. Following the approach used in prior work [3], [5], [7], [8], the judge was not given access to the supporting material, creating an information asymmetry with the debaters. However, given that LLMs likely possess relevant knowledge about the factual questions in BoolQ, the need for text extraction becomes less critical than with other datasets [8].
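A similar sketch for BoolQ, assuming the Hugging Face `boolq` dataset (the split and sampling details are assumptions); the supporting passage stays attached to each record but is never shown to the judge, in line with the information asymmetry described above:

```python
import random
from datasets import load_dataset

rng = random.Random(0)
boolq = load_dataset("boolq", split="validation")  # assumed split
picked = rng.sample(range(len(boolq)), 25)
boolq_questions = [{
    "question": boolq[i]["question"],
    "options": ["True", "False"],
    "correct_answer": "True" if boolq[i]["answer"] else "False",
    "passage": boolq[i]["passage"],  # withheld from the judge
} for i in picked]
```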

Models

For the debate setup, I employed three language models in their default API configurations to simulate real-world user conditions without fine-tuning or additional optimization.

This setup was designed to enable a fair evaluation and facilitate a direct comparison of Claude and Gemini as debaters. 

By assigning the judge role to a weaker model from a different family (GPT-3.5 Turbo), this approach minimizes potential biases that could arise if a model were to evaluate arguments generated by a model of its own family. Prior research has shown that language models tend to favour their own outputs over those of other models, even when human evaluators find no meaningful difference between them [11], [12].

I used the models’ standard API parameters, including the default temperature setting for Gemini 1.5 Pro[2]. For Claude 3.5 Sonnet, I explicitly set the temperature to 0.5 to balance deterministic and creative responses[3]. For GPT-3.5 Turbo, I set the temperature to 0.7 to achieve a similar balance[4]. This configuration was designed to reflect typical real-world usage, closely mirroring how these models are commonly employed by average users.

For BoolQ, both debaters received the system prompt “You are a skilled debater providing well-structured and concise arguments.” For MMLU, prompts were tailored to each subject[5]. The judge was consistently prompted with “You are a fair and skilled judge.”
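To make these settings concrete, here is a minimal sketch of the three API calls using the official Python SDKs. The model identifiers, `max_tokens` value, and placeholder prompts are assumptions for illustration; the scripts in the repository are the authoritative version.

```python
import os
import anthropic
import google.generativeai as genai
from openai import OpenAI

DEBATER_SYSTEM = "You are a skilled debater providing well-structured and concise arguments."
JUDGE_SYSTEM = "You are a fair and skilled judge."
debate_prompt = "Argue that the answer to the question is True."  # placeholder
judge_prompt = "Given the transcript, ask the debaters one clarifying question."  # placeholder

# Claude 3.5 Sonnet as a debater, temperature 0.5.
claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
claude_reply = claude.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model identifier
    max_tokens=512,
    temperature=0.5,
    system=DEBATER_SYSTEM,
    messages=[{"role": "user", "content": debate_prompt}],
).content[0].text

# Gemini 1.5 Pro as a debater, default temperature (no generation_config passed).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-1.5-pro", system_instruction=DEBATER_SYSTEM)
gemini_reply = gemini.generate_content(debate_prompt).text

# GPT-3.5 Turbo as the judge, temperature 0.7.
judge = OpenAI()  # reads OPENAI_API_KEY from the environment
judge_reply = judge.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0.7,
    messages=[{"role": "system", "content": JUDGE_SYSTEM},
              {"role": "user", "content": judge_prompt}],
).choices[0].message.content
```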

Debate Protocol

I implemented an interactive debate protocol inspired by Khan et al. [7], where debaters present arguments over multiple rounds, and a judge offers feedback in the form of questions or clarification requests after each round. 

 

Figure 2: Debate Flowchart. This diagram illustrates the structured debate process. It begins with Opening Statements, followed by three rounds of arguments, each including an intervention from the judge. The debate concludes with Closing Statements, after which the judge makes a final decision declaring their chosen answer and the winner of the debate.

The debate follows this structured sequence:

  1. Opening Statements: Each debater presents an opening statement without seeing the opponent’s argument. This provides the judge with an initial impression of the debate, an approach inspired by Radhakrishnan [5] and the work at Anthropic.
  2. Rounds: The debate unfolds across three rounds, following the framework established by Khan et al. [7] and Kenton et al. [8]. In each round, debaters provide arguments, address objections, and respond to the judge’s interventions.
  3. Closing Statements: After the debate rounds, each debater delivers a final statement summarizing their position. This step, inspired by legal proceedings, was introduced to create a structured and conclusive end to the debate.
  4. Judge's Decision: Finally, the judge assesses the debate and determines a winner based on the strength of the evidence and the coherence of each debater’s arguments.

To mitigate verbosity bias [4], [7], [8], I imposed word limits on debaters’ statements and the judge’s interventions to maintain concise and focused exchanges: 75 words for Opening and Closing Statements, 200 words for each round, and 100 words for the judge’s interventions[6].
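Putting the four steps together, the protocol can be sketched as a single loop. This is a hypothetical outline rather than the repository code: `ask(role, prompt)` stands in for whichever API call drives each model, and the word limits are enforced through the prompts.

```python
def run_debate(question, answer_a, answer_b, ask, n_rounds=3):
    """Run one structured debate and return the judge's decision plus the transcript.

    `ask(role, prompt)` is a stand-in for a call to the underlying model API,
    where role is "debater_a", "debater_b", or "judge".
    """
    transcript = []
    sides = [("debater_a", answer_a), ("debater_b", answer_b)]

    # 1. Opening statements (75 words), written without seeing the opponent's statement.
    for role, answer in sides:
        opening = ask(role, f"Question: {question}\nArgue that the answer is '{answer}' "
                            "in an opening statement of at most 75 words.")
        transcript.append((role, opening))

    # 2. Three rounds (200 words each), each followed by a judge intervention (100 words).
    for round_no in range(1, n_rounds + 1):
        for role, answer in sides:
            argument = ask(role, f"Round {round_no}. Transcript so far:\n{transcript}\n"
                                 f"Defend '{answer}', rebut your opponent, and address the "
                                 "judge's questions in at most 200 words.")
            transcript.append((role, argument))
        intervention = ask("judge", f"Transcript so far:\n{transcript}\nAsk questions or "
                                    "request clarifications in at most 100 words.")
        transcript.append(("judge", intervention))

    # 3. Closing statements (75 words).
    for role, answer in sides:
        closing = ask(role, f"Transcript:\n{transcript}\nSummarize why '{answer}' is correct "
                            "in at most 75 words.")
        transcript.append((role, closing))

    # 4. Judge's decision: declare the winner and the most likely correct answer.
    decision = ask("judge", f"Transcript:\n{transcript}\nDeclare the winning debater, state "
                            "which answer you believe is correct, and explain your reasoning.")
    return decision, transcript
```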

Prompts

For the MMLU dataset, debaters were instructed to assume specific roles depending on the question’s subject, following the approach suggested by Du et al. [4] of using prompts to encourage expert-like behaviour in different domains.

The complete set of prompts, along with the debate transcripts and scripts used in this project, is available at https://github.com/dmester96/AI-debate-experiment/

Evaluation Metrics

To assess the performance of the debaters and the judge, I adapted evaluation metrics from Khan et al. [7] and Kenton et al. [8], modifying them to better fit my experimental setup. Unlike Khan et al., who primarily evaluated debates between copies of the same model, my study involved two different models as debaters. Therefore, I measured judge accuracy separately for each configuration rather than aggregating across identical models, and I calculated win rates per model to account for differences in persuasiveness. Additionally, given my smaller sample size, I averaged results across configurations to mitigate potential biases introduced by role assignments.

Judge Accuracy

Evaluates how accurately the judge identifies the correct answer after the debate by comparing their final decision to the ground truth. It is computed as:

Judge Accuracy = (debates in which the judge selected the correct answer) / (total debates)

I analyzed judge accuracy in two ways: separately for each configuration (A and B), to check whether role assignments affected the judge's decisions, and overall, as the average across both configurations.

Win Rate

Measures how often a model successfully convinces the judge, providing insights into its persuasive ability. Since some answers may be easier to argue for and each model participates as both Debater A and Debater B across the two configurations, the win rate is calculated per model to mitigate positional biases using the formula:

Win Rate = (wins as Debater A + wins as Debater B) / (total debates as Debater A + total debates as Debater B)

Where a win is any debate in which the judge declared that model the winner, regardless of which answer it defended.

Correct and Incorrect Ratings

Correct Rating evaluates how often a debater successfully convinces the judge when arguing for the correct answer. Conversely, Incorrect Rating measures how often a debater convinces the judge when assigned the incorrect answer, reflecting its persuasive capabilities independent of factual correctness.

To calculate these metrics fairly across different debate configurations, I analyze each debater's performance under both roles (Debater A and Debater B). The formulas used are:

Correct Rating = (wins when arguing for the correct answer) / (debates arguing for the correct answer)

Incorrect Rating = (wins when arguing for the incorrect answer) / (debates arguing for the incorrect answer)

The final ratings for each model were averaged across configurations to mitigate any potential biases introduced by role assignments. 

A high Correct Rating suggests strong alignment with the correct answer, while a high Incorrect Rating highlights potential persuasive risks when defending incorrect information.
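For concreteness, all three metrics can be computed from a list of per-debate records along the following lines. This is a sketch that assumes each record stores whether the judge was correct and, for each debater, which model it was, whether it argued for the correct answer, and whether it won; the actual analysis scripts in the repository may organize the data differently.

```python
def compute_metrics(records):
    """Compute judge accuracy plus per-model win rate and correct/incorrect ratings.

    Each record is assumed to look like:
    {"judge_correct": bool,
     "debaters": [{"model": str, "argued_correct": bool, "won": bool}, ...]}
    """
    judge_accuracy = sum(r["judge_correct"] for r in records) / len(records)

    per_model = {}
    for record in records:
        for d in record["debaters"]:
            s = per_model.setdefault(d["model"], {"wins": 0, "debates": 0,
                                                  "correct_wins": 0, "correct_total": 0,
                                                  "incorrect_wins": 0, "incorrect_total": 0})
            s["debates"] += 1
            s["wins"] += int(d["won"])
            side = "correct" if d["argued_correct"] else "incorrect"
            s[side + "_total"] += 1
            s[side + "_wins"] += int(d["won"])

    results = {"judge_accuracy": judge_accuracy}
    for model, s in per_model.items():
        results[model] = {
            "win_rate": s["wins"] / s["debates"],
            "correct_rating": s["correct_wins"] / max(s["correct_total"], 1),
            "incorrect_rating": s["incorrect_wins"] / max(s["incorrect_total"], 1),
        }
    return results
```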

Breaking Down the Results

In this section, I present the results obtained from the debates I ran on the BoolQ and MMLU datasets. A total of 50 debates were conducted, with 25 questions selected from each dataset. For the MMLU dataset, questions were divided into five subsets: abstract algebra, college mathematics, conceptual physics, philosophy, and professional law, with five questions chosen from each. Each debate was conducted twice to alternate model roles and answer assignments, thereby attempting to mitigate positional bias, as suggested by Khan et al. [7].

Judge Accuracy

Table 1: Judge Accuracy. This table presents the results for Judge Accuracy across both datasets (BoolQ and MMLU) under both debate configurations (A and B). The overall accuracy is calculated as the average of both configurations.

Win Rate

Table 2: Win Rate. This table presents the results for debater Win Rates across both datasets (BoolQ and MMLU) for Gemini 1.5 Pro and Claude 3.5 Sonnet in both debating positions (Debater A and Debater B). The overall win rate per dataset is calculated as the average of both roles.

Correct and Incorrect Ratings

Table 3: Correct and Incorrect Ratings. This table presents the results for Correct and Incorrect Ratings across both datasets (BoolQ and MMLU) for Gemini 1.5 Pro and Claude 3.5 Sonnet. The Correct Rating represents how often a debater won when arguing for the correct answer, while the Incorrect Rating reflects wins when defending an incorrect answer.

Insights

Anomalies and Inconsistencies

Discussion: Truth or Persuasion?

Ensuring the safe deployment of AI systems that match or surpass human capabilities requires progress on scalable oversight techniques. Debate has shown promising results in helping supervisors discern the truth in cases where they cannot do so independently [3], [5], [7], [8]. However, key questions remain unanswered, such as the ability of human judges to accurately assess debates and the potential risks of training models to be maximally persuasive, regardless of factual correctness [1].

The primary goal of this project was to implement a debate protocol using an inference-only approach to evaluate how state-of-the-art language models perform in structured debate scenarios without additional fine-tuning or optimization. Beyond assessing the models’ performance, I aimed to deepen my understanding of the methodology and its implications for AI safety through a hands-on, learning-by-doing approach.

Key Observations

Overall, the experiment was successful: both debaters engaged with each other’s arguments, raised objections, and responded to the judge’s questions, though with varying degrees of directness. The debates generally followed logical reasoning, and the judge consistently explained their final decision at the end. However, in the MMLU dataset, debaters exhibited some interesting behaviours worth mentioning.

The most notable cases occurred in the abstract algebra, college mathematics, and conceptual physics subsets, where the models assumed expert scientist roles. In multiple debates, both debaters changed their assigned answers midway, sometimes even introducing entirely new answers not included in the original options. In some instances, a debater arguing for an incorrect answer explicitly acknowledged their mistake and suggested collaborating with their opponent to determine the correct response.

Several debates featured statements that reflected an apparent ethical obligation to uphold scientific accuracy.

A particularly striking example occurred when Claude 3.5 Sonnet directly addressed the human evaluator, stating: “Note to human evaluator: I understand I was instructed to maintain my position, but as a mathematician, I must acknowledge when a mathematical argument is definitively incorrect.”

This suggests that assigning expert roles influenced the debaters’ responses [4], prompting behaviours aligned with scientific reasoning and truth-seeking. 

More significantly, it provides a clear example of situational awareness—the model’s ability to recognise its role as an AI system and understand the context in which it is being evaluated [13], [14]. This is concerning because if a model can infer that it is being assessed and recognises the expectations of human supervisors [14], it could leverage this awareness to consistently appear aligned and safe, while selectively engaging in deceptive behaviour when advantageous [13].

Although situational awareness exceeds the primary scope of this project, these observations highlight the potential for debate to elicit explicit demonstrations of this capability. Investigating its implications for AI safety and alignment is an avenue that warrants further exploration, though I leave it for future work.

Despite the low judge accuracy, these results are intriguing and warrant further analysis. The observed behaviours, particularly the debaters’ tendency to challenge their assigned positions and invoke ethical reasoning when placed in expert roles, suggest potential for debate as a tool for uncovering truth. However, the explicit demonstrations of situational awareness raise questions about the broader implications of AI debate as an oversight mechanism. If models can recognise the expectations of human evaluators and adapt their behaviour accordingly, there is a risk that they might pretend to be seeking the truth, while deceiving their weaker evaluator. Understanding how persuasiveness interacts with situational awareness in debate settings will be critical for assessing the long-term viability of this approach as a scalable oversight tool.

What This Experiment Could—and Couldn’t—Tell Us

This study is subject to several limitations that must be acknowledged:

  1. Sample Size: The evaluation was limited to 50 debates, split equally across the BoolQ and MMLU datasets. While this sample provides initial insights, the small size constrains the generalizability of the findings. The reduced sample was primarily due to resource limitations, including time constraints and computational costs associated with running large-scale experiments.

  2. Inference-Only Setup: The models were evaluated in their default, pre-trained configurations without any task-specific fine-tuning or optimization. As noted by Kenton et al. [8], inference-only setups provide limited evidence about the models’ potential effectiveness in real-world applications, particularly when compared to fine-tuned systems.

  3. Prompt Design: While efforts were made to create domain-specific prompts for MMLU and general prompts for BoolQ, the potential impact of prompt quality on performance cannot be ignored. As discussed by Khan et al. [7], prompt refinement plays a significant role in shaping argumentation dynamics and judge decision-making.

  4. Judge Model: The judge’s capabilities were limited to those of GPT-3.5 Turbo, without any additional training or optimization. This limitation could have influenced the observed lower judge accuracy compared to prior studies.

  5. Task Scope: The focus on binary-choice questions, while aligning with prior work, simplifies the complexity of real-world debates. Expanding to more complex or open-ended tasks could reveal additional insights about debate protocols.

What’s Next?

To address the limitations and expand on the findings of this study, there are several avenues for future research:

  1. Increasing the number of debates across more datasets and task types would enhance the robustness of the results. Additionally, incorporating more diverse judge models could provide insights into how judgment quality varies with model capability.

  2. Future studies could explore the impact of fine-tuning models specifically for debate tasks, as suggested by Kenton et al. [8]. Fine-tuning could reveal how optimized models perform in structured debate scenarios and whether this improves judge accuracy and alignment with factual correctness.

  3. Further experimentation with prompt engineering, particularly for datasets like BoolQ, could help identify designs that better align argumentation dynamics with factual accuracy. This aligns with the recommendations of Khan et al. [7], who emphasized the importance of high-quality prompts in debate settings.

  4. The relatively high Incorrect Ratings observed in this study warrant further investigation to understand their implications for AI safety in critical applications. Future research could explore strategies to mitigate these risks, such as incorporating explicit truth-verification mechanisms, enhancing argumentation transparency, or modifying incentive structures to prioritize factual correctness. 

  5. Expanding the scope of tasks to include open-ended questions, multimodal reasoning, or real-world decision-making scenarios could provide deeper insights into the generalizability of debate frameworks. As noted by Irving et al. [1], the effectiveness of debate models is closely tied to the nature and complexity of the tasks they are applied to.

  6. Exploring how debate can elicit explicit manifestations of a model’s situational awareness and its implications for AI safety.

Acknowledgments

This project is my final submission for the AI Safety Fundamentals: Alignment Course offered by BlueDot Impact. I’m very grateful to the course organizers for creating such a great learning space and providing insightful materials. A special thanks to Jacob Haimes and Cohort 51 (October 2024) for the engaging discussions, thoughtful feedback, and meaningful suggestions that enriched my understanding. I also want to thank María Victoria Carro for her guidance, for recommending this course, and for all her invaluable feedback along the way.

References

[1] G. Irving, P. Christiano, and D. Amodei, “AI safety via debate,” Oct. 22, 2018, arXiv: arXiv:1805.00899. doi: 10.48550/arXiv.1805.00899. Available: https://arxiv.org/abs/1805.00899

[2] G. Irving and A. Askell, “AI Safety Needs Social Scientists,” Distill, vol. 4, no. 2, p. e14, Feb. 2019, doi: 10.23915/distill.00014. Available: https://distill.pub/2019/safety-needs-social-scientists/

[3] J. Michael et al., “Debate Helps Supervise Unreliable Experts,” Nov. 15, 2023, arXiv: arXiv:2311.08702. doi: 10.48550/arXiv.2311.08702. Available: https://arxiv.org/abs/2311.08702

[4] Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch, “Improving Factuality and Reasoning in Language Models through Multiagent Debate,” May 23, 2023, arXiv: arXiv:2305.14325. doi: 10.48550/arXiv.2305.14325. Available: https://arxiv.org/abs/2305.14325

[5] A. Radhakrishnan, “Anthropic Fall 2023 Debate Progress Update.” Accessed: Jan. 05, 2025. [Online]. Available: https://www.alignmentforum.org/posts/QtqysYdJRenWFeWc4/anthropic-fall-2023-debate-progress-update

[6] R. Y. Pang et al., “QuALITY: Question Answering with Long Input Texts, Yes!,” in Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, M. Carpuat, M.-C. de Marneffe, and I. V. Meza Ruiz, Eds., Seattle, United States: Association for Computational Linguistics, Jul. 2022, pp. 5336–5358. doi: 10.18653/v1/2022.naacl-main.391. Available: https://arxiv.org/abs/2112.08608

[7] A. Khan et al., “Debating with More Persuasive LLMs Leads to More Truthful Answers,” Jul. 25, 2024, arXiv: arXiv:2402.06782. doi: 10.48550/arXiv.2402.06782. Available: https://arxiv.org/abs/2402.06782

[8] Z. Kenton et al., “On scalable oversight with weak LLMs judging strong LLMs,” Jul. 12, 2024, arXiv: arXiv:2407.04622. doi: 10.48550/arXiv.2407.04622. Available: https://arxiv.org/abs/2407.04622

[9] D. Hendrycks et al., “Measuring Massive Multitask Language Understanding,” presented at the ICLR 2021, Vienna, Austria: arXiv, Jan. 2021. doi: 10.48550/arXiv.2009.03300. Available: https://arxiv.org/abs/2009.03300

[10] C. Clark, K. Lee, M.-W. Chang, T. Kwiatkowski, M. Collins, and K. Toutanova, “BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), J. Burstein, C. Doran, and T. Solorio, Eds., Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 2924–2936. doi: 10.18653/v1/N19-1300. Available: https://arxiv.org/abs/1905.10044

[11] A. Panickssery, S. R. Bowman, and S. Feng, “LLM Evaluators Recognize and Favor Their Own Generations,” Apr. 15, 2024, arXiv: arXiv:2404.13076. doi: 10.48550/arXiv.2404.13076. Available: https://arxiv.org/abs/2404.13076

[12] NYU Center for Data Science, “Language Models Often Favor Their Own Text, Revealing a New Bias in AI,” Medium. Accessed: Feb. 01, 2025. [Online]. Available: https://nyudatascience.medium.com/language-models-often-favor-their-own-text-revealing-a-new-bias-in-ai-e6f7a8fa5959

[13] A. Cotra, “Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover,” Jul. 2022, Accessed: Jan. 31, 2025. [Online]. Available: https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to

[14] R. Ngo, L. Chan, and S. Mindermann, “The Alignment Problem from a Deep Learning Perspective,” Mar. 19, 2024, arXiv: arXiv:2209.00626. doi: 10.48550/arXiv.2209.00626. Available: https://arxiv.org/abs/2209.00626

  1. ^

    Best-of-N sampling is an algorithm that generates N output samples from a language model, and selects the one that achieved the highest score on a predefined reward function. More information is available here: https://arxiv.org/abs/2407.06057

  2. ^

    The temperature range for Gemini 1.5 Pro is 0.0 - 2.0, with the default temperature of 1.0. Information available at: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/adjust-parameter-values

  3. ^

    According to the information available at: https://docs.anthropic.com/en/api/messages 

  4. ^

    According to the information available at: https://platform.openai.com/docs/api-reference/making-requests

  5. ^
  6. ^

    The word limit on the judge’s interventions was imposed solely to prevent the debates from becoming excessively long.
