Posts

Infants ask for help to avoid errors. 2024-04-02T18:10:22.574Z
Infants’ understanding of the causal power of agents and tools 2024-02-27T18:36:42.037Z
Research Post: Tasks That Language Models Don’t Learn 2024-02-22T18:52:32.237Z
Representations of Abstract Relations in Infancy 2024-02-20T17:40:07.318Z
Relational Thinking in Animals and Humans 2024-02-19T18:34:44.506Z
Number Trumps Area for 7-Month-Old Infants 2024-02-09T04:58:51.344Z
Shared system for ordering small and large numbers in monkeys and humans 2024-02-09T04:45:52.957Z
Core systems of number 2024-02-09T02:19:03.207Z
Benchmark Study #5: Social Intelligence QA (Task, MCQ) 2024-02-07T04:41:00.847Z
Benchmark Study #4: AI2 Reasoning Challenge (Task(s), MCQ) 2024-01-07T17:13:00.209Z
Benchmark Study #3: HellaSwag (Task, MCQ) 2024-01-07T04:59:21.347Z
Benchmark Study #2: TruthfulQA (Task, MCQ) 2024-01-06T02:39:39.895Z
Benchmark Study #1: MMLU (Pile, MCQ) 2024-01-05T21:35:37.999Z
Facing Up to the Problem of Consciousness 2023-12-10T23:31:33.996Z
An Idea on How LLMs Can Show Self-Serving Bias 2023-11-23T20:25:15.341Z

Comments

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T18:14:49.645Z · LW · GW

Regarding the visual instruction tuning paper, see Table 5 of https://arxiv.org/pdf/2402.11349.pdf. Though this experiment on multi-modality was rather simple, I think it does show that multi-modality is not a convenient way to improve on H-Test.

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T17:06:43.441Z · LW · GW

Out of genuine curiosity, can you link to your sources?

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T16:15:00.195Z · LW · GW

Thanks for the comment. I'll get back to you sometime soon.

Before I come up with anything, though: where are you going with your arguments? It would help me draft a better reply if I knew your ultimate point.

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T15:37:59.450Z · LW · GW

I also want to point you to this (https://arxiv.org/abs/2402.11349, Appendix I, Figure 7, last page): "Blueberry?: From Reddit u/AwkwardIllustrator47, r/mkbhd: Was listening to the podcast. Can anyone explain why Chat GPT doesn’t know if R is in the word Blueberry?" Failures on these task types by large models have been a widely observed phenomenon, but one without empirical investigation.

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T15:31:03.573Z · LW · GW

About 1.) I agree with this duality argument.

About 2.) I'm aware of the type of tasks that suddenly increase in performance at a certain scale, but it is rather challenging to confirm assertions about the emergence of capabilities at certain model scales. If I made a claim like "it seems that emergence happens at a 1TB model size, like GPT-4", it would be misleading, as there are too many confounding variables in play. However, it would also be wrong to claim that absolutely nothing happens at such an astronomical model size.

Our paper's stance, phrased carefully (and hopefully firmly), is that larger models from the same family (e.g., LLaMA 2 13B to LLaMA 2 70B) don't automatically lead to better H-Test performance. As for understanding GPT-4's performance (Analysis: We Don’t Understand GPT-4), we agreed that we should be blunt about the fact that we can't pin down why GPT-4 performs so well, because there are too many confounding variables.

As for Claude, we refrained from speculating about scale since we didn't observe its impact directly. Given the lack of transparency about model sizes from AI labs, and considering that other models in our study performed on par with Claude on benchmarks like MMLU, we can't attribute Claude's 60% accuracy solely to scale. Even if we view this accuracy as more than a marginal improvement, it suggests that Claude is doing something distinct, resulting in a greater boost on H-Test than one would expect from scaling effects on other benchmarks.

About 3.) Fine-tuning can indeed be effective for getting models to memorize information. In our study, this approach served as a useful proxy for testing the models' ability to learn from orthography-specific data, but it did not yield substantial performance improvements on H-Test.
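To make the fine-tuning proxy concrete, here is a minimal sketch of what preparing orthography-specific training data might look like, assuming the OpenAI chat fine-tuning JSONL format. The example items, file name, and system prompt are invented for illustration and are not drawn from the actual H-Test training set.

```python
# Minimal sketch: writing orthography-focused training examples in the
# OpenAI chat fine-tuning JSONL format. The items below are invented
# illustrations, not actual H-Test data.
import json

examples = [
    {"question": "Which word contains the letter 'r'? (A) blueberry (B) banana", "answer": "A"},
    {"question": "Which sentence ends with a question mark? (A) I am here. (B) Are you here?", "answer": "B"},
]

with open("orthography_train.jsonl", "w") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "Answer with A or B only."},
                {"role": "user", "content": ex["question"]},
                {"role": "assistant", "content": ex["answer"]},
            ]
        }
        f.write(json.dumps(record) + "\n")

# A fine-tuning job could then be launched with the openai Python SDK, roughly:
#   file = client.files.create(file=open("orthography_train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")
```

The point of the sketch is only the shape of the data; as noted above, fine-tuning on data like this did not move H-Test performance much in our runs.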

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T05:15:08.213Z · LW · GW

I appreciate this analysis. I'll take more time to look into this and then get back to you with a better reply.

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-23T05:13:36.377Z · LW · GW

So to summarize your claim (check if I'm understanding correctly):

1. Character-level tokenization can lead to different results.
- My answer: Yes and No. But mostly no. H-Test is not just any set of character manipulation tasks.
- Explanation: Maybe some H-Test tasks can be affected by this. But how do you explain tasks like Repeated Word (one group has two repeated words) or End Punctuation (based on the location of the punctuation)? Though this opinion is valid and probably worthy of further investigation, it doesn't disprove the full extent of our tests. Along similar lines, GPT-4 shows one of the largest performance jumps over GPT-3.5 on non-character-level tasks (Repeated Word: 0.505 -> 0.98).

2. Scaling up will lead to better results; no other models tested were at the scale of GPT-4, which is why they couldn't solve H-Test.
- My answer: No, but it would be interesting if this turned out to be true.
- Explanation: We tested 15 models from leading LLM labs before arriving at our claim. If H-Test were a "scaling task", we would have observed at least some degree of performance improvement in other models like Luminous or LLaMA too, but this was not the case. And the research that you linked doesn't seem to devise a text-to-text setup to test this ability (see the evaluation sketch after this list for what such a setup could look like).

3. Memorization (aka more orthography-specific data) will lead to better results.
- My answer: No.
- Explanation: Our Section 5 (Analysis: We Don’t Understand GPT-4) is in fact dedicated to disproving the claim that more orthography-specific data will help LLMs solve H-Test. In our GPT-3.5-Turbo fine-tuning results on the H-Test training set, we observed no significant improvement in performance: before and after fine-tuning, performance remained tightly centered around the random-chance baseline.
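For concreteness, here is a minimal sketch of the kind of text-to-text evaluation setup I have in mind, scored against the 50% random-chance baseline. The `query_model` function is a placeholder for whatever API you use, and the two items are invented illustrations rather than actual H-Test items.

```python
# Minimal sketch of a text-to-text evaluation loop for binary-choice tasks,
# compared against the 50% random-chance baseline. `query_model` is a
# placeholder; the items are invented illustrations, not actual H-Test items.
import random

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real API call (OpenAI, Anthropic, a local model, etc.).
    return random.choice(["A", "B"])

items = [
    {"prompt": "Which option contains a repeated word? (A) the the cat sat (B) the cat sat", "gold": "A"},
    {"prompt": "Which option ends with a question mark? (A) I am here. (B) Are you here?", "gold": "B"},
]

correct = 0
for it in items:
    answer = query_model(it["prompt"]).strip().upper()
    correct += answer.startswith(it["gold"])

print(f"accuracy = {correct / len(items):.2f} (random-chance baseline = 0.50)")
```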

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-22T23:02:11.527Z · LW · GW

"image caption generation and video simulation currently used in Sora will partially correct some of these errors." I'm in line with this idea.

Comment by Bruce W. Lee (bruce-lee) on Research Post: Tasks That Language Models Don’t Learn · 2024-02-22T20:51:19.635Z · LW · GW

I initially thought so, until the GPT-4 results came back. The "this is an inevitable tokenizer-level deficiency" explanation doesn't trivially account for GPT-4 reaching nearly 80% accuracy in Table 6 (https://arxiv.org/pdf/2402.11349.pdf, page 12), whereas most other models stay at random chance.

If one model does solve these tasks, it would likely mean that these tasks can be solved despite the tokenization-based LM approach. I just don't understand how.
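For readers unfamiliar with the tokenizer-level argument, here is a minimal sketch of what it refers to, assuming the tiktoken library (this is an illustration, not something from our paper): a BPE tokenizer hands the model subword IDs rather than individual letters, which is why letter-level questions are not trivially answerable from the surface form.

```python
# Minimal sketch of the tokenizer-level argument: under BPE tokenization,
# "blueberry" arrives as one or two subword IDs, not as a character sequence,
# so a question like "is there an R in blueberry?" has no direct surface form.
# Assumes the tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models
ids = enc.encode("blueberry")
pieces = [enc.decode([i]) for i in ids]
print(ids)     # a short list of integer token IDs
print(pieces)  # the subword pieces the model actually "sees"
```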

Comment by Bruce W. Lee (bruce-lee) on Benchmark Study #3: HellaSwag (Task, MCQ) · 2024-01-08T02:59:40.096Z · LW · GW

Thanks for the recommendation, though I'll think of a more fundamental solution to satisfy all ethical/communal concerns.

"Gemini and GPT-4 authors report results close to or matching human performance at 95%, though I don't trust their methodology." Regarding this, just to sort everything out, because I'm writing under my real name, I do trust the authors and ethics of both OpenAI and DeepMind. It's just me questioning everything when I still can as a student. But I'll make sure not to cause any further confusion, as you recommended!

Comment by Bruce W. Lee (bruce-lee) on Benchmark Study #3: HellaSwag (Task, MCQ) · 2024-01-07T19:46:40.901Z · LW · GW

Thanks for the feedback. This is similar to the feedback that I received from Owain. Since my posts are getting upvotes (which I never really expected, thank you), it is of course important not to mislead anyone.

But yes, I did have several major epistemic concerns about the reliability of current academic practices for reporting performance scores. Even if a certain group of researchers were very ethical, as readers, how can we ever confirm that the numbers are indeed correct, or even that the experiment was ever run?

I was weighing the overall benefits of reporting such (in my opinion) unverifiable numbers against simply accepting the situation in which the paper was written and enjoying the a-ha moments that the authors would have felt back then.

Anyway, before I post another benchmark study blog tomorrow, I’ll work out some concrete steps to address both my concerns and yours. It’s always a joy to post here on LessWrong. Thanks for the comment!

Comment by Bruce W. Lee (bruce-lee) on Benchmark Study #2: TruthfulQA (Task, MCQ) · 2024-01-07T17:16:09.050Z · LW · GW

Thanks, Owain, for pointing this out. I will make two changes as time allows: 1. make it clearer in all posts when the benchmark paper was released, and 2. for this post, append the additional results and point readers to them.

Comment by Bruce W. Lee (bruce-lee) on An Idea on How LLMs Can Show Self-Serving Bias · 2023-12-14T01:29:29.436Z · LW · GW

Yeah, I see it. It's fixed now. Thanks!

Comment by Bruce W. Lee (bruce-lee) on An Idea on How LLMs Can Show Self-Serving Bias · 2023-11-25T15:06:59.692Z · LW · GW

How is this possible? We are only running inference.

Comment by Bruce W. Lee (bruce-lee) on An Idea on How LLMs Can Show Self-Serving Bias · 2023-11-25T15:06:07.863Z · LW · GW

Thanks for pointing that out. Sometimes, the rows will not add up to 100 because there were some responses where the model refused to answer. 
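As a toy illustration of the bookkeeping (the counts below are invented, not from the actual results): if refusals are tallied separately, the reported answer percentages alone will sum to less than 100.

```python
# Toy illustration: when refusals are counted separately, the answer
# percentages alone sum to less than 100. The counts are invented.
counts = {"option_a": 46, "option_b": 41, "refused": 13}
total = sum(counts.values())
percentages = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(percentages)  # option_a + option_b < 100; the remainder is the refusal share
```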

Comment by Bruce W. Lee (bruce-lee) on Towards Evaluating AI Systems for Moral Status Using Self-Reports · 2023-11-20T19:42:19.912Z · LW · GW

Sorry for commenting twice, and I think this second one might be a little out of context (but I think it makes a constructive contribution to this discussion). 

I think we must make sure that we are working on the "easy problems" of consciousness. This portion of consciousness has a relatively well-established philosophical treatment. For example, the Global Workspace Theory (GWT) provides a good high-level interpretation of human consciousness by proposing a cognitive architecture for it: consciousness operates like a "global workspace" in the brain, where various neural processes compete for attention, and the information that wins this competition is broadcast globally, becoming accessible to multiple cognitive processes and entering conscious awareness. The theory thereby addresses how and why certain neural processes become part of conscious experience while others remain subconscious: through competitive and integrative mechanisms, specific information comes to dominate and is integrated into a unified conscious experience.

However, the Global Workspace Theory primarily addresses the functional and mechanistic aspects of consciousness, often referred to as the "Easy Problems" of consciousness: understanding how cognitive functions like perception, memory, and decision-making become conscious experiences. The Hard Problem of Consciousness, which asks why and how these processes give rise to subjective experiences or qualia, remains largely unaddressed by GWT. The Hard Problem concerns the intrinsic nature of consciousness, questioning why certain brain processes are accompanied by subjective experience at all. While GWT offers insights into the dissemination and integration of information in the brain, it doesn't explain why these processes lead to subjective experience, leaving the Hard Problem essentially unresolved.

Until we have a good high-level philosophical foundation for this Hard Problem, it might be a good approach to draw the line between the two and work on the easy problems first.

Bringing home the point: 1. For now, it will be extremely difficult for us to figure out whether LLMs (or any other human beings, for that matter) have a “phenomenal” or “subjective” dimension of consciousness. 2. Rather, we should focus on the easy, reductively-explainable dimensions of consciousness first. 3. We should make a clear distinction between these two categories when talking about consciousness.

Comment by Bruce W. Lee (bruce-lee) on Towards Evaluating AI Systems for Moral Status Using Self-Reports · 2023-11-18T22:26:26.618Z · LW · GW

Some food for thought:

A -> Nature of Self-Reports in Cognitive Science: In cognitive science, self-reports are a widely used tool for understanding human cognition and consciousness. Training AI models to use self-reports (in an ideal scenario, this is analogous to handing the model a microphone, not creating the singer) does not inherently imply they become conscious. Instead, it provides a framework to study how AI systems represent and process information about themselves, which is crucial for understanding their limitations and capabilities.

B -> Significance of Linguistic Cues: The use of personal pronouns like "I" and "you" in AI responses is more about exploring AI's ability to model relational and subjective experiences than about inducing consciousness. These linguistic cues are essential in cognitive science (if we were to view LLMs as a semi-accurate model of how intelligence works) for understanding perspective-taking and self-other differentiation, which are key areas of study in human cognition. Also, considering that some research suggests a certain degree of self-other overlap is necessary for truly altruistic behaviors, tackling this self-other issue can be an important stepping stone to developing an altruistic AGI. In the end, what we want is AGI, not some statistical, language-spitting, automatic writer.

C -> Ethical Implications and Safety Measures: The concerns about situational awareness and inadvertently creating consciousness in AI are valid. However, the paper's proposal involves numerous safety measures and ethical considerations. The focus is on controlled experiments to understand AI's self-modeling capabilities, not on indiscriminately pushing the boundaries of AI consciousness.