Distillation of Meta's Large Concept Models Paper

post by NickyP (Nicky) · 2025-03-04T17:33:40.116Z · LW · GW · 3 comments

Contents

  "Large Concept Models" (LCM) Paper
    ARCHITECTURES
    BENCHMARKS
    CONCLUSION

Note: I had this as a draft for a while. I think it is accurate, but there may be errors. I am not in any way affiliated with the authors of the paper. 

Below I briefly discuss the "Large Concept Models" paper released by Meta, which tries to change part of the usual language-modelling paradigm. It has some limitations that normal language models don't have, but I spent the time to read the paper in relative depth, so here is a brief summary of it.

"Large Concept Models" (LCM) Paper

Large Concept Models aim to be a way to "improve language modelling" by "being more hierarchical". I think the easiest way to explain them is to compare them to normal decoder-only language models.

A normal LLM works by taking in and putting out single tokens. An LCM instead takes in and puts out sentence-level semantic vectors ("concepts"). The key model that makes this possible is SONAR, a text auto-encoder that maps a sentence to a fixed-size embedding vector and can decode such a vector back into text.
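To make that concrete, here is a minimal sketch of the generation loop, assuming hypothetical helpers `sonar_encode`, `sonar_decode` (wrapping the SONAR auto-encoder) and `lcm` (the trained concept predictor). This is my illustration, not the authors' code:

```python
# Minimal sketch of LCM-style generation. `sonar_encode`, `sonar_decode`,
# and `lcm` are hypothetical stand-ins, not the real SONAR / LCM APIs.

def generate(prompt_sentences, lcm, sonar_encode, sonar_decode, max_sentences=10):
    # Each "position" in the sequence is a whole sentence, embedded as one
    # fixed-size vector ("concept") by the SONAR encoder.
    concepts = [sonar_encode(s) for s in prompt_sentences]

    output_sentences = []
    for _ in range(max_sentences):
        next_concept = lcm(concepts)        # predict the next sentence embedding
        concepts.append(next_concept)
        # The SONAR decoder turns the predicted embedding back into text.
        output_sentences.append(sonar_decode(next_concept))
    return output_sentences
```

The point to notice is that the sequence model itself only ever sees one vector per sentence; all token-level work happens inside SONAR.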

The main benefit I can see is that it is likely much better at long contexts, since each position in its sequence is a whole sentence rather than a single token. Otherwise, it comes with some disadvantages.

ARCHITECTURES

The key difficulty: how to output new sentence-embedding vectors, since they live in a continuous space. They try a few approaches: directly regressing the next embedding (Base-LCM), quantising the embedding space (Quant-LCM)[1], and diffusion-based generation.

They find the diffusion models work significantly better. They had two diffusion variations, "One-Tower" and "Two-Tower", which seemed to work roughly equally well.[2]

These two models performed very similarly, so they decided to focus on the "Two-Tower" architecture, as it is less compute intensive.[3]
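As a rough picture of how Two-Tower generation works (a sketch based on my reading of the paper; the module names, call signatures, and noise schedule below are made up for illustration): a contextualizer Transformer encodes the preceding concept vectors once, and a denoiser Transformer then cross-attends to that output while iteratively denoising a random vector into the next concept.

```python
import torch

def generate_next_concept(contextualizer, denoiser, prev_concepts, n_steps=40):
    """Sketch of Two-Tower generation: predict the next concept by denoising.

    `contextualizer` and `denoiser` are assumed to be trained torch modules;
    `prev_concepts` is a (n_prev, d_model) tensor of preceding concept vectors.
    The linear timestep schedule here is illustrative only.
    """
    # First tower: encode the preceding concept vectors once.
    context = contextualizer(prev_concepts)

    # Start from pure noise in the concept-embedding space.
    x = torch.randn(prev_concepts.shape[-1])

    # Second tower: runs once per diffusion step, cross-attending to the
    # fixed context rather than re-encoding the whole prefix each step.
    for step in reversed(range(n_steps)):
        t = torch.tensor(step / n_steps)    # normalised timestep
        x = denoiser(x, t, context)         # one denoising update
    return x                                # predicted next concept vector
```

This split is what footnote [3] is about: the context only needs to be encoded once, and only the denoiser runs at every diffusion step.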

In the rest of the paper, they explore hyperparameters and other design choices, scale the model up, and compare it to normal LLMs.

 

BENCHMARKS

Disappointingly, they do not benchmark the model on any "normal benchmarks" like MMLU or similar. They state: "As longform text generation is the main challenge for LCM, our benchmarking is mainly focused on generative tasks". I will just provide two representative benchmark results from the paper.

First, they compare the different Large Concept Model approaches after instruction fine-tuning. For the 1.6B-size models, we see that the two diffusion models significantly outperform the other LCM methods. However, we also see that the "SmaLLAMA" baseline of the same size performs better still.

Here, Coherence (↑) measures how naturally the generated text flows and connects (0-1 scale), and R-L (↑) is ROUGE-L, the overlap between the generated and reference text.

| Model | Coherence (↑) | R-L (↑) |
|---|---|---|
| BASE-LCM | 0.482 | 23.69 |
| QUANT-LCM | 0.847 | 30.87 |
| ONE-TOWER | 0.968 | 33.40 |
| TWO-TOWER | 0.938 | 33.64 |
| SMALLAMA | 0.984 | 34.88 |

 

They scale the Two-Tower model up to 7B parameters to compare against other models.

Here is a representative benchmark from the paper, comparing summary quality from the LCM against some similarly sized models.

LCFO 10% - summarise text in 10% of the original length:

| Model | Word Ratio | R-L (↑) | OVL-3 (↑) | REP-4 (↓) | CoLA (↑) | SH-4 (↑) | SH-5 (↑) |
|---|---|---|---|---|---|---|---|
| GEMMA-7B-IT | 0.150 | 29.25 | 0.164 | 6.427 | 0.667 | 0.377 | 0.194 |
| MISTRAL-7B-V0.3-IT | 0.549 | 25.00 | 0.537 | 6.289 | 0.848 | 0.660 | 0.306 |
| LLAMA-3.1-8B-IT | 0.128 | 42.85 | 0.243 | 3.804 | 0.907 | 0.486 | 0.310 |
| TWO-TOWER-7B-IT | 0.089 | 29.38 | 0.202 | 3.00 | 0.791 | 0.623 | 0.183 |

This table shows performance on the LCFO.10% task (long context summarization, where the output should be 10% of the input length). I don't intuitively understand most of the metrics that well, but roughly: Word Ratio is the length of the generated summary relative to the source document (so the target here is 0.10); R-L is ROUGE-L overlap with the reference summary; OVL-3 is 3-gram overlap with the source; REP-4 measures repeated 4-grams (lower is better); CoLA is a fluency/grammaticality score; and SH-4 and SH-5 are model-based (SEAHORSE) scores for attribution and for capturing the main ideas.

The model seems OK, but maybe not spectacular: it follows the 10% length target most closely and repeats itself the least, but it lags behind LLAMA-3.1-8B on ROUGE-L, fluency, and main-idea coverage.

 

CONCLUSION

Overall, the LCM seems like an interesting model in some ways, and has the potential benefit of using up context much more slowly than other models, but at the moment it doesn't seem like much of an improvement over them. It also loses some of the properties you get from tokenization that make training models easy.

 

  1. ^

    Note that they actually have two methods for Quant-LCM, but as they decide not to pursue either approach, I won't go into much detail here. You can see the original paper for details on that.

  2. ^

    Note that the diffusion decoder block actually has self-attention + cross-attention + MLP, but they do not let each position attend to any positions other than itself, so the self-attention is effectively redundant. They state:

    The self-attention layers in the denoiser do only attend to the current position i.e., we do not attend to the preceding noised context. The self-attention layers were kept for consistency with a standard Transformer block and for the possible extension of denoising multiple vectors at once.

  3. ^

    It is less compute intensive since for one-tower, you need to pass through all 32 layers for all N diffusion steps, but for the two-tower you only need to pass through the encoder once and through the decoder for the N diffusion steps.

3 comments


comment by Martin Vlach (martin-vlach) · 2025-03-07T05:44:03.734Z · LW(p) · GW(p)

Comparing to Gemma1, classic BigTech😅

 

And I seem to miss info on the effective context length..?

Replies from: Nicky
comment by NickyP (Nicky) · 2025-03-07T21:51:47.037Z · LW(p) · GW(p)

Yeah, the context length was 128 concepts for the small tests they did between architectures, and 2048 concepts for the larger models.

How this exactly translates is kind of variable. They limit the concepts to be around 200 characters, but this could be any number of tokens. They say they trained the large model on 2.7T tokens and 142B concepts, so on average 19 tokens per concept.

The 128 would translate to 2.4k tokens, and the 2048 concepts would translate to approx 39k tokens.
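As a quick sanity check of that arithmetic (a sketch just re-deriving the numbers above from the figures reported in the paper):

```python
# Re-deriving the rough numbers above.
tokens = 2.7e12                            # training tokens for the large model
concepts = 142e9                           # training concepts
tokens_per_concept = tokens / concepts

print(round(tokens_per_concept))           # ~19 tokens per concept
print(round(128 * tokens_per_concept))     # ~2.4k tokens (128-concept context)
print(round(2048 * tokens_per_concept))    # ~39k tokens (2048-concept context)
```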

comment by Martin Vlach (martin-vlach) · 2025-03-07T05:35:05.877Z · LW(p) · GW(p)

read spent the time to read

typo?