Is LLM Translation Without Rosetta Stone possible?
post by cubefox · 2024-04-11T00:36:46.568Z · LW · GW · 10 comments
This is a question post.
Suppose astronomers detect a binary radio signal, an alien message, from a star system many light years away. The message contains a large text dump (conveniently, about the size of a GPT-4 training dataset) composed in an alien language. Let's call it Alienese.[1]
Unfortunately we don't understand Alienese.
Until recently, it seemed impossible to learn a language without either
- correlating it to sensory experiences shared between the learner and other proficient speakers (like children learn their first language) or
- having access to a dictionary which translates the unknown language into another, known language. (The Rosetta Stone served as such a dictionary, which enabled deciphering Egyptian hieroglyphs.)
However, the latest large language models seem to understand languages quite well without using either of these methods. They are able to learn languages from raw text alone, albeit requiring much larger quantities of training text than the methods above.
This poses a fundamental question:
If an LLM understands language A and language B, is this sufficient for it to translate between A and B?[2]
Unfortunately, it is hardly possible to answer this question empirically using data from human languages. Large text dumps of, say, English and Chinese contain a lot of "Rosetta Stone" content. Bilingual documents, common expressions, translations into related third languages like Japanese, literal English-Chinese dictionaries etc. Since LLMs require a substantial amount of training text, it is not feasible to reliably filter out all this translation content.
But if we received a large text dump in Alienese, we could be certain that no dictionary-like connections to English are present. We could then train a single foundation model (a next token predictor, say a GPT-4 sized model) on both English and Alienese.
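As a rough illustration of the setup (a sketch with hypothetical names, ignoring tokenization details): the two corpora would simply be shuffled together into one training stream, with no alignment, labels, or language tags of any kind.

```python
import random

def mixed_training_stream(english_docs, alienese_docs, seed=0):
    """Shuffle documents from both corpora into a single stream, so one
    next-token predictor is trained on both languages without ever seeing
    a paired or labeled example."""
    rng = random.Random(seed)
    docs = list(english_docs) + list(alienese_docs)
    rng.shuffle(docs)
    for text in docs:
        yield text  # raw text only; the model never gets a language tag
```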
By assumption, this LLM would then be able, using adequate prompt engineering, to answer English questions with English answers, and Alienese questions with Alienese answers.
Of course we can't simply ask any Alienese questions, as we don't know the language. But we can create a prompt like this:
The following document contains accurate translations of text written in various languages (marked as "Original") into English.
Original: /:wYfh]%xy&v[$49F[CY1.JywUey03ei8EH:KWKY]xHRS#58JfAU:z]L4[gkf*ApjP+T!QYYVTF/F00:;(URv4vci$NU:qm2}$-!R3[BiL.RqwzP!6CCiCh%:wjzB10)xX}%Y45=kV&BFA&]ubnFz$i+9+#$(z;0FK(JjjWCxNZTPdr,v0].6G(/mKCr/J@c0[73M}{Gqi+d11aUe?J[vf4YXa4}w4]6)H]#?XBr:Wg35%)T#60B2:d+Z;jJ$9WgE?;u}uR)x1911k-CE?XhmUYMgt9(:CY7=S)[cKKLbZuU
English:
(Assume the garbled text consists of Alienese tokens taken from a random document in the alien text dump.)
Can we expect a prompt like this, or a similar one, to produce a reasonably adequate translation of the Alienese text into English?
Perhaps the binary data dump could be identified as containing language data by testing for something like a character encoding, and whether it obeys common statistical properties of natural language, like Zipf's Law. ↩︎
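A crude version of such a test might look like this (a sketch in Python, assuming the dump has already been split into candidate tokens): fit the slope of the rank-frequency curve on a log-log scale; natural-language text typically gives a slope near -1.

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Fit log(frequency) against log(rank); natural-language text
    typically yields a slope close to -1 (Zipf's law)."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope
```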
There is a somewhat similar question called Molyneux's problem, which asks whether agents can identify objects between two completely unrelated sensory modalities. ↩︎
Answers
answer by localdeity
Without regard to anything specific to LLMs... Math works the same for all conceivable beings. Beings that live in our universe, of sufficient advancedness, will almost certainly know about hydrogen and other elements, and fundamental constants like Planck lengths. So there will exist commonalities. And then you can build everything else on top of those. If need be, you could describe the way things looked by giving 2D pixel-grid pictures, or describe an apple by starting with elements, molecules, DNA, and so on. (See Contact and That Alien Message [LW · GW] for explorations of this type of problem.)
It's unlikely that any LLM resembling those of today would translate the word for an alien fruit into a description of their own DNA-equivalent and their entire biosphere... But maybe a sufficiently good LLM would have that knowledge inside it, and repeatedly querying it could draw that out.
↑ comment by cubefox · 2024-04-11T12:58:18.131Z · LW(p) · GW(p)
I guess my question would then be whether the translation would work if neither language contained any information on microphysics or advanced math. Would the model be able to translate e.g. "z;0FK(JjjWCxN" into "fruit"?
Replies from: Protagoras
↑ comment by Protagoras · 2024-04-12T23:29:51.785Z · LW(p) · GW(p)
The chances of the LLM being able to do this depend heavily on how similar the subjects discussed in the alien language are to things humans discuss. Removing areas where there is most likely to be similarity would reduce the chance that the LLM would find matching patterns in both. Indeed, that we're imagining aliens for the example already probably greatly increases the difficulty for the LLM.
answer by MichaelStJules
It's conceivable that the way characters/words are used across English and Alienese corresponds strongly enough that you could guess matching words much better than chance. But I'm not confident you'd get high accuracy.
Consider encryption. If you encrypted messages by mapping the same character to the same character each time, e.g. 'd' always gets mapped to '6', then this can be broken with decent accuracy by comparing frequency statistics of characters in your messages with the frequency statistics of characters in the English language.
If you mapped whole words to strings instead of character to character, you could use frequency statistics for whole words in the English language.
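To make the encryption analogy concrete, here is a minimal sketch of that kind of frequency attack on a single-character substitution cipher (the English letter ordering is just an illustrative constant):

```python
from collections import Counter

# English letters from most to least frequent (approximate, for illustration)
ENGLISH_BY_FREQUENCY = "etaoinshrdlcumwfgypbvkjxqz"

def frequency_decrypt(ciphertext):
    """Guess a monoalphabetic substitution key by pairing each cipher symbol,
    in order of how often it appears, with the English letter of the same
    frequency rank. Crude, but often enough to make a long message readable."""
    counts = Counter(c for c in ciphertext if not c.isspace())
    ranked_symbols = [sym for sym, _ in counts.most_common()]
    mapping = dict(zip(ranked_symbols, ENGLISH_BY_FREQUENCY))
    return "".join(mapping.get(c, c) for c in ciphertext)
```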
Then, between languages, this mostly gets way harder, but you might be able to make some informed guesses, based on
- how often you expect certain concepts to be referred to (frequency statistics, although even between human languages, there are probably very important differences)
- guesses about extremely common words like 'a', 'the', 'of'
- possible grammars
- similar words being written similarly, like verb tenses of the same verb, noun and verb forms of the same word, etc.
- (EDIT) Fine-grained associations between words, e.g. if a given word is used in a random sentence, how often another given word is used in that same sentence. Do this for all ordered pairs of words.
An AI might use these facts and many others, about much more fine-grained and specific uses of words and their associations, to make guesses (see the sketch below), but I'm not sure an LLM token predictor trained mostly on just the two languages in particular would do a good job.
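A minimal sketch of the kind of word-association statistics the last bullet describes (unordered pairs here, for simplicity): for each frequent word in a corpus, build a profile of how often it appears in the same sentence as every other frequent word. Comparing the shape of these profiles across the two corpora is one crude way to look for candidate word matches.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_profiles(sentences, vocab_size=1000):
    """For each of the vocab_size most frequent words, count how often it
    appears in the same sentence as each other frequent word.
    `sentences` is an iterable of lists of word strings."""
    word_counts = Counter(w for sent in sentences for w in set(sent))
    vocab = {w for w, _ in word_counts.most_common(vocab_size)}
    profiles = {w: Counter() for w in vocab}
    for sent in sentences:
        words = sorted(set(sent) & vocab)
        for a, b in combinations(words, 2):
            profiles[a][b] += 1
            profiles[b][a] += 1
    return profiles

# Words in English and Alienese with similarly shaped profiles (after
# normalising for frequency) would be candidate translation pairs, though
# actually aligning the two vocabularies is the hard part.
```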
EDIT: Unsupervised machine translation, as Steven Byrnes pointed out, seems to be on a better track.
Also, I would add that LLMs trained without perception of anything other than text don't really understand language. The meanings of the words aren't grounded, and I imagine it could be possible to swap some of them in a way that would mostly preserve the associations (a near-isomorphism), but I'm not sure.
10 comments
Comments sorted by top scores.
comment by Steven Byrnes (steve2152) · 2024-04-11T01:23:30.872Z · LW(p) · GW(p)
I think the standard technical term for what you’re talking about is “unsupervised machine translation”. Here’s a paper on that, for example, although it’s not using the LLM approach you propose. (I have no opinion about whether the LLM approach you propose would work or not.)
Replies from: cubefox
↑ comment by cubefox · 2024-04-11T03:14:25.949Z · LW(p) · GW(p)
Interesting reference! So an unsupervised approach from 2017/2018, presumably somewhat primitive by today's standards, already works quite well for English/French translation. This provides some evidence that the (more advanced?) LLM approach, or something similar, would actually work for English/Alienese.
Of course English and French are historically related, and arose on the same planet while being used by the same type of organism. So they are necessarily quite similar in terms of the concepts they encode. English and Alienese would be much more different and harder to translate.
But if it worked, it would mean that sufficiently long messages, with enough effort, basically translate themselves. A spiritual successor to the Pioneer plaque and the Arecibo message, instead of some galaxy-brained hopefully-universally-readable message, would simply consist of several terabytes of human-written text. Smart aliens could use the text to train a self-supervised Earthling/Alienese translation model, and then use this model to translate our text.
comment by Buck · 2024-04-12T00:31:18.724Z · LW(p) · GW(p)
Paul Christiano discusses this in “Unsupervised” translation as an (intent) alignment problem (2020)
Replies from: cubefox
↑ comment by cubefox · 2024-04-12T13:00:27.646Z · LW(p) · GW(p)
From the post:
Suppose that we want to translate between English and an alien language (Klingon). We have plenty of Klingon text, and separately we have plenty of English text, but it’s not matched up and there are no bilingual speakers.
We train GPT on a mix of English and Klingon text and find that it becomes fluent in both. In some sense this model “knows” quite a lot about both Klingon and English, and so it should be able to read a sentence in one language, understand it, and then express the same idea in the other language. But it’s not clear how we could train a translation model.
So he talks about the difficulty of judging whether an unsupervised translation is good: since there are no independent raters who understand both English and Alienese, translations can't be improved with RLHF.
He posted this before OpenAI succeeded in applying RLHF to LLMs. I now think RLHF generally doesn't improve translation ability much anyway, compared to prompting a foundation model. Based on what we have seen, it seems generally hard to improve raw LLM abilities with RLHF. Even if RLHF did improve translation relative to good prompting, I would assume doing RLHF on some known translation pairs (like English and Chinese) would also help for other pairs which weren't included in the RLHF data, e.g. by encouraging the model to mention its uncertainty about the meaning of certain terms when translating. Though again, this could likely be achieved with prompting as well.
He also mentions the more general problem of language models not knowing why they believe what they believe. If a model translates X as Y rather than as Z, it can't provide the reasons for its decision (like pointing to specific statistics about the training data), except via post hoc rationalisation / confabulation.
comment by ryan_greenblatt · 2024-04-11T01:02:46.505Z · LW(p) · GW(p)
Unfortunately, it is hardly possible to answer this question empirically using data from human languages. Large text dumps of, say, English and Chinese contain a lot of "Rosetta Stone" content. Bilingual documents, common expressions, translations into related third languages like Japanese, literal English-Chinese dictionaries etc. Since LLMs require a substantial amount of training text, it is not feasible to reliably filter out all this translation content.
I don't think this is clear. I think you might be able to train an LLM on a conlang created after the data cutoff, for instance.
As far as human languages go, I bet it works OK for big LLMs.
Replies from: JBlack
↑ comment by JBlack · 2024-04-11T03:09:48.270Z · LW(p) · GW(p)
I don't think this was a statement about whether it's possible in principle, but about whether it's actually feasible in practice. I'm not aware of any conlangs, before the cutoff date or not, that have a training corpus large enough for the LLM to be trained to the same extent that major natural languages are.
Esperanto is certainly the most widespread conlang, but (1) is very strongly related to European languages, (2) is well before the cutoff date for any LLM, (3) all training corpora of which I am aware contain a great many references to other languages and their cross-translations, and (4) the largest corpora are still less than 0.1% of those available for most common natural languages.
comment by Yair Halberstadt (yair-halberstadt) · 2024-04-11T03:11:58.789Z · LW(p) · GW(p)
I think this is a really interesting question, since it seems like it should neatly split the "LLMs are just next token predictors" crowd from the "LLMs actually display understanding" crowd.
If in order to make statements about chairs and tables an LLM builds a model of what a chair and a table actually are, and to answer questions about fgeyjajic and chandybsnx it builds a model of what they are, it should be able to notice that these models correspond. At the very least it should be surprising if it can't do that.
If it can't generalize beyond stuff in the training set, and doesn't display any 'true' intelligence, then it would be surprising if it can translate between two languages where it's never seen any examples of translation before.
comment by faul_sname · 2024-04-12T23:32:43.249Z · LW(p) · GW(p)
Similar question: Let's start with an easier but I think similarly shaped problem.
We have two next-token predictors. Both are trained on English text, but each one was trained on a slightly different corpus (let's say the first one was trained on all arXiv papers and the other one was trained on all public domain literature), and each one uses a different tokenizer (let's say the arXiv one used a BPE tokenizer and the literature one used some unknown tokenization scheme).
Unfortunately, the tokenizer for the second corpus has been lost. You still have the tokenized dataset for the second corpus, and you still have the trained sequence predictor, but you've lost the token <-> word mapping. Also, due to lobbying, the public domain is no longer a thing, so you don't have access to the original dataset to try to piece things back together.
You can still feed a sequence of integers which encode tokens to the literature-next-token-predictor, and it will spit out integers corresponding to its prediction of the next token, but you don't know what English words those tokens correspond to.
I expect, in this situation, that you could do stuff like "create a new sequence predictor that is trained on the tokenized version of both corpora, so that the new predictor will hopefully use some shared machinery for next token prediction for each dataset, and then do the whole sparse autoencoder thing to try and tease apart what those shared abstractions are to build hypotheses".
Even in that "easy" case, though, I think it's a bit harder than "just ask the LLM", but the easy case is, I think, viable.
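For what it's worth, here is a minimal sketch of the "sparse autoencoder thing" mentioned above (assuming PyTorch, and assuming you can already extract hidden activations from the jointly trained predictor): train an autoencoder with an L1 sparsity penalty on those activations, then inspect which sparse features fire on which corpus.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Decompose hidden activations into a larger set of sparsely active
    features, some of which may be shared between the two corpora."""
    def __init__(self, d_model, d_features, l1_coeff=1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        recon_loss = (reconstruction - activations).pow(2).mean()
        sparsity_loss = self.l1_coeff * features.abs().mean()
        return features, recon_loss + sparsity_loss

# Training loop sketch: run batches of activations (collected from the model
# while it reads each corpus) through the autoencoder, minimise the combined
# loss, then look for features that activate on analogous contexts in both.
```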
comment by Morpheus · 2024-04-11T09:27:36.672Z · LW(p) · GW(p)
Trying to learn a language from scratch, just from text, is a fun exercise for humans too. I recently tried this with Hindi after I had a disagreement with someone about the exact question of this post. I didn't get very far in two hours, though.
Replies from: cubefox
↑ comment by cubefox · 2024-04-11T12:41:05.274Z · LW(p) · GW(p)
I think this is almost impossible for humans to do. Even with a group of humans and decades of research. Otherwise we wouldn't have needed the Rosetta Stone to read Egyptian hieroglyphs. And would long have deciphered the Voynich manuscript.