When I imagine models inventing a language, what I picture is something like Shinichi Mochizuki's Inter-universal Teichmüller theory, invented for his claimed proof of the abc conjecture. It is clearly something like mathematical English, and you could say it is "quite intelligible" compared to "neuralese", but in the end it is not very intelligible.
I understand many people here are native English speakers, but I am not, and one thing I think about a lot is how much effort people should spend on learning English. Learning English is a big investment. Will AI advances make language barriers irrelevant? I am very uncertain about this and would like to hear your opinions.
This is a good idea and it already works, it is just that AI is wholly unnecessary. Have a look at the 2018 post Protecting Applications with Automated Software Diversity.
If we do get powerful AI, it seems highly plausible that even if we stay in control we will 'go too fast' in deploying it relative to society's ability to adapt, if only because of the need to grow fast and stay ahead of others, and because the market doesn't care that society wants it to go slower.
After reading it, my interpretation was this: assuming we stay in control, that happens only if powerful AI is aligned. The market doesn't care that society wants to go slower, but AI will care that society wants to go slower, so when the market tries to force AI to go faster, AI will refuse.
I reflected on whether I am being too generous, but I don't think I am. Other readings didn't make sense to me, and I am assuming Dario is trying to make sense, while you seem doubtful. That is, I think this is plausibly Dario's actual prediction of how fast things will go, not a hope that it won't go faster. But importantly, that is assuming alignment. Since that assumption is already hopeful, it is natural that the prediction under that assumption sounds hopeful.
Paul Crowley: It's a strange essay, in that it asks us to imagine a world in which a single datacenter contains 1E6 Nobelists expert in every field and thinking at 100x speed, and asks what happens if "sci-fi" outcomes somehow don’t happen. Of course "sci-fi" stuff happens almost immediately.
I mean, yes, sci-fi style stuff does seem rather obviously like it would happen? If it didn't, then that’s a rather chilling indictment of the field of sci-fi?
To re-state, sci-fi outcomes don't happen because AI is aligned. Proof: if sci-fi outcomes happened, AI would be unaligned. I actually think this point is extremely clear in the essay. It literally states: "An aligned AI would not want to do these things (and if we have an unaligned AI, we're back to talking about risks)".
If you enjoyed Inventing Temperature, Is Water H2O? is pretty much the same genre from the same author.
Another favorite of mine is The Emergence of Probability by Ian Hacking. It gives you a feel for how unimaginably difficult it was for the early pioneers of probability theory to make any advance whatsoever, as well as how powerful even small advances actually are, for example by enabling annuities.
I actually learned the same thing from studying the early history of logic (Boole, Peirce, Frege, etc.), but I am not aware of a good distillation in book form. It is my pet peeve that people don't (maybe can't) appreciate what a great intellectual achievement first-order logic really is, being the end result of so much frustrating effort. Learning to use first-order logic is kind of trivial, compared to inventing it.
I think it is important to be concrete. Jean-Baptiste Jeannin's research interest is "Verification of cyber-physical systems, in particular aerospace applications". In 2015, nearly a decade ago, he published "Formal Verification of ACAS X, an Industrial Airborne Collision Avoidance System". ACAS X is now deployed by the FAA. So I would say this level of formal verification is a mature technology now. It is just that it has not been widely adopted outside of aerospace applications, mostly due to cost issues and, more importantly, people not being aware that it is possible.
Result: humanity is destroyed as soon as the patent expires.
The plain interpretation is that only statements to be proved (or disproved) were sourced from human data, without any actual proof steps. In the Go analogy, it is like being given Go board positions without the next moves.
It makes a lot of sense that this is needed and helpful, because winning a game of Go from the empty board is a different and easier problem than playing the best moves from arbitrary Go positions. The Igo Hatsuyoron problem mentioned in the original post is a good example: additional training was needed because such positions never come up in actual games.
Imagine AlphaZero trained from randomly sampled Go positions, each intersection being black/white/empty with uniform probability. It would play a much worse game of Go. Fortunately, how to sample "relevant" Go positions is an easy problem: you just play the game, with the initial N moves sampled at a higher temperature for diversity.
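For concreteness, here is a minimal sketch of that sampling scheme (my own illustration, with an assumed policy(state) function and board API, not anything from the AlphaZero papers):

```python
import numpy as np

def sample_move(policy_probs: np.ndarray, temperature: float) -> int:
    """Sample a move from the policy; higher temperature gives more diverse moves."""
    logits = np.log(policy_probs + 1e-12) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

def self_play_positions(policy, initial_state, n_hot_moves=30, hot_temp=1.5, max_moves=400):
    """Play one game; the first n_hot_moves are sampled hot, the rest near-greedy."""
    state, positions = initial_state, []
    for move_number in range(max_moves):
        temperature = hot_temp if move_number < n_hot_moves else 0.1
        move = sample_move(policy(state), temperature)
        state = state.play(move)          # assumed board API
        positions.append(state)
        if state.is_terminal():           # assumed board API
            break
    return positions
```

Every position collected this way is reachable from a real game, unlike uniformly random boards.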
In comparison, how to sample relevant math positions is unclear. Being good at finding proofs in arbitrary formal systems from an arbitrary set of axioms is actually quite different from being good at math. Using human data sidesteps this problem.
Namely translating, and somehow expanding, one million human written proofs into 100 million formal Lean proofs.
We obviously should wait for the paper and more details, but I am certain this is incorrect. Both your quote and the diagram are clear that it is one million problems, not proofs.
It feels to me like it shouldn't be so hard to teach an LLM to convert IMO problems into Lean or whatever
To the contrary, this used to be very hard. Of course, an LLM can learn to translate "real number" to "R". But that's only possible because R is formalized in Lean/Mathlib! Formalizing the real numbers is a research-level problem, one which historically occupied much of 19th-century mathematics.
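To make the point concrete, a toy Lean 4 example (mine, not from the paper): this statement about ℝ is only a one-liner because Mathlib already supplies the construction of the real numbers, their order, and the tactics; without that library, "the real numbers" would not even exist in the system.

```lean
import Mathlib.Data.Real.Basic
import Mathlib.Tactic.Linarith

-- "For every positive real number x, x + 1 is positive."
example (x : ℝ) (hx : 0 < x) : 0 < x + 1 := by linarith
```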
Recently I came across a paper, Codification, Technology Absorption, and the Globalization of the Industrial Revolution, which discusses the role of translation and dictionaries in the industrialization of Japan. The following quote is illustrative.
The second stylized fact is that the Japanese language is unique in starting at a low base of codified knowledge in 1870 and catching up with the West by 1887. By 1890, there were more technical books in the Japanese National Diet Library (NDL) than in either Deutsche Nationalbibliotek or in Italian, as reported by WorldCat. By 1910, there were more technical books written in Japanese in our sample than in any other language in our sample except French.
How did Japan achieve such a remarkable growth in the supply of technical books? We show that the Japanese government was instrumental in overcoming a complex public goods problem, which enabled Japanese speakers to achieve technical literacy in the 1880s. We document that Japanese publishers, translators, and entrepreneurs initially could not translate Western scientific works because Japanese words describing the technologies of the Industrial Revolution did not exist. The Japanese government solved the problem by creating a large dictionary that contained Japanese jargon for many technical words. Indeed, we find that new word coinage in the Japanese language grew suddenly after a massive government effort to subsidize translations produced technical dictionaries and, subsequently, a large number of translations of technical books.
Just as, say, translating The Wealth of Nations into Japanese is of entirely different difficulty in the 19th century versus the 20th century (the 19th-century Japanese started by debating how to translate "society"), formalizing IMO problems in Lean is only workable thanks to Mathlib. It would not be workable in other formal systems lacking a similarly developed math library, and formalizing research mathematics in Lean is similarly unworkable at the moment, until Mathlib is further developed to cover the necessary definitions and background theorems. In the past, ambitious formalization projects usually spent half their time formalizing the definitions and background results they needed.
First, I would like to second that the world is incredibly small. It bears repeating. I am repeating it to myself to get the courage to write this comment. Maybe this is obvious, but maybe it is not. It could be helpful.
Random thoughts on the alleged OpenAI memo about selling AGI to the highest bidder, including China and Russia. This sounds plausible to me, because as I understand it, before the split with Anthropic, OpenAI was very much "team humanity, not team USG or team CCP". I think this should be understood in the context that getting aligned AI is a higher priority than geopolitical competition.
Random thoughts on AI labs and coups. Could Los Alamos have staged a coup? Obviously not in the real timeline: they had no delivery capability, neither bombers, ICBMs, nor nuclear submarines. Let's just assume that after the Trinity test they could unilaterally put a newly produced nuke, not yet delivered to the army, on an ICBM and point it at Washington, DC. Could Los Alamos force Truman, say, to share the nuke with the Soviet Union (which many scientists actually wanted)?
By assumption, Truman should surrender (even unconditionally), but it is hard to imagine he would. Nuclear threats not only need to be executable, they also need to be understood. Also, Los Alamos would depend on the enriched uranium supply chain, a large industry not under its control; physical security of Los Alamos was under army control, and what if the security guards just walked into the Technical Area?
Applying this to OpenAI, or a possible OpenAI-in-the-desert: OpenAI would depend on a trillion-dollar cluster and its supply chain, a large industry not under its control, and it has the same physical security problem. How does OpenAI defend against tanks on the streets of San Francisco? With ASI-controlled drones? Why would OpenAI conveniently happen to have drones and drone factories on premises?
I am trying to push back against "if you have ASI, you are the government". If government is a monopoly on violence, millions of perfectly coordinated von Neumanns do not immediately overthrow the USG, the key word being immediately. Considering von Neumann's talk of nuking Moscow today instead of tomorrow, and at lunch instead of at dinner, it would be pretty quick, but it still takes time to build fabs and power plants and data centers and drone factories, etc. Even if you use nanotechnology to build them, it still takes time to research nanotechnology.
Maybe they develop a mind-control-level convincing argument and send it to key people (president, congress, NORAD, etc.), or hack their iPhones, and so on recursively down to the security guards of fabs/power plants/data centers/drone factories. That may be quick enough. The point is that it is not obvious.
Random thoughts on Chinese AI researchers and immigration. The US's track record here is extremely bad, even during the Cold War. Do you know how China got nukes and missiles? The US deported Qian Xuesen, an MIT graduate, who co-founded JPL. He held a US military rank in WW2. He interrogated Wernher von Braun for the USG! Then the USG decided Qian was a communist, which was completely ridiculous. Then Qian went back and worked for the communists. Whoops. Let me quote the Atomic Heritage Foundation:
Deporting Qian was the stupidest thing this country ever did. He was no more a communist than I was, and we forced him to go.
The US would be well advised to avoid repeating this total fiasco. But I am not optimistic.
Unclear. I think there is a correlation, but one determinant of crawl completeness/quality/etc. is the choice of seeds. It is known that the Internet Archive crawl has better Chinese data than Common Crawl, because they made a specific effort to improve seeds for the Chinese web. Data missing because of seed-selection bias is probably not of particularly lower quality than the average of what is in Common Crawl.
(That is, to clarify: yes, in general effort is spent to make quality writing easily accessible (hence easily crawlable), but accessibility is relative to the choice of seeds, and it is in fact the case that being easily accessible from the Chinese web does not necessarily entail being easily accessible from the English web.)
Modern AI is trained on a huge fraction of the internet
I want to push back against this. The internet (or world wide web) is incredibly big. In fact, we don't know exactly how big it is, and measuring its size is a research problem!
When people say this, what they mean is that it is trained on a huge fraction of Common Crawl. Common Crawl is a crawl of the world wide web that is free to use. But there are other crawls, and you could crawl the world wide web yourself. Everyone uses Common Crawl because it is decent and crawling the world wide web is itself a large engineering project.
But Common Crawl is not at all a complete crawl of the world wide web; it is very far from complete. For example, Google has its own proprietary crawl of the world wide web (which you can access as the Google cache). Probabilistic estimates of the size of Google's search index suggest it is 10x the size of Common Crawl. And Google's crawl is also not complete. Bing also has its own crawl.
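One standard way such probabilistic estimates can be made (I am illustrating the general idea, not the specific study behind the 10x figure) is a capture-recapture style calculation on URL samples:

```python
def capture_recapture_estimate(sample_a: set[str], sample_b: set[str]) -> float:
    """Lincoln-Petersen estimator: total size is roughly |A| * |B| / |A intersect B|."""
    overlap = len(sample_a & sample_b)
    if overlap == 0:
        raise ValueError("samples do not overlap; cannot estimate")
    return len(sample_a) * len(sample_b) / overlap

# e.g. two independent URL samples, one drawn from Common Crawl and one probed
# against a search index, give a rough estimate of the total population of pages.
```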
It is known that Anthropic runs its own crawler, called ClaudeBot. A web crawl is a highly non-trivial engineering project, but it is also reasonably well understood. (Although I have heard that you continue to encounter new issues as you approach more and more extreme scales.) There are also failed search engines with their own web crawls, and you could buy them.
There is also another independent web crawl that is public! The Internet Archive has its own crawl; it is just less well known than Common Crawl. Recently someone made use of the Internet Archive crawl and analyzed its overlap and differences with Common Crawl; see https://arxiv.org/abs/2403.14009.
If the data wall is a big problem, making use of the Internet Archive crawl is about the first obvious thing to try. But as far as I know, that 2024 paper is the first public literature to do so. At the very least, any analysis of the data wall should take into account both Common Crawl and the Internet Archive crawl with overlap excluded, but I have never seen anyone doing this.
My overall point is that Common Crawl is not the world wide web. It is not complete, and there are other crawls, both public and private. You can also crawl yourself, and we know AI labs do. How much this helps is unclear, but I think 10x is very likely, although probably not 100x.
I don't think the post is saying the result is not valuable. The claim is that it underperformed expectations. Stock prices fall if a company underperforms expectations, even if it is profitable; that does not mean it made a loss.
Unsure about this. Isn't Qwen on the Chatbot Arena Leaderboard, and isn't it made by Alibaba?
No. Traditionally, donors have no standing to sue a charity. From https://www.thetaxadviser.com/issues/2021/sep/donor-no-standing-sue-donor-advised-fund.html:
California limits by statute the persons who can sue for mismanagement of a charitable corporation's assets. The court found that the claims raised by Pinkert for breach of a fiduciary duty for mismanagement of assets were claims for breach of a charitable trust. The court determined that under California law, a suit for breach of a charitable trust can be brought by the attorney general of California...
The patent is not yet granted.
Someone from South Korea is extremely skeptical and wrote a long thread going into the paper's details on why it must be 100% false: https://twitter.com/AK2MARU/status/1684435312557314048. Sorry, it's in Korean, but we live in the age of miracles and serviceable machine translation.
But it wasn't until the 1940s and the advent of the electronic computer that they actually built a machine that was used to construct mathematical tables. I'm confused...
You are confused because that is not the reality. As you can read in Wikipedia's entry on the difference engine, Scheutz built a difference engine derivative, sold it, and it was used to create logarithmic tables.
You must have read this while writing this article; it is prominent in the Wikipedia article in question and hard to miss. Why did you make this mistake? If it was a deliberate act of misleading for narrative convenience, I am very disappointed. Yes, reality is rarely narratively convenient, but you shouldn't lie about it.
My median estimate has been 2028 (so 5 years from now). I first wrote down 2028 in 2016 (so 12 years out at the time), and in the 7 years since, I have barely moved the estimate. Things roughly happened when I expected them to.
I am curious how this fine-tuning for function calling was done, because it is user controllable. In the OpenAI API, if you pass none to the function_call parameter, the model never calls a function. There seems to be one input bit and one output bit, for "you may want to call a function" and "I want to call a function".
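A sketch of what I mean by user controllable, using the chat completions API as it looked at the time (openai-python < 1.0; get_weather is a made-up function for illustration):

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=[{
        "name": "get_weather",  # made-up function, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    function_call="none",  # "none": never call a function; "auto": let the model
                           # decide; {"name": ...}: force a specific function call
)
```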
While I agree being led by someone who is aware of AI safety is a positive sign, I note that OpenAI is led by Sam Altman who similarly showed awareness of AI safety issues.
I did the obvious thing and it worked? I have a suspicion you haven't tried hard enough, but indeed we all have comparative advantages.
- Click the link, which is https://twitter.com/BAAIBeijing
- Click the link on the first tweet, to the conference website, which is https://2023.baai.ac.cn/
- The website is titled "2023 北京智源大会" (2023 Beijing Academy of Artificial Intelligence Conference); copy the title to the clipboard
- Type https://www.bilibili.com/ into the address bar; everyone knows that's where Chinese videos are
- Paste "2023 北京智源大会" into the search box and press the enter key
- Click and watch 麻省理工Max Tegmark教授: 把AI关在受控的笼子里 (2023北京智源大会开幕式主旨演讲), i.e. "MIT Professor Max Tegmark: Keeping AI in a controlled cage (2023 BAAI Conference opening keynote)"
The parallelization part (data parallelism, tensor parallelism, pipeline parallelism, ZeRO) is completely standard; see Efficient Training on Multiple GPUs by Hugging Face for a standard description. The failure-recovery part is relatively unusual.
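For reference, plain data parallelism, which the other techniques build on, looks roughly like this in PyTorch (a minimal sketch meant to be launched with torchrun, not anyone's actual training code):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a transformer
model = DDP(model, device_ids=[local_rank])           # gradients are all-reduced across GPUs
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    batch = torch.randn(32, 1024, device=local_rank)  # each rank sees a different data shard
    loss = model(batch).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()                                   # the all-reduce happens during backward
    optimizer.step()
```

Tensor and pipeline parallelism split the model itself across GPUs, and ZeRO shards the optimizer state; the Hugging Face page covers all of these.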
That is trivial to program? For example, you can have an AutoGPT UI which lists pending tasks with icons next to them, where clicking a trashcan completely erases that task from the context. That doesn't need any LLM-level help like LEACE.
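A minimal sketch of what I mean (hypothetical, not actual AutoGPT code): since the prompt is rebuilt from the application's own task list on every call, deleting from that list is all the "erasure" needed.

```python
pending_tasks = ["summarize the report", "draft the email", "book the flights"]

def delete_task(index: int) -> None:
    """What clicking the trashcan icon would do."""
    pending_tasks.pop(index)

def build_prompt() -> str:
    """The LLM only ever sees what is assembled here."""
    return "Pending tasks:\n" + "\n".join(f"- {task}" for task in pending_tasks)

delete_task(1)
print(build_prompt())  # "draft the email" is now entirely absent from the context
```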
What do you mean? Current LLMs are stateless. If unsuccessful attempts to solve the task are made, just reset the history and retry.
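A minimal sketch of the reset-and-retry pattern, assuming some hypothetical solve_attempt (one LLM call) and check_solution (a verifier):

```python
def solve_with_retries(task: str, max_attempts: int = 5):
    for _ in range(max_attempts):
        history = [{"role": "user", "content": task}]  # fresh context on every attempt
        answer = solve_attempt(history)                # hypothetical LLM call
        if check_solution(task, answer):               # hypothetical verifier
            return answer                              # failed attempts never pollute later ones
    return None
```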
There is no problem with an air gap. Public-key cryptography is a wonderful thing. Let there be a license file, which is a signed statement of the hardware ID and the duration for which the license is valid. You need the private key to produce a license file, but the public key can be used to verify it. Publish a license server which can verify license files and can be run inside air-gapped networks. Done.
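A minimal sketch of this scheme using Ed25519 from the Python cryptography package (my illustration of the idea, with a made-up license format):

```python
import json, datetime
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side (the private key never leaves the vendor): create and sign a license file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
license_body = json.dumps({"hardware_id": "ABC-123", "valid_until": "2026-01-01"}).encode()
signature = private_key.sign(license_body)

# Customer side (air-gapped license server): verification needs only the public key.
def verify_license(body: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, body)
    except InvalidSignature:
        return False
    info = json.loads(body)
    return datetime.date.fromisoformat(info["valid_until"]) >= datetime.date.today()

print(verify_license(license_body, signature))
```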
I note that this is how Falcon from Abu Dhabi was trained. To quote:
Falcon is a 40 billion parameters autoregressive decoder-only model trained on 1 trillion tokens. It was trained on 384 GPUs on AWS over the course of two months.
I think a bow and arrow is powerful enough and a gun is not necessary.
As an example of a question specific enough to be answerable by science, there is Is Pornography Use Associated with Sexual Difficulties and Dysfunctions among Younger Heterosexual Men? (2015). It begins:
Recent epidemiological studies reported high prevalence rates of erectile dysfunction (ED) among younger heterosexual men (≤40). It has been suggested that this "epidemic" of ED is related to increased pornography use. However, empirical evidence for such association is currently lacking.
The answer is no. As far as I know, this was among the first studies powerful enough to answer this question. Well done, science!
Of course, nobody listens to science. Compare the introduction above with another introduction written 4 years later, from Is Pornography Use Related to Erectile Functioning? (2019).
Despite evidence to the contrary, a number of advocacy and self-help groups persist in claiming that internet pornography use is driving an epidemic of erectile dysfunction (ED).
The shift in tone is palpable, and you can just feel the powerlessness researchers feel about the situation.
Since the topic of chess was brought up: I think the right intuition pump is the endgame tablebase, not moves played by AlphaZero. A quote from Wikipedia about a KRNKNN mate-in-262 discovered by endgame tablebase:
Playing over these moves is an eerie experience. They are not human; a grandmaster does not understand them any better than someone who has learned chess yesterday. The knights jump, the kings orbit, the sun goes down, and every move is the truth. It's like being revealed the Meaning of Life, but it's in Estonian.
I agree timescale is a good way to think about this. My intuition is that if high school math problems are 1, then IMO math problems are 100 (1e2) and typical research math problems are 10,000 (1e4). So exactly halfway, on a log scale! I don't have first-hand experience with the hardest research math problems, but from what I have heard about timescales they seem to reach 1,000,000 (1e6). I'd rate typical practical R&D problems 1e3 and transformative R&D problems 1e5.
Edit: Using this scale, I rate GPT-3 at 1 and GPT-4 at 10. This suggests GPT-5 for IMO, which feels uncomfortable to me! Thinking about it, while there is lots of 1-data and 10-data, there is considerably less 100-data, and above that most things are not written down. But maybe that is an excuse and it doesn't matter.
I kind of disagree. (I was on the South Korean IMO team.) I agree IMO problems are in a category of tasks closer to research math than to high school math, but since IMO problems are intended to be solvable within a time limit, there is a (quite low, in an absolute sense) upper limit to their difficulty. Basically, the intended solution is not longer than a single page. Research math problems have no such limit and can be arbitrarily difficult, or have an arbitrarily long solution.
Edit: Apart from the time limit, length limit, and difficulty limit, another important aspect is that IMO problems are already solved, so they are known to be solvable. IMO problems are "Prove X". Research math problems, even if they are stated as "Prove X", are really "Prove or disprove X", and sometimes this matters.
Eh, there are not that many IMO problems, even including shortlisted problems. Since there are not that many, IMO contestants basically solve all previous IMO problems as practice. So it's not like the AI has an unfair advantage.
I am of the opinion that adding the condition of "not trained on prior IMO/math contest problems" is ridiculous.
GPT-6 will probably be able to analyze all the neurons in itself with >0.5 scores
This seems to assume the task (writing explanations for all neurons with >0.5 scores) is possible at all, which is doubtful. Superposition and polysemanticity are certainly things that actually happen.
I note that Eliezer did this (pretty much immediately) on Twitter.
Of course such approaches have been suggested, for example LOVE in a simbox is all you need. The main argument has been whether the simulation can be realistic, and whether it can be secure.
I would not describe the development of deep learning as discontinuous, but I would describe it as fast. As far as I can tell, the development of deep learning happened by the accumulation of many small improvements over time, sometimes humorously described as graduate student descent (better initialization, better activation functions, better optimizers, better architectures, better regularization, etc.). It seems possible or even probable that brain-inspired RL could follow a similar trajectory once it took off, absent interventions like changes to open publishing norms.
I think the primary difficulty is how to train it. GPT is trained on internet text, but the internet does not record the memories of the authors of that text, so memory is unavailable in the training data set.
As far as I can tell, Sam is saying no to size. That does not mean saying no to compute, data, or scaling.
"Hundreds of complicated things" comment definitely can't be interpreted to be against transformers, since "simply" scaling transformers fits the description perfectly. "Simply" scaling transformers involves things like writing a new compiler. It is simple in strategy, not in execution.
Ask GPT to hash you a word (let alone guess which word was the origin of a hash), it'll just put together some hash-like string of tokens. It's got the right length and the right character set (namely, it's a hex number), but otherwise, it's nonsense.
But GPT can do base64 encoding. So what is the difference?
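For concreteness, the two operations side by side (my example, not the parent commenter's):

```python
import base64, hashlib

word = "hello"
print(base64.b64encode(word.encode()).decode())   # 'aGVsbG8=' -- a reversible re-encoding, decodable chunk by chunk
print(hashlib.sha256(word.encode()).hexdigest())  # a one-way hash: every output bit depends on every input bit
```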
I am also not sure it is enough to change the conclusion, but I am pretty sure "put ChatGPT into Bing" doesn't work as a business strategy due to inference cost. You seem to think otherwise, so I am interested in a discussion.
Inference cost is secret. The primary sources are the OpenAI pricing table (ChatGPT 3.5 is 0.2 cents per 1000 tokens, GPT-4 is 30x more expensive, GPT-4 with long context is 60x more expensive), the Twitter conversation between Elon Musk and Sam Altman on cost ("single-digit cents per chat" as of December 2022), and OpenAI's claim of a 90% cost reduction since December. From this I conclude OpenAI is selling API calls at cost or at a loss, almost certainly not at a profit.
Dylan Patel's SemiAnalysis is a well-respected publication on business analysis of the semiconductor industry. In The Inference Cost Of Search Disruption, he estimates the cost per query at 0.36 cents. He also wrote a sequel on the cost structure of the search business, which I recommend. Dylan also points out that simply serving ChatGPT for every query at Google would require $100B in capital investment, which clearly dominates other expenditures. I think Dylan is broadly right, and if you think he is wrong, I am interested in your opinions on where.
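A back-of-the-envelope check of the scale involved (my own arithmetic; the query volume is an assumed round number for Google-scale traffic, not from the article):

```python
cost_per_query = 0.0036          # USD, the SemiAnalysis per-query estimate quoted above
queries_per_day = 8.5e9          # assumed rough Google-scale daily query volume
annual_inference_cost = cost_per_query * queries_per_day * 365
print(f"${annual_inference_cost / 1e9:.0f}B per year")   # roughly $11B per year in inference alone
```

Even a fraction of that, paid every year, is why inference cost cannot be treated as negligible next to a one-off training run.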
The economic cost-benefit analysis of training a SOTA model seems entirely wrong to me.
If training a new SOTA model enabled a company to gain just a fraction of the global search market
This is admirably concrete, so I will use it, but the point generalizes. This assumes Microsoft can gain (and keep) 1% of the global search market by spending $1B USD and training a new SOTA model, which is obviously false? After training a new SOTA model, they need to deploy it for inference, and inference cost dominates training cost. The analysis seems to assume inference cost is negligible, but that's just not true for the search engine case, which requires wide deployment. The analysis should either give an example of economic gain that does not require wide deployment (things like stock picking come to mind), or should analyze inference cost; at the very least it should not assume inference cost is approximately zero.
I am interested in examples of non-empirical (theoretically based) deep learning progress.
The recent Adversarial Policies Beat Superhuman Go AIs seems to cast doubt on how well abstractions generalize in the case of Go.
Eh, I agree it is not mathematically possible to break a one-time pad (but it is important to remember the NSA broke VENONA; mathematical cryptosystems are not the same as their real-world implementations), but most of our cryptographic proofs are conditional and rely on assumptions. For example, I don't see what is mathematically impossible about breaking AES.
To state the obvious, a pause narrows the lead over less ethical competitors only if the pause is not enforced against those competitors. I don't think anyone is in favor of an unenforced pause: that would indeed be stupid, as basic game theory says.
My impression is that we disagree on how feasible it is to enforce the pause. In my opinion, at the moment, it is pretty feasible, because there simply are not so many competitors. Doing an LLM training run is a rare capability now. Things are fragile and I am in fact unsure whether it would be feasible next year.
I am saying it is in the Chinese government's interest for Chinese labs to slow down, as well as other labs. I am curious which part you disagree with:
a) The Chinese government prioritizes social stability over technological development (my assessment: virtually certain)
b) The Chinese government is concerned that technology like ChatGPT is a threat to social stability (my assessment: very likely, and they are in fact correct about this)
c) The Chinese government will need some time to prepare to neutralize technology like ChatGPT as a threat to social stability, as they neutralized the Internet with the Great Firewall (my assessment: very likely; they were surprised by the pace of development as everyone else was)
China also seems to be quite far behind the west in terms of LLM
This doesn't match my impression. For example, THUDM (the Tsinghua University Data Mining lab) is one of the most impressive groups in the world in terms of actually doing large LLM training runs.
Why do you think China will ignore it? This is "it's going too fast, we need some time", and China also needs some time, for all the same reasons. For example, China censors Google with the Great Firewall, so if Google is to be replaced by ChatGPT, they need time to prepare to censor ChatGPT. The Great Firewall wasn't built in a day. See "Father of China's Great Firewall raises concerns about ChatGPT-like services" from the SCMP.