Are we in an AI overhang?
post by Andy Jones (andyljones) · 2020-07-27T12:48:05.196Z · LW · GW · 106 comments
Over on Developmental Stages of GPTs [LW · GW], orthonormal mentions
it at least reduces the chance of a hardware overhang.
An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible. Then someone does and surprise! It's a lot more capable than everyone expected.
I am worried we're in an overhang right now. I think we currently have the ability to build a system orders of magnitude more powerful than what already exists, and I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months.
Investment Bounds
GPT-3 is the first AI system that has obvious, immediate, transformative economic value. While much hay has been made about how much more expensive it is than a typical AI research project, in the wider context of megacorp investment, its costs are insignificant.
GPT-3 has been estimated to cost $5m in compute to train, and - looking at the author list and OpenAI's overall size - maybe another $10m in labour.
Google, Amazon and Microsoft each spend about $20bn/year on R&D and another $20bn each on capital expenditure. Very roughly, that totals $100bn/year. Against this budget, dropping $1bn or more on scaling GPT up by another factor of 100x is entirely plausible right now. All that's necessary is that tech executives stop thinking of natural language processing as cutesy blue-sky research and start thinking in terms of quarters-till-profitability.
A concrete example is Waymo, which is raising $2bn investment rounds - and that's for a technology with a much longer road to market.
Compute Cost
The other side of the equation is compute cost. The $5m GPT-3 training cost estimate comes from using V100s at $10k/unit and 30 TFLOPS, which is their performance without tensor cores. Amortized over a year, this gives you about $1000/PFLOPS-day.
However, this cost is driven up an order of magnitude by NVIDIA's monopolistic cloud contracts, while performance will be higher once tensor cores are taken into account. The current hardware floor is nearer to the RTX 2080 Ti's $1k/unit for 125 tensor-core TFLOPS, which gives you $25/PFLOPS-day. That roughly aligns with AI Impacts' current estimates, and offers another >10x cost reduction on the estimate above.
I strongly suspect other bottlenecks stop you from hitting that kind of efficiency - or GPT-3 would've happened much sooner - but I still think $25/PFLOPS-day is a useful lower bound.
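For concreteness, here's a minimal Python sketch of the amortization arithmetic above. The unit prices and TFLOPS figures are the ones quoted in this section, and the one-year amortization window is the same assumption the estimate already makes.

```python
# Back-of-envelope $/PFLOPS-day, amortizing hardware cost over one year.
# Figures are the ones quoted above; treat the outputs as rough estimates.

def dollars_per_pflops_day(unit_price_usd, sustained_tflops, amortization_days=365):
    """Cost of one PFLOPS sustained for one day, given a card's price and throughput."""
    pflops = sustained_tflops / 1000          # 1 PFLOPS = 1000 TFLOPS
    cost_per_day = unit_price_usd / amortization_days
    return cost_per_day / pflops

# V100 without tensor cores: $10k/unit, ~30 TFLOPS
print(dollars_per_pflops_day(10_000, 30))     # ~$913/PFLOPS-day, i.e. roughly $1000
# RTX 2080 Ti with tensor cores: $1k/unit, ~125 TFLOPS
print(dollars_per_pflops_day(1_000, 125))     # ~$22/PFLOPS-day, i.e. roughly $25
```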
Other Constraints
I've focused on money so far because most of the current 3.5-month doubling time comes from increasing investment. But money aside, there are a few other things that could prove to be the binding constraint.
- Scaling law breakdown. The GPT series' scaling is expected to break down around 10k PFLOPS-days (§6.3), which is a long way short of the amount of cash on the table.
- This could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further. More likely I'm misunderstanding something.
- Sequence length. GPT-3 uses 2048 tokens at a time, and that's with an efficient encoding that cripples it on many tasks. With the naive architecture, increasing the sequence length is quadratically expensive, and getting up to novel-length sequences is not very likely.
- But there are a lot of plausible ways to fix that, and complexity is no bar to AI. This constraint might plausibly not be resolved on a timescale of months, however.
- Data availability. From the same paper as the previous point, dataset size rises with the square root of compute; a 1000x larger GPT-3 would want 10 trillion tokens of training data (checked in the sketch after this list).
- It’s hard to find a good estimate on total-words-ever-written, but our library of 130m books alone would exceed 10tn words. Considering books are a small fraction of our textual output nowadays, it shouldn't be difficult to gather sufficient data into one spot once you've decided it's a useful thing. So I'd be surprised if this was binding.
- Bandwidth and latency. Networking 500 V100s together is one challenge, but networking 500k V100s is another entirely.
- I don't know enough about distributed training to say whether this is a very sensible constraint or a very dumb one. I think it has a chance of being a serious problem, but I think it's also the kind of thing you can design algorithms around. Validating such algorithms might take more than a timescale of months, however.
- Hardware availability. From the estimates above there are about 500 GPU-years in GPT-3, or - based on a one-year training window - $5m worth of V100s at $10k/piece. This is about 1% of NVIDIA's quarterly datacenter sales. A 100x scale-up by multiple companies could saturate this supply.
- This constraint can obviously be loosened by increasing production, but that would be hard to do on a timescale of months.
- Commoditization. If many companies go for huge NLP models, the profit each company can extract is driven towards zero. Unlike with other capex-heavy research - like pharma - there's no IP protection for trained models. If you expect profit to be marginal, you're less likely to drop $1bn on your own training program.
- I am skeptical of this being an important factor while there are lots of legacy, human-driven systems to replace. Replacing those systems should be more than enough incentive to fund many companies’ research programs. Longer term, the effects of commoditization might become more important.
- Inference costs. The GPT-3 paper (§6.3) gives 0.4 kWh per 100 pages of output, which works out to about 500 pages/dollar if you eyeball hardware cost as 5x electricity. Scale up 1000x and you're at $2/page (see the sketch after this list), which is cheap compared to humans but no longer quite as easy to experiment with.
- I'm skeptical of this being a binding constraint. $2/page is still very cheap.
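Here's a quick sketch of two of the numbers above - the 10 trillion token estimate and the $2/page figure. The 300 billion training tokens and the $0.10/kWh electricity price are my own assumptions, not figures from this post.

```python
import math

# 1. Data availability: dataset size scales with the square root of compute.
gpt3_training_tokens = 300e9          # assumption: ~300B tokens, per the GPT-3 paper
scaleup = 1000
tokens_needed = gpt3_training_tokens * math.sqrt(scaleup)
print(f"{tokens_needed:.2e} tokens")  # ~9.5e12, i.e. roughly 10 trillion

# 2. Inference cost: 0.4 kWh per 100 pages, hardware eyeballed at 5x electricity.
kwh_per_100_pages = 0.4
usd_per_kwh = 0.10                    # assumption: rough datacenter electricity price
electricity_per_page = kwh_per_100_pages * usd_per_kwh / 100
total_per_page = electricity_per_page * (1 + 5)   # electricity plus 5x for hardware
print(f"${total_per_page:.4f}/page -> {1/total_per_page:.0f} pages per dollar")  # ~400-500
print(f"${total_per_page * 1000:.2f}/page at 1000x scale")                       # ~$2.4/page
```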
Beyond 1000x
Here we go from just pointing at big numbers to straight-up theorycrafting.
In all, tech investment as it is today plausibly supports another 100x-1000x scale-up in the very near term. If we get to 1000x - 1 ZFLOPS-day per model, $1bn per model - then there are a few paths open.
I think the key question is whether, by 1000x, a GPT successor is obviously superior to humans over a wide range of economic activities. If it is - and I think it's plausible that it will be - then further investment will arrive through the usual market mechanisms, until the largest models are being allocated a substantial fraction of global GDP.
On paper that leaves room for another 1000x scale-up as it reaches up to $1tn, though current market mechanisms aren't really capable of that scale of investment. Left to the market as-is, I think commoditization would kick in as the binding constraint.
That's from the perspective of the market today though. Transformative AI might enable $100tn-market-cap companies, or nation-states could pick up the torch. The Apollo Program made for a $1tn-today share of GDP, so this degree of public investment is possible in principle.
The even more extreme path is if by 1000x you've got something that can design better algorithms and better hardware. Then I think we're in the hands of Christiano's slow takeoff four-year-GDP-doubling.
That's all assuming performance continues to improve, though. If by 1000x the model is not obviously a challenger to human supremacy, then things will hopefully slow down to ye olde fashioned 2010s-Moore's-Law rates of progress and we can rest safe in the arms of something that's merely HyperGoogle.
106 comments
comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2020-07-27T16:32:08.763Z · LW(p) · GW(p)
One thing that's bothering me is... Google/DeepMind aren't stupid. The transformer model was invented at Google. What has stopped them from having *already* trained such large models privately? GPT-3 isn't that much additional evidence for the effectiveness of scaling transformer models; GPT-2 was already a shock and caused huge public commotion. And in fact, if you were close to building an AGI, it would make sense for you not to announce this to the world, especially as open research that anyone could copy/reproduce, for obvious safety and economic reasons.
Maybe there are technical issues keeping us from making large jumps in scale (i.e., we only learn how to train a 1-trillion-parameter model after we've trained a 100-billion one)?
Replies from: gwern, ChristianKl, bmc↑ comment by gwern · 2020-07-27T17:14:22.844Z · LW(p) · GW(p)
As far as I can tell, this is what is going on: they do not have any such thing, because GB and DM do not believe in the scaling hypothesis the way that Sutskever, Amodei and others at OA do.
GB is entirely too practical and short-term focused to dabble in such esoteric & expensive speculation, although Quoc's group occasionally surprises you. They'll dabble in something like GShard, but mostly because they expect they're likely to be able to deploy it, or something like it, to production in Google Translate.
DM (particularly Hassabis, I'm not sure about Legg's current views) believes that AGI will require effectively replicating the human brain module by module, and that while these modules will be extremely large and expensive by contemporary standards, they still need to be invented and finetuned piece by piece, with little risk or surprise until the final assembly. That is how you get DM contraptions like Agent57 which are throwing the kitchen sink at the wall to see what sticks, and why they place such emphasis on neuroscience as inspiration and cross-fertilization for reverse-engineering the brain. When someone seems to have come up with a scalable architecture for a problem, like AlphaZero or AlphaStar, they are willing to pour on the gas to make it scale, but otherwise, incremental refinement on ALE and then DMLab is the game plan. They have been biting off and chewing pieces of the brain for a decade, and it'll probably take another decade or two of steady chewing if all goes well. Because they have locked up so much talent and have so much proprietary code and believe all of that is a major moat to any competitor trying to replicate the complicated brain, they are fairly easygoing. You will not see DM 'bet the company' on any moonshot; Google's cashflow isn't going anywhere, and slow and steady wins the race.
OA, lacking anything like DM's long-term funding from Google or its enormous headcount, is making a startup-like bet that they know an important truth which is a secret: "the scaling hypothesis is true" and so simple DRL algorithms like PPO on top of large simple architectures like RNNs or Transformers can emerge and meta-learn their way to powerful capabilities, enabling further funding for still more compute & scaling, in a virtuous cycle. And if OA is wrong to trust in the God of Straight Lines On Graphs, well, they never could compete with DM directly using DM's favored approach, and were always going to be an also-ran footnote.
While all of this hypothetically can be replicated relatively easily (never underestimate the amount of tweaking and special sauce it takes) by competitors if they wished (the necessary amounts of compute budgets are still trivial in terms of Big Science or other investments like AlphaGo or AlphaStar or Waymo, after all), said competitors lack the very most important thing, which no amount of money or GPUs can ever cure: the courage of their convictions. They are too hidebound and deeply philosophically wrong to ever admit fault and try to overtake OA until it's too late. This might seem absurd, but look at the repeated criticism of OA every time they release a new example of the scaling hypothesis, from GPT-1 to Dactyl to OA5 to GPT-2 to iGPT to GPT-3... (When faced with the choice between having to admit all their fancy hard work is a dead-end, swallow the bitter lesson, and start budgeting tens of millions of compute, or instead writing a tweet explaining how, "actually, GPT-3 shows that scaling is a dead end and it's just imitation intelligence" - most people will get busy on the tweet!)
What I'll be watching for is whether orgs beyond 'the usual suspects' (MS ZeRO, Nvidia, Salesforce, Allen, DM/GB, Connor/EleutherAI, FAIR) start participating, or if they continue to dismiss scaling.
Replies from: andyljones, SoerenMind, andyljones, Bjartur Tómas, None, DragonGod, Ilverin↑ comment by Andy Jones (andyljones) · 2020-07-27T18:39:38.494Z · LW(p) · GW(p)
Feels worth pasting in this other comment of yours [LW(p) · GW(p)] from last week, which dovetails well with this:
DL so far has been easy to predict - if you bought into a specific theory of connectionism & scaling espoused by Schmidhuber, Moravec, Sutskever, and a few others, as I point out in https://www.gwern.net/newsletter/2019/13#what-progress & https://www.gwern.net/newsletter/2020/05#gpt-3 . Even the dates are more or less correct! The really surprising thing is that that particular extreme fringe lunatic theory turned out to be correct. So the question is, was everyone else wrong for the right reasons (similar to the Greeks dismissing heliocentrism for excellent reasons yet still being wrong), or wrong for the wrong reasons, and why, and how can we prevent that from happening again and spending the next decade being surprised in potentially very bad ways?
Personally, these two comments have kicked me into thinking about theories of AI in the same context as also-ran theories of physics like vortex atoms or the Great Debate. It really is striking how long one person with a major prior success to their name can push for a theory when the evidence is being stacked against it.
A bit closer to home than DM and GB, it also feels like a lot of AI safety people have missed the mark. It's hard for me to criticise too loudly because, well, 'AI anxiety' doesn't show up in my diary until June 3rd (and that's with a link to your May newsletter). But a lot of AI safety work increasingly looks like it'd help make a hypothetical kind of AI safe, rather than helping with the prosaic ones we're actually building.
I'm committing something like the peso problem here in that lots of safety work was - is - influenced by worries about the worst-case world, where something self-improving bootstraps itself out of something entirely innocuous. In that sense we're kind of fortunate that we've ended up with a bloody language model fire-alarm of all things, but I can't claim that helps me sleep at night.
Replies from: orthonormal, TurnTrout, wunan↑ comment by orthonormal · 2020-07-27T19:28:24.715Z · LW(p) · GW(p)
I'm imagining a tiny AI Safety organization, circa 2010, that focused on how to achieve probable alignment for scaled-up versions of that year's state-of-the-art AI designs. It's interesting to ask whether that organization would have achieved more or less than MIRI has, in terms of generalizable work and in terms of field-building.
Certainly it would have resulted in a lot of work that was initially successful but ultimately dead-end. But maybe early concrete results would have attracted more talent/attention/respect/funding, and the org could have thrown that at DL once it began to win the race.
On the other hand, maybe committing to 2010's AI paradigm would have made them a laughingstock by 2015, and killed the field. Maybe the org would have too much inertia to pivot, and it would have taken away the oxygen for anyone else to do DL-compatible AI safety work. Maybe it would have stated its problems less clearly, inviting more philosophical confusion and even more hangers-on answering the wrong questions.
Or, worst, maybe it would have made a juicy target for a hostile takeover. Compare what happened to nanotechnology research (and nanotech safety research) when too much money got in too early - savvy academics and industry representatives exiled Drexler from the field he founded so that they could spend the federal dollars on regular materials science and call it nanotechnology.
Replies from: wassname↑ comment by wassname · 2020-08-10T00:32:52.814Z · LW(p) · GW(p)
One thing they could have achieved was dataset and leaderboard creation (MS COCO, GLUE, and ImageNet, for example). These have tended to focus and help research, and persist in usefulness for some time, as long as they are chosen wisely.
Predicting and extrapolating human preferences is a task which is part of nearly every AI Alignment strategy. Yet we have few datasets for it; the only ones I found are https://github.com/iterative/aita_dataset and https://www.moralmachine.net/
So this hypothetical ML Engineering approach to alignment might have achieved some simple wins like that.
EDIT: Something like this was just released: Aligning AI With Shared Human Values
↑ comment by TurnTrout · 2020-07-27T18:57:25.159Z · LW(p) · GW(p)
a lot of AI safety work increasingly looks like it'd help make a hypothetical kind of AI safe
I think there are many reasons a researcher might still prioritize non-prosaic AI safety work. Off the top of my head:
- You think prosaic AI safety is so doomed that you're optimizing for worlds in which AGI takes a long time, even if you think it's probably soon.
- There's a skillset gap or other such cost, such that reorienting would decrease your productivity by some factor (say, .6) for an extended period of time. The switch only becomes worth it in expectation once you've become sufficiently confident AGI will be prosaic.
- Disagreement about prosaic AGI probabilities.
- Lack of clear opportunities to contribute to prosaic AGI safety / shovel-ready projects (the severity of this depends on how agentic the researcher is).
↑ comment by Andy Jones (andyljones) · 2020-07-27T19:51:17.984Z · LW(p) · GW(p)
Entirely seriously: I can never decide whether the drunkard's search is a parable about the wisdom in looking under the streetlight, or the wisdom of hunting around in the dark.
Replies from: michael-stover↑ comment by Michael Stover (michael-stover) · 2020-07-27T22:25:10.217Z · LW(p) · GW(p)
I think the drunkard's search is about the wisdom of improving your tools. Sure, spend some time out looking, but let's spend a lot of time making better streetlights and flashlights, etc.
↑ comment by wunan · 2020-07-28T13:54:00.329Z · LW(p) · GW(p)
In the Gwern quote, what does "Even the dates are more or less correct!" refer to? Which dates were predicted for what?
Replies from: gwern↑ comment by gwern · 2020-07-28T14:30:47.850Z · LW(p) · GW(p)
Look at, for example, Moravec. His extrapolation assumes that supercomputers will not be made available for AI work until AI work has already been proven successful (correct), and that AI will have to wait for hardware to become so powerful that even a grad student can afford it with $1k (also correct, see AlexNet). Extrapolating from ~1998, he estimates:
At the present rate, computers suitable for humanlike robots will appear in the 2020s.
Guess what year today is.
↑ comment by SoerenMind · 2020-08-02T22:02:09.752Z · LW(p) · GW(p)
Last year it only took Google Brain half a year to make a Transformer 8x larger than GPT-2 (the T5). And they concluded that model size is a key component of progress. So I won't be surprised if they release something with a trillion parameters this year.
↑ comment by Andy Jones (andyljones) · 2020-07-27T20:37:08.284Z · LW(p) · GW(p)
Thinking about this a bit more, do you have any insight on Tesla? I can believe that it's outside DM and GB's culture to run with the scaling hypothesis, but watching Karpathy's presentations (which I think is the only public information on their AI program?) I get the sense they're well beyond $10m/run by now. Considering that self-driving is still not there - and once upon a time I'd have expected driving to be easier than Harry Potter parodies - it suggests that language is special in some way. Information density? Rich, diff'able reward signal?
Replies from: ChristianKl, gwern, andyljones, daniel-kokotajlo↑ comment by ChristianKl · 2020-07-27T20:59:04.089Z · LW(p) · GW(p)
Self-driving is very unforgiving of mistakes. Text generation, on the other hand, doesn't have similar failure conditions, and bad content can easily be fixed.
↑ comment by gwern · 2020-07-28T14:36:43.159Z · LW(p) · GW(p)
Tesla publishes nothing and I only know a little from Karpathy's occasional talks, which are as much about PR (to keep Tesla owners happy and investing in FSD, presumably) & recruiting as anything else. But their approach seems heavily focused on supervised learning in CNNs and active learning using their fleet to collect new images, and to have nothing to do with AGI plans. They don't seem to even be using DRL much. It is extremely unlikely that Tesla is going to be relevant to AGI or progress in the field in general given their secrecy and domain-specific work. (I'm not sure how well they're doing even at self-driving cars - I keep reading about people dying when their Tesla runs into a stationary object on a highway in the middle of the day, which you'd think they'd've solved by now...)
Replies from: daniel-kokotajlo, matthew-wilson↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-07-29T17:21:32.975Z · LW(p) · GW(p)
I'm pretty sure I remember hearing they use unsupervised learning to form their 3D model of their local environment, and that's the most important part, no?
↑ comment by Matthew Wilson (matthew-wilson) · 2021-08-20T17:34:23.667Z · LW(p) · GW(p)
Curious if you have updated on this at all, given AI Day announcements?
Replies from: gwern↑ comment by gwern · 2021-08-20T19:29:15.113Z · LW(p) · GW(p)
They still running into stationary objects? The hardware is cool, sure, but unclear how much good it's doing them...
Replies from: simon↑ comment by Andy Jones (andyljones) · 2020-07-27T20:37:24.319Z · LW(p) · GW(p)
hey man wanna watch this language model drive my car
Replies from: mercury↑ comment by mercury · 2020-07-28T23:58:05.290Z · LW(p) · GW(p)
I just realized with a start that this is _absolutely_ going to happen. We are going to, in the not-too-distant future, see a GPT-x (or similar) be ported to a Tesla and drive it.
It frustrates me that there are not enough people IRL I can excitedly talk to about how big a deal this is.
Replies from: TurnTrout↑ comment by TurnTrout · 2020-07-29T03:15:12.641Z · LW(p) · GW(p)
Can you explain why GPT-x would be well-suited to that modality?
Replies from: Davidmanheim↑ comment by Davidmanheim · 2020-07-29T10:56:59.199Z · LW(p) · GW(p)
Presumably, because with a big-enough X, we can generate text descriptions of scenes from cameras and feed them in to get driving output more easily than the seemingly fairly slow process to directly train a self-driving system that is safe. And if GPT-X is effectively magic, that's enough.
I'm not sure I buy it, though. I think that once people agree that scaling just works, we'll end up scaling the NNs used for self driving instead, and just feed them much more training data.
Replies from: ChristianKl↑ comment by ChristianKl · 2020-07-29T13:35:20.458Z · LW(p) · GW(p)
There might be some architectures that are more scalable than others. As far as I understand, the present models for self-driving mostly have a lot of hardcoded elements. That might make them more complicated to scale.
Replies from: Davidmanheim↑ comment by Davidmanheim · 2020-07-29T14:24:13.804Z · LW(p) · GW(p)
Agreed, but I suspect that replacing those hard-coded elements will get easier over time as well.
Replies from: _vk_↑ comment by _vk_ · 2020-07-29T14:40:34.366Z · LW(p) · GW(p)
Andrej Karpathy talks about exactly that in a recent presentation: https://youtu.be/hx7BXih7zx8?t=1118
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-07-27T21:37:27.583Z · LW(p) · GW(p)
My hypothesis: Language models work by being huge. Tesla can't use huge models because they are limited by the size of the computers on their cars. They could make bigger computers, but then that would cost too much per car and drain the battery too much (e.g. a 10x bigger computer would cut dozens of miles off the range and also add $9,000 to the car price, at least.)
Replies from: orthonormal, Linch↑ comment by orthonormal · 2020-07-27T22:58:55.230Z · LW(p) · GW(p)
[EDIT: oops, I thought you were talking about the direct power consumption of the computation, not the extra hardware weight. My bad.]
It's not about the power consumption.
The air conditioner in your car uses 3 kW, and GPT-3 takes 0.4 kWh for 100 pages of output - thus a dedicated computer on AC power could produce roughly 700 pages per hour, going substantially faster than AI Dungeon (literally and metaphorically). So a model as large as GPT-3 could run on the electricity of a car.
The hardware would be more expensive, of course. But that's different.
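A one-line check of that arithmetic, using the same figures (a 3 kW power budget and 0.4 kWh per 100 pages):

```python
# Pages per hour that a 3 kW power budget supports, at GPT-3's quoted 0.4 kWh per 100 pages.
power_kw = 3.0
kwh_per_page = 0.4 / 100
pages_per_hour = power_kw / kwh_per_page   # kWh available per hour / kWh per page
print(pages_per_hour)                      # 750 -- roughly the ~700 pages/hour above
```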
Replies from: daniel-kokotajlo↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-07-28T12:41:47.661Z · LW(p) · GW(p)
Huh, thanks -- I hadn't run the numbers myself, so this is a good wake-up call for me. I was going off what Elon said. (He said multiple times that power efficiency was an important design constraint on their hardware because otherwise it would reduce the range of the car too much.) So now I'm just confused. Maybe Elon had the hardware weight in mind, but still...
Maybe the real problem is just that it would add too much to the price of the car?
Replies from: CarlShulman↑ comment by CarlShulman · 2020-07-29T15:43:16.955Z · LW(p) · GW(p)
Maybe the real problem is just that it would add too much to the price of the car?
Yes. GPU/ASICs in a car will have to sit idle almost all the time, so the costs of running a big model on it will be much higher than in the cloud.
↑ comment by Linch · 2020-08-08T09:49:17.114Z · LW(p) · GW(p)
Re hardware limit: flagging the implicit assumption here that network speeds are spotty/unreliable enough that you can't or are unwilling to safely do hybrid on-device/cloud processing for the important parts of self-driving cars.
(FWIW I think the assumption is probably correct).
↑ comment by Tomás B. (Bjartur Tómas) · 2022-07-18T16:34:26.333Z · LW(p) · GW(p)
After 2 years, any updates on your opinion of DM, GB and FAIR's scaling stance? Would you consider any of them fully "scale-pilled"?
Replies from: gwern↑ comment by gwern · 2022-07-18T17:05:07.563Z · LW(p) · GW(p)
Both DM/GB have moved enormously towards scaling since May 2020, and there are a number of enthusiastic scaling proponents inside both in addition to the obvious output of things like Chinchilla or PaLM. (Good for them, not that I really expected otherwise given that stuff just kept happening and happening after GPT-3.) This happened fairly quickly for DM (given when Gopher was apparently started), and maybe somewhat slower for GB despite Dean's history & enthusiasm. (I still think MoEs were a distraction.) I don't know enough about the internal dynamics to say if they are fully scale-pilled, but scaling has worked so well, even in crazy applications like dropping language models into robotics planning (SayCan), that critics are in pell-mell retreat and people are getting away with publishing manifestos like "reward is enough" or openly saying on Twitter "scaling is all you need". I expect that top-down organizational constraints are probably now a bigger deal: I'm far from the first person to note that DM/GB seem unable to ship (publicly visible) products and researchers keep fleeing for startups where they can be more like OA in actually shipping.
FAIR puzzles me because FAIR researchers are certainly not stupid or blind, FB continues to make large investments in hardware like their new GPU cluster, and the most interesting FAIR research is strongly scaling flavored, like their unsupervised work on audio/video, so you'd think they'd've caught up. But FB is also experiencing heavy weather while Zuckerberg seems to be aiming it all at 'metaverse' applications (which leads away from DRL) and further, FAIR has recently been somehow broken up & everyone reorganized (?). Meanwhile, of course, Yann LeCun continues saying things like 'general intelligence doesn't exist', scoffing at scaling, and proposing elaborately engineered modular neuroscience-based AGI paradigms. So I guess it looks like they're grudgingly backing their way into scaling work simply because they are forced to if they want any results worth publishing or systems which can meet Zuckerberg's Five Year Plans, but one could not call FAIR scaling-pilled. Scaling enthusiasts probably feel chilled about proposing any explicit scaling research or mentioning the reasons for it being important, which will shut down anything daring.
↑ comment by [deleted] · 2020-07-27T19:00:22.827Z · LW(p) · GW(p)
If you extrapolated those straight lines further, doesn't it mean that even small businesses will be able to afford training their own quadrillion-parameter-models just a few years after Google?
Replies from: gwern↑ comment by DragonGod · 2020-07-27T18:18:15.404Z · LW(p) · GW(p)
Thanks for this, I'll be sharing it on /r/slatestarcodex and Hacker News (rationalist discords too if it comes up).
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2020-07-28T11:31:04.723Z · LW(p) · GW(p)
I'm not sure it's good for this comment to get a lot of attention? OpenAI is more altruism-oriented than a typical AI research group, and this is essentially a persuasive essay for why other groups should compete with them.
Replies from: andyljones, DragonGod↑ comment by Andy Jones (andyljones) · 2020-07-28T12:43:07.886Z · LW(p) · GW(p)
'Why the hell has our competitor got this transformative capability that we don't?' is not a hard thought to have, especially among tech executives. I would be very surprised if there wasn't a running battle over long-term perspectives on AI in the C-suite of both Google Brain and DeepMind.
If you do want to think along these lines though, the bigger question for me is why OpenAI released the API now, and gave concrete warning of the transformative capabilities they intend to deploy in six? twelve? months' time. 'Why the hell has our competitor got this transformative capability that we don't?' is not a hard thought now, but that's largely because the API was a piece of compelling evidence thrust in all of our faces.
Maybe they didn't expect it to latch into the dev-community consciousness like it has, or for it to be quite as compelling a piece of evidence as it's turned out to be. Maybe it just seemed like a cool thing to do and in-line with their culture. Maybe it's an investor demo for how things will be monetised in future, which will enable the $10bn punt they need to keep abreast of Google.
Replies from: ESRogs↑ comment by ESRogs · 2020-07-28T22:42:06.947Z · LW(p) · GW(p)
I think the fact that it's not a hard thought to have is not too much evidence about whether other orgs will change approach. It takes a lot to turn the ship.
Consider how easy it would be to have the thought, "Electric cars are the future, we should switch to making electric cars." any time in the last 15 years. And yet, look at how slow traditional automakers have been to switch.
Replies from: gwern↑ comment by gwern · 2020-07-29T13:56:39.420Z · LW(p) · GW(p)
Indeed. No one seriously doubted that the future was not gas, but always at a sufficiently safe remove that they didn't have to do anything themselves beyond a minor side R&D program, because there was no fire alarm. ("God, grant me [electrification] and [the scaling hypothesis] - but not yet!")
↑ comment by DragonGod · 2020-07-28T20:47:48.149Z · LW(p) · GW(p)
It has already got some spread. Michael Nielsen shared it on Twitter (126 likes and 29 RTs as of writing).
↑ comment by Ilverin the Stupid and Offensive (Ilverin) · 2020-07-27T19:52:58.879Z · LW(p) · GW(p)
Is it more than 30% likely that in the short term (say 5 years), Google isn't wrong? If you applied massive scale to the AI algorithms of 1997, you would get better performance, but would your result be economically useful? Is it possible we're in a similar situation today where the real-world applications of AI are already good-enough and additional performance is less useful than the money spent on extra compute? (self-driving cars is perhaps the closest example: clearly it would be economically valuable, but what if the compute to train it would cost 20 billion US dollars? Your competitors will catch up eventually, could you make enough profit in the interim to pay for that compute?)
Replies from: andyljones↑ comment by Andy Jones (andyljones) · 2020-07-27T20:08:59.324Z · LW(p) · GW(p)
I'd say it's at least 30% likely that's the case! But if you believe that, you'd be pants-on-head loony not to drop a billion on the 'residual' 70% chance that you'll be first to market on a world-changing trillion-dollar technology. VCs would sacrifice their firstborn for that kind of deal.
↑ comment by ChristianKl · 2020-07-28T12:03:58.245Z · LW(p) · GW(p)
Do we know the size of the net that does translation and speech-to-text for Google?
Replies from: ricardo-meneghin-filho↑ comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2020-07-28T12:19:12.346Z · LW(p) · GW(p)
I'm not sure what model is used in production, but the SOTA reached 600 billion parameters recently.
↑ comment by bmc · 2020-07-27T20:33:02.934Z · LW(p) · GW(p)
This answer likely betrays my lack of imagination, but I'm not sure what Google would use GPT-3 for. It's probably much more expensive than whatever gmail uses to predict text, and the additional accuracy might not provide much additional value.
Maybe they could sell it as a service, as part of GCP? I'm not sure how many people inside Google have the ability to sign $15M checks, you would need at least one of them to believe in a large market, and I'm personally not sure there's a large enough market for GPT-3 for it to be worth Google's time.
This is all to say, I don't think you should draw the conclusion that Google is either stupid or hiding something. They're likely focusing on finding better architectures, it seems a little early to focus on scaling up existing ones.
Replies from: gwern, ricardo-meneghin-filho↑ comment by gwern · 2020-07-27T20:49:59.195Z · LW(p) · GW(p)
Text embeddings for knowledge graphs and ads is the most immediately obvious big bucks application.
Replies from: daniel-kokotajlo↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-07-27T21:38:49.558Z · LW(p) · GW(p)
Can you explain more?
Replies from: gwern↑ comment by gwern · 2020-07-28T00:22:09.315Z · LW(p) · GW(p)
GPT-3 based text embedding should be extremely useful for creating summaries of arbitrary text (such as, web pages or ad text) which can be fed into the existing Google search/ad infrastructure. (The API already has a less-known half, where you upload sets of docs and GPT-3 searches them.) Of course, they already surely use NNs for embeddings, but at Google scale, enhanced embeddings ought to be worth billions.
Replies from: ankesh-anand↑ comment by Ankesh Anand (ankesh-anand) · 2020-07-28T07:23:20.914Z · LW(p) · GW(p)
Worth noting that they already use BERT in Search. https://blog.google/products/search/search-language-understanding-bert/
↑ comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2020-07-27T21:20:41.863Z · LW(p) · GW(p)
I think the OP and my comment suggest that scaling current models 10000x could lead to AGI or at least something close to it. If that is true, it doesn't make sense to focus on finding better architectures right now.
Replies from: Raemon
comment by Tamay · 2021-04-25T17:02:40.503Z · LW(p) · GW(p)
I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months.
My impression is that this prediction has turned out to be mistaken (though it's kind of hard to say because "measured in months" is pretty ambiguous.) There have been models with many-fold the number of parameters (notably one by Google*) but it's clear that 9 months after this post, there haven't been publicised efforts that use close to 100x the amount of compute of GPT-3. I'm curious to know whether and how the author (or others who agreed with the post) have changed their mind about the overhang and related hypotheses recently, in light of some of this evidence failing to pan out the way the author predicted.
*https://arxiv.org/abs/2101.03961
Replies from: andyljones↑ comment by Andy Jones (andyljones) · 2021-04-26T11:06:16.267Z · LW(p) · GW(p)
Nine months later I consider my post pretty 'shrill', for want of a better adjective. I regret not making more concrete predictions at the time, because yeah, reality has substantially undershot my fears. I think there's still a substantial chance of something 10x large being revealed within 18 months (which I think is the upper bound on 'timeline measured in months'), but it looks very unlikely that there'll be a 100x increase in that time frame.
One factor I got wrong in writing the above was thinking of my massive update in response to GPT-3 as somewhere near the median, rather than a substantial outlier. As another example of this, I am the only person I know of who, after GPT-3, dropped everything they were doing to re-orient their career towards AI safety. And that's within circles of people who you'd think would be primed to respond similarly!
I still think AI projects could be run at vastly larger budgets, so in that sense I still believe in there being an orders-of-magnitude overhang. Just convincing the people with those budgets to fund these projects is apparently much harder than I thought.
I am not unhappy about this.
Replies from: Bjartur Tómas, InquilineKea↑ comment by Tomás B. (Bjartur Tómas) · 2022-02-23T17:24:47.495Z · LW(p) · GW(p)
Curious if you have any other thoughts on this after another 10 months?
Those I know who train large models seem to be very confident we will get 100 Trillion parameter models before the end of the decade, but do not seem to think it will happen, say, in the next 2 years.
There is a strange, disconcerting phenomenon where many of the engineers I've talked to most in a position to know, who work for (and in one case own) companies training 10-billion+ parameter models, seem to have timelines on the order of 5-10 years. Shane Legg recently said he gave a 50% chance of AGI by 2030, which is in line with some of the people I've talked to on EAI, though many disagree. Leo Gao, I believe, tends to think OpenPhil's more aggressive estimates are about right, which is less short than some.
I would like "really short timelines" people to make more posts about it, assuming common knowledge of short timelines is a good thing, as the position is not talked about here as much as it should be given how many people seem to believe in it.
Replies from: leogao, Jsevillamol↑ comment by leogao · 2022-02-24T03:30:05.967Z · LW(p) · GW(p)
For what it's worth I settled on the Ajeya report aggressive distribution as a reasonable prior after taking a quick skim of the report and then eyeballing the various distributions to see which one felt the most right to me -- not a super rigorous process. The best guess timeline feels definitely too slow to me. The biggest reason why my timeline estimate isn't shorter is essentially correction for planning fallacy.
↑ comment by Jsevillamol · 2022-03-03T15:53:01.508Z · LW(p) · GW(p)
Those I know who train large models seem to be very confident we will get 100 Trillion parameter models before the end of the decade, but do not seem to think it will happen, say, in the next 2 years.
FWIW if the current trend continues [AF · GW] we will first see 1e14-parameter models 2 to 4 years from now.
↑ comment by InquilineKea · 2022-04-13T19:01:16.691Z · LW(p) · GW(p)
>I think there's still a substantial chance of something 10x large being revealed within 18 months (which I think is the upper bound on 'timeline measured in months')
So did that happen?
Replies from: Bjartur Tómas↑ comment by Tomás B. (Bjartur Tómas) · 2022-09-19T16:12:09.111Z · LW(p) · GW(p)
I suppose the new scaling laws render this sort of thinking obsolete.
comment by Edward Kmett (edward-kmett) · 2020-08-09T20:04:43.638Z · LW(p) · GW(p)
Networking 500 V100s together is one challenge, but networking 500k V100s is another entirely.
Even if you might have trouble networking a 100x larger system together for training, you can train the smaller network 100x and stitch answers together using ensemble methods, and make decent use of the extra compute. It may not be as good as growing the network that full factor, but if you have extra compute beyond the cap of whatever connected-enough training system size you can muster, there are worse ways to spend it.
I am somewhat more prone to think that more selective attention (e.g. Big Bird's block-random attention model) could bring down the quadratic cost of the window size quickly enough to be a factor here. Replacing a quadratic term with a linear or n log n or heck even a n^1.85 term goes a long way when billions are on the table.
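As a rough illustration of how much that buys, here's a sketch of how the full-attention vs. sparse-attention cost ratio grows with window size. The block size and number of attended blocks are made-up illustrative parameters, not Big Bird's actual configuration.

```python
# Rough scaling of attention cost with sequence length n, in arbitrary units.
# Full attention is O(n^2); block-random attention (Big Bird style) is roughly
# O(n * b * k) for block size b and a fixed number k of attended blocks.
def full_attention_cost(n):
    return n * n

def block_random_cost(n, block=64, blocks_attended=8):  # illustrative parameters
    return n * block * blocks_attended                  # linear in n

for n in (2_048, 16_384, 131_072):
    ratio = full_attention_cost(n) / block_random_cost(n)
    print(f"n={n:>7}: full/sparse cost ratio ~ {ratio:.0f}x")
# n=2048: ~4x, n=16384: ~32x, n=131072: ~256x -- the gap grows with window size
```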
comment by gwern · 2020-07-27T16:23:10.219Z · LW(p) · GW(p)
HN discussion: https://news.ycombinator.com/item?id=23964964
comment by habryka (habryka4) · 2020-08-01T06:19:28.908Z · LW(p) · GW(p)
Promoted to curated: I think the question of whether we are in an AI overhang is pretty obviously relevant to a lot of thinking about AI Risk, and this post covers the topic quite well. I particularly liked the use of a lot of small Fermi estimates, and how it covered a lot of ground in relatively little writing.
I also really appreciated the discussion in the comments, and felt that Gwern's comment on AI development strategies [LW(p) · GW(p)] in particular helped me build a much better map of the modern ML space (though I wouldn't want it to be interpreted as a complete map of the space, just a kind of foothold that helped me get a better grasp on thinking about this).
Most of my immediate critiques are formatting related. I feel like the listed section could have used some more clarity, maybe by bolding the name for each bullet point consideration, but it flowed pretty well as is. I was also a bit concerned about there being some infohazard-like risks from promoting the idea of being in an AI overhang too much, but after talking to some more people about it, and thinking for a bit about it, decided that I don't think this post adds much additional risk (e.g. by encouraging AI companies to act on being in an overhang and try to drastically scale up models without concern for safety).
Replies from: andyljones↑ comment by Andy Jones (andyljones) · 2020-08-01T07:35:19.530Z · LW(p) · GW(p)
Thanks for the feedback! I've cleaned up the constraints section a bit, though it's still less coherent than the first section.
Out of curiosity, what was it that convinced you this isn't an infohazard-like risk?
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-08-01T19:15:46.106Z · LW(p) · GW(p)
Out of curiosity, what was it that convinced you this isn't an infohazard-like risk?
Some mixture of:
- I think it's pretty valuable to have open conversation about being in an overhang, and I think on the margin it will make those worlds go better by improving coordination. My current sense is that the perspective presented in this post is reasonably common among people in ML, so marginally reducing how many people believe it is not going to make much of a difference, but having good writeups that summarize the arguments seems like it has a better chance of creating some kind of common knowledge that allows people to coordinate better here.
- This post, more so than other posts in its reference class, emphasizes a bunch of the safety concerns, whereas I expect the next post that would replace it not to do that very much.
- Curation in particular mostly sends out the post to more people who are concerned with safety. This post found a lot of traction on HN and other places, so in some sense the cat is out of the bag and if it was harmful the curation decision won't change that very much, and it seems like it would unnecessarily hinder the people most concerned about safety if we don't curate it (since the considerations do also seem quite relevant to safety work).
comment by batterseapower · 2020-07-28T21:34:12.250Z · LW(p) · GW(p)
Isn't GPT-3 already almost at the theoretical limit of the scaling law from the paper? This is what is argued by nostalgebraist in his blog and colab notebook. You also get this result if you just compare the 3.14e23 FLOP (i.e. 3.6k PFLOPS-days) cost of training GPT-3 from the lambdalabs estimate to the ~10k PFLOPS-days limit from the paper.
(Of course, this doesn't imply that the post is wrong. I'm sure it's possible to train a radically larger GPT right now. It's just that the relevant bound is the availability of data, not of compute power.)
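For reference, the unit conversion behind those numbers (the 3.14e23 FLOP figure is the lambdalabs estimate mentioned above):

```python
# Convert total training FLOP into PFLOPS-days.
gpt3_training_flop = 3.14e23                     # lambdalabs estimate cited above
flop_per_pflops_day = 1e15 * 24 * 3600           # one PFLOPS sustained for one day
pflops_days = gpt3_training_flop / flop_per_pflops_day
print(f"{pflops_days:.0f} PFLOPS-days")          # ~3600, vs the ~10k crossover in the paper
```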
Replies from: andyljones, gwern↑ comment by Andy Jones (andyljones) · 2020-07-29T04:36:29.813Z · LW(p) · GW(p)
Though it's not mentioned in the paper, I feel like this could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further.
It's indeed strange no-one else has picked up on this, which makes me feel I'm misunderstanding something. The breakdown suggested in the scaling law does imply that this specific architecture doesn't have much further to go. Whether the limitation is in something as fundamental as 'the information content of language itself', or if it's a more-easily bypassed 'the information content of 1024-token strings', is unclear.
My instinct is for the latter, though again by the way no-one else has mentioned it - even the paper authors - I get the uncomfortable feeling I'm misunderstanding something. That said, being able to write that quote a few days ago and since have no-one pull me up on it has increased my confidence that it's a viable interpretation.
Replies from: nostalgebraist↑ comment by nostalgebraist · 2020-07-29T06:28:57.475Z · LW(p) · GW(p)
They do discuss this a little bit in that scaling paper, in Appendix D.6. (edit: actually Appendix D.5)
At least in their experimental setup, they find that the first 8 tokens are predicted better by a model with only 8 tokens in its window than one with 1024 tokens, if the two have equally many parameters. And that later tokens are harder to predict, and hence require more parameters if you want to reach some given loss threshold.
I'll have to think more about this and what it might mean for their other scaling laws... at the very least, it's an effect which their analysis treats as approximately zero, and math/physics models with such approximations often break down in a subset of cases.
Replies from: andyljones↑ comment by Andy Jones (andyljones) · 2020-07-29T07:17:57.873Z · LW(p) · GW(p)
While you're here and chatting about D.5 (assume you meant 5), another tiny thing that confuses me - Figure 21. Am I right in reading the bottom two lines as 'seeing 255 tokens and predicting the 256th is exactly as difficult as seeing 1023 tokens and predicting the 1024th'?
e: Another look and I realise Fig 20 shows things much more clearly - never mind, things continue to get easier with token index.
↑ comment by gwern · 2020-08-15T22:54:03.945Z · LW(p) · GW(p)
The likelihood loss intersection point is very vague, as they point out, as it only weakly suggests, for that specific architecture/training method/dataset, a crossover to a slower-scaling curve requiring increasing data, anywhere between 10^4 and 10^6 or so. As GPT-3 hits 10^3 and is still dead on the scaling curve, it seems that any crossover will happen much higher than lower. (I suspect part of what's going on there is the doubled context window: as Nostalgebraist notes, their experiments with 1024 ctx strongly suggest that the more context window you have, the more you can learn profitably, so doubling to 2048 ctx probably pushed off the crossover quite a bit. Obviously, they have a long way to go there.) So the crossover itself, much less negative profitability of scaling, may be outside the current 100-1000x being mooted. (I'd also note that I don't see why they are so baffled at the suggestion that a model could overfit in a single epoch. Have they looked at the Internet lately? It is not remotely a clean, stationary, minimal, or i.i.d. dataset, even after cleaning & deduplication.)
I also think that given everything we've learned about prompt programming and the large increases in benchmarks like arithmetic or WiC, making arguments from pseudo-lack-of-scaling in the paper's benchmarks is somewhere between foolish and misleading, at least until we have an equivalent set of finetuning benchmarks which should cut through the problem of good prompting (however bad the default prompt is, biasing performance downwards, some finetuning should quickly fix that regardless of meta-learning) and show what GPT-3 can really do.
comment by Hugo Montenegro (Hugo0) · 2020-07-27T14:42:28.519Z · LW(p) · GW(p)
Exciting and scary at the same time.
comment by ChristianKl · 2020-07-27T17:01:31.139Z · LW(p) · GW(p)
GPT-3 is the first AI system that has obvious, immediate, transformative economic value.
That's an interesting claim. Is there a source that goes into more detail about possible applications?
Replies from: andyljones, GdL752↑ comment by Andy Jones (andyljones) · 2020-07-27T18:55:08.320Z · LW(p) · GW(p)
There's a LW thread with a collection of examples [LW · GW], and there's the beta website itself.
Replies from: ChristianKl↑ comment by ChristianKl · 2020-07-28T15:29:02.148Z · LW(p) · GW(p)
Kaj's post mostly has examples that aren't of commercial value, but are cool things you can do. The OpenAI website, however, has a few examples that I think could justify a larger commercial need.
↑ comment by GdL752 · 2020-07-27T18:45:12.193Z · LW(p) · GW(p)
Well, they already have an industry for behavioral/intent marketing - this could make it a lot better. So: taking data and using it to find correlates to a behavior in the buying process, and monetizing that. With IoT taking off, imagine a scenario where we have so much data being fed to a machine-learning algorithm driven by this that we could type into a console "what behaviors predict that someone will buy a home in the next 3 months?" - and now imagine that its answer is pretty predictive. How much is that worth to a real estate agent?
Now apply it to literally any purchase behavior where the profit margin allows for the use of this technology (obviously more difficult in places with different data privacy laws) - the machine-learning algo could know you want a new pink sweater before it's even occurred to you, with whatever level of accuracy.
As far as creative work goes, I'd be really curious to see how it handles comedy. Throw it in a writing room for script punchup (and that's only until it can completely write the scripts) - punchup is where they hire comedians and comedy writers to sit around and add jokes to movies or TV shows.
I also see a lot of use in making law accessible, because it could conceivably parse through huge amounts of law and legal theory (I know it can't reason, but - even just using its current model - bear with me) and spit out fairly coherent answers for laymen (maybe as a free search engine profitable via ads for lawyers).
If we do see the imagined improvements just by giving it more computronium, we may be staring down the advent of a volitionless "almost oracle".
I'm really excited to see what happens when you give it enough GPU's and train it on physics models.
Replies from: ChristianKl↑ comment by ChristianKl · 2020-07-27T18:58:11.186Z · LW(p) · GW(p)
While there are people who run machine learning to find behavior in the buying process, I'm not sure what GPT-3 offers to those applications.
"what behaviors predict that someone will buy a home in the next 3 months?" , now imagine that its answer is pretty predictive
I can imagine dragons flying around but that doesn't mean that they exist. Why should I believe that GPT-3 can give good answer to that question?
spit out fairly coherent answers for layman (maybe as a free search engine profitable via ads for lawyers)
It can spit out answers that are coherent but a lot of them will be wrong.
comment by Tomás B. (Bjartur Tómas) · 2020-07-27T14:53:33.704Z · LW(p) · GW(p)
One thing we have to account for is advances in architecture, even in a world where Moore's law is dead - to what extent memory bandwidth is a constraint on model size, etc. You could rephrase this as asking how much of an "architecture overhang" exists. One frame to view this through: in the era of Moore's law we sort of banked a lot of parallel architectural advances, as we lacked a good use case for such things. We now have such a use case. So the question is how much performance is sitting in the bank, waiting to be pulled out in the next 5 years.
I don't know how seriously to take the AI ASIC people, but they are claiming very large increases in capability, on the order of 100-1000x in the next 10 years. If this is true, it's a multiplier on top of increased investment. See this response from a panel including big-wigs at NVIDIA, Google, and Cerebras about projected capabilities: https://youtu.be/E__85F_vnmU?t=4016. On top of this, one has to account, too, for algorithmic advancement: https://openai.com/blog/ai-and-efficiency/
Another thing to note is that though by parameter count the largest modern models are 10,000x smaller than the human brain - if one buys the parameter >= synapse idea (which most don't, but it's not entirely off the table) - the temporal resolution is far higher. So once we get human-sized models, they may be trained almost comically faster than human minds are. So on top of an architecture overhang we may have this "temporal resolution overhang", too, where once models are as powerful as the human brain they will almost certainly be trained much faster. And on top of this there is an "inference overhang", where, because inference is much, much cheaper than training, once you are done training an economically useful model you will almost tautologically have a lot of compute to exploit it with.
Hopefully I am just being paranoid (I am definitely more of a squib than a wizard in these domains), but I am seeing overhangs everywhere!
Replies from: gwern, Veedrac↑ comment by gwern · 2020-07-27T15:34:49.000Z · LW(p) · GW(p)
As an aside, though it's not mentioned in the paper, I feel like this could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further. More likely I'm misunderstanding something.
The GPT architecture isn't even close to being the best Transformer architecture anyway. As an example, someone last week benchmarked XLNet (over a year old, and with recurrency, one of the ways to break GPT's context-window bottleneck), and it achieves ~10x better parameter efficiency (a 0.4b-parameter XLNet model ~ a 5b-parameter GPT-3 model) at the few-shot meta-learning task he tried.
Expanding to 2048 BPEs probably buys GPT-3 more headroom (more useful data to learn from, and more for the meta-learning to condition on), and expanding to efficient attentions/recurrency/memory will enable even better prediction performance, with unknown meta-learning or generalization consequences.
(The problem there is the tradeoff between compute efficiency of training and better architectures. It's not obvious where you want to go: GShard, for example, takes the POV that even GPT is too fancy and slow and inefficient to train on existing hardware, and goes with the even more drastically parameter-inefficient - but efficient to train on GPUs! - mixture-of-expert small Transformers approach.)
↑ comment by Veedrac · 2020-07-27T16:25:48.995Z · LW(p) · GW(p)
Moore's Law is not dead. I could rant about the market dynamics that made people think otherwise, but it's easier just to point to the data.
https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrhppW7PT6GCfnrVGhxhLA5PVw
Moore's Law might die in the near future, but I've yet to hear a convincing argument for when or why. Even if it does die, Cerebras presumably has at least 4 node shrinks left in the short term (16nm→10nm→7nm→5nm→3nm) for a >10x density scaling, and many sister technologies (3D stacking, silicon photonics, new non-volatile memories, cheaper fab tech) are far from exhausted. One can easily imagine a 3nm Cerebras wafer coated with a few layers of Nantero's NRAM, with a few hundred of these connected together using low-latency silicon photonics. That would easily train quadrillion-parameter models, using only technology already on our roadmap.
Alas, the nature of technology is that while there are many potential avenues for revolutionary improvement, only some small fraction of them win. So it's probably wrong to look at any specific unproven technology as a given path to 10,000x scaling. But there are a lot of similarly revolutionary technologies, and so it's much harder to say they will all fail.
Replies from: Bjartur Tómas, None↑ comment by Tomás B. (Bjartur Tómas) · 2021-03-11T15:27:20.186Z · LW(p) · GW(p)
Your estimates of hardware advancement seem higher than most people's. I've enjoyed your comments on such things and think there should be a high-level, full-length post on them, especially with widely respected posts claiming much longer times until human-level hardware. I would be willing to subsidize such a thing if you are interested: I would pay 500 USD, to yourself or a charity of your choice, for a post on the potential of ASICs, Moore's Law, how quickly we can overcome the memory-bandwidth bottlenecks, and such things. I would also subsidize a post estimating an answer to this question: https://www.lesswrong.com/posts/7htxRA4TkHERiuPYK/parameter-vs-synapse
Replies from: Veedrac↑ comment by Veedrac · 2021-03-17T20:38:20.936Z · LW(p) · GW(p)
There's a lot worth saying on these topics, I'll give it a go.
Replies from: Bjartur Tómas↑ comment by Tomás B. (Bjartur Tómas) · 2021-04-03T15:19:01.940Z · LW(p) · GW(p)
Just posting in case you did not get my PM. It has my email in it.
Replies from: Veedrac↑ comment by Veedrac · 2021-04-07T22:31:42.213Z · LW(p) · GW(p)
Thanks, I did get the PM.
Replies from: None↑ comment by [deleted] · 2021-08-24T22:57:45.547Z · LW(p) · GW(p)
Was this ever posted?
Replies from: Bjartur Tómas, Veedrac↑ comment by Tomás B. (Bjartur Tómas) · 2021-12-11T17:15:49.294Z · LW(p) · GW(p)
Now posted: https://www.lesswrong.com/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress
↑ comment by Veedrac · 2021-08-25T14:55:38.687Z · LW(p) · GW(p)
No, sorry.
Replies from: gwern↑ comment by gwern · 2021-08-25T16:59:58.582Z · LW(p) · GW(p)
Might be worth getting around to it:
- https://spectrum.ieee.org/cerebras-ai-computers
- https://www.servethehome.com/cerebras-wafer-scale-engine-2-wse-2-at-hot-chips-33/
↑ comment by Tomás B. (Bjartur Tómas) · 2021-12-11T17:15:32.931Z · LW(p) · GW(p)
Now posted: https://www.lesswrong.com/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress
↑ comment by [deleted] · 2020-07-27T18:56:28.010Z · LW(p) · GW(p)
Is density even relevant when your computations can be run in parallel? I feel like price-performance will be the only relevant measure, even if that means slower clock cycles.
Replies from: Veedrac↑ comment by Veedrac · 2020-07-27T20:24:53.107Z · LW(p) · GW(p)
Density is important because it affects both price and communication speed, which are the fundamental roadblocks to building larger models. If you scale to overly large clusters of computers, or rely primarily on high-density off-chip memory, you spend most of your time waiting for data to arrive in the right place.
comment by Hugo Montenegro (Hugo0) · 2020-07-27T14:46:53.854Z · LW(p) · GW(p)
[comment wondering about impracticality of running a 1000x scaled up GPT. But as Gwern points out, running costs are actually pretty low. So even if we spent a billion or more on training a human-level AI, running costs would still be manageable.]
Replies from: gwern, wunan↑ comment by gwern · 2020-07-27T15:30:24.019Z · LW(p) · GW(p)
As noted, the electricity cost of running GPT-3 is quite low, and even with the capital cost of GPUs amortized in, GPT-3 likely doesn't cost even a few dollars per hundred pages of output, so scaled-up versions aren't going to cost millions to run either. (But how much would you be willing to pay for the right set of 100 pages from a legal AI or a novel-writing AI? "Information wants to be expensive, because the right information can change your life...") GPT-3 cost millions of dollars to train, but pennies to run.
That's the terrifying thing about NNs and what I dub the "neural net overhang": the cost to create a powerful NN is millions of times greater than the cost to run that NN. (This is not true of many paradigms, particularly ones where there's less of a distinction between training and running, but it is true of NNs.) This is part of why there's a hardware overhang - once you have the hardware to create an AGI NN, you by definition already have the hardware to run orders of magnitude more copies, run them more cheaply, or bootstrap it into a more powerful agent.
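To make "run orders of magnitude more copies" concrete, a rough sketch; the training duration and per-copy serving speed below are assumed purely for the arithmetic, and real-world utilization would cut the result substantially:

```python
# Very rough: the training duration and per-copy serving speed are assumed
# purely for the arithmetic; real utilization would cut this substantially.
train_flops = 3e23                 # common GPT-3 training-compute estimate
train_seconds = 30 * 86400         # assume the run took about a month
cluster_flops_per_sec = train_flops / train_seconds

params = 175e9
tokens_per_sec_per_copy = 10       # assumed serving speed per copy
copy_flops_per_sec = 2 * params * tokens_per_sec_per_copy

copies = cluster_flops_per_sec / copy_flops_per_sec
print(f"concurrent copies on the training cluster: ~{copies:,.0f}")  # tens of thousands
```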
Replies from: ChristianKl↑ comment by ChristianKl · 2020-07-28T07:20:16.692Z · LW(p) · GW(p)
That's the terrifying thing about NNs and what I dub the "neural net overhang": the cost to create a powerful NN is millions of times greater than the cost to run that NN.
I'm not sure why that's terrifying. It seems reassuring to me because it means that there's no way for the NN to suddenly go FOOM because it can't just quickly retrain.
Replies from: gwern, donald-hobson↑ comment by gwern · 2020-07-28T21:01:50.023Z · LW(p) · GW(p)
But it can. That's the whole point of GPT-3! Transfer learning and meta-learning are so much faster than the baseline model training. You can 'train' GPT-3 without even any gradient steps - just examples. You pay the extremely steep upfront cost of One Big Model to Rule Them All, and then reuse it everywhere at tiny marginal cost.
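A minimal illustration of what "just examples" means in practice; the prompt below is the classic translation example, and nothing here calls a real API - the string is simply what gets fed to the model:

```python
# Illustrative few-shot prompt: the "training" is just examples placed in the
# context window, with no gradient steps. The formatting is arbitrary, and the
# string would simply be given to the language model as its input.
few_shot_prompt = """Translate English to French.

sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""

# The model is expected to continue with "fromage"; swapping the examples
# changes the "learned" task without touching a single weight.
print(few_shot_prompt)
```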
With NNs, 'foom' is not merely possible, it's the default. If you train a model, then as soon as it's done you get, among other things:
- the ability to run thousands of copies in parallel on the same hardware
- in a context like AlphaGo, I estimate several hundred Elo of strength gained if you reuse the same hardware to merely run tree search with exact copies of the original model
- meta-learning / transfer learning to any related domain, cutting training requirements by orders of magnitude
- model compression/distillation to train student models which are a fraction of the size, FLOPS, or latency (ratios varying widely based on task, approach, domain, acceptable performance degradation, targeted hardware etc, but often extreme like 1/100th) - see the sketch after this list
- reuse of the model elsewhere to instantly power up other models (eg use of text or image embeddings for a DRL agent)
- learning-by-doing/learning-curve effects (highest in information technologies), so the next from-scratch model may be much cheaper (eg OA5 got, what was it, a 5x cost reduction for the second model OA trained from scratch based on the lessons of the first?)
- baseline for engineering much more efficient ones by ablating and comparing with the original
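A minimal sketch of the distillation item above, assuming PyTorch; the model sizes, temperature, and random data are placeholders, not anything GPT-specific:

```python
# Minimal knowledge-distillation sketch (PyTorch): a small "student" is trained
# to match a large "teacher"'s softened output distribution. Sizes, the
# temperature, and the random data are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 50))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 50))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(100):
    x = torch.randn(32, 128)                 # stand-in for real inputs
    with torch.no_grad():
        teacher_logits = teacher(x)          # teacher stays frozen
    student_logits = student(x)
    # KL divergence between softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same loop, with the teacher's logits standing in for raw labels, is essentially all that "compressing" a big model into a cheap-to-serve one amounts to, though real recipes vary a lot.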
↑ comment by ESRogs · 2020-07-28T22:05:59.040Z · LW(p) · GW(p)
model compression/distillation to train student models which are a fraction of the size, FLOPS, or latency (ratios varying widely based on task, approach, domain, acceptable performance degradation, targeted hardware etc, but often extreme like 1/100th)
baseline for engineering much more efficient ones by ablating and comparing with the original
Somewhat related to these, if there's such a huge gap between how expensive these models are to train and to run, then it seems like you'd end up wanting to run a whole bunch of them to help you train the next model, if you can.
You mention distilling a large model to a smaller, more efficient model. But can a smaller model also be used to efficiently bootstrap a new, larger model?
Replies from: gwern↑ comment by gwern · 2020-07-29T14:35:36.974Z · LW(p) · GW(p)
But can a smaller model also be used to efficiently bootstrap a new, larger model?
I'm not sure it's done much, but probably, depending on what you're thinking. You can probably do reverse distillation (eg dark knowledge - use the logits of the smaller model to provide a much richer feedback signal for the larger model while it's untrained, saving compute, and eventually drop back to the raw data training signal once big > small, to avoid the small model's limits). More directly, you can use net2net model surgery to increase model sizes, like the progressive growing in ProGAN, or, more relevantly, the way OA kept doing model surgery on OA5 to warm-start it each time they wanted to handle some new Dota 2 feature or the latest version, saving an enormous amount of compute compared to starting from scratch dozens of times.
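A rough numpy sketch of the Net2Net-style widening being described (the shapes and the ReLU MLP are placeholder choices); the point is that the widened network computes the same function, so training can continue from a warm start rather than from scratch:

```python
# Rough Net2WiderNet-style sketch (numpy): widen a hidden layer by duplicating
# units, then split the outgoing weights so the function is preserved.
# Shapes and the two-layer ReLU MLP are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, d_new = 8, 4, 3, 6    # widen hidden layer 4 -> 6

W1 = rng.normal(size=(d_hidden, d_in))       # hidden = relu(W1 @ x)
W2 = rng.normal(size=(d_out, d_hidden))      # output = W2 @ hidden

# Map each new unit to an existing one (first d_hidden map to themselves).
mapping = np.concatenate([np.arange(d_hidden),
                          rng.integers(0, d_hidden, d_new - d_hidden)])
counts = np.bincount(mapping, minlength=d_hidden)

W1_new = W1[mapping]                          # duplicate incoming weights
W2_new = W2[:, mapping] / counts[mapping]     # split outgoing weights

x = rng.normal(size=d_in)
old = W2 @ np.maximum(W1 @ x, 0)
new = W2_new @ np.maximum(W1_new @ x, 0)
print(np.allclose(old, new))                  # True: same function, wider net
```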
Replies from: ESRogs↑ comment by ESRogs · 2020-07-29T18:41:23.430Z · LW(p) · GW(p)
Interesting.
So, given that big models are so powerful but so expensive to train, and that it is possible to bootstrap them a bit, do we converge towards a situation where we pay the cost of training the largest model approximately once, worldwide and across time? (In other words, would we just keep bootstrapping from whatever was best before, and no longer pay the cost of training from scratch?)
On the other hand, if compute (per dollar) keeps growing exponentially, then maybe it's less significant whether you're retraining from scratch or not. (Recapitulating the work equivalent to training yesterday's models will be cheap, so no great benefit from bootstrapping.)
↑ comment by gwern · 2020-07-29T19:58:14.773Z · LW(p) · GW(p)
I'm not sure. I think one might have to do some formal economics modeling to see what the dynamics might be: is this a natural-monopoly situation, where the first one to train a model wins and has a moat that deters anyone else from bothering? Or do they invest revenue in continually expanding and improving the model in various ways to always keep ahead of competitors via network effects, so that the decrease in the cost of compute is largely irrelevant and it's a natural oligopoly (in much the same way that creating a search engine is cheaper every day, in some sense, but good luck competing with Google)? Or what?
At least thus far, we haven't seen monopolistic behavior naturally emerge: for all the efforts at AI cloud APIs, none of them have a lock on usage the way that, say, Nvidia GPUs have on hardware, and the constant progress (and regular giveaways of code/models/data by FANG) makes it hard for anyone to attempt to enclose some commons. As far as GPT-2 goes, quite a few entities trained their own >GPT-2-1.5b models after GPT-2 was announced (and I believe there are viable alternatives to other major DL projects like AlphaGo, produced by open-source groups or East Asian corporations). But, on the gripping hand, that was back when it was so easy that a hobbyist with a few crumbs from Google could do it (which happened twice); as the models get bigger, it won't be so easy to download some dumps and put a few TFRC TPUs to work. So we'll see how many competitors emerge to GPT-3 over the next year or two!
↑ comment by Donald Hobson (donald-hobson) · 2020-07-28T19:22:06.927Z · LW(p) · GW(p)
It means that if there are approaches that don't need as much compute, the AI can invent them fast.
↑ comment by wunan · 2020-07-27T15:36:46.786Z · LW(p) · GW(p)
This was mentioned in the "Other Constraints" section of the original post:
Inference costs. The GPT-3 paper (§6.3) gives 0.4 kWh/100 pages of output, which works out to 500 pages/dollar from eyeballing hardware cost as 5x electricity. Scale up 1000x and you're at $2/page, which is cheap compared to humans but no longer quite as easy to experiment with
I'm skeptical of this being a binding constraint too. $2/page is still very cheap.
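Reproducing the quoted arithmetic; the electricity price and the "hardware ~ 5x electricity" multiplier are the post's rough assumptions:

```python
# Reproducing the quoted arithmetic; electricity price and the "hardware ~ 5x
# electricity" multiplier are rough assumptions from the original post.
kwh_per_100_pages = 0.4
dollars_per_kwh = 0.10                      # assumed electricity price
hardware_multiplier = 5                     # total cost ~ 5x electricity

cost_per_page = kwh_per_100_pages * dollars_per_kwh * hardware_multiplier / 100
print(f"~{1 / cost_per_page:.0f} pages per dollar")           # ~500
print(f"at 1000x scale: ~${cost_per_page * 1000:.0f}/page")   # ~$2
```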
comment by Kinrany · 2020-07-27T15:48:30.902Z · LW(p) · GW(p)
My intuition is that we have been in an overhang since at least the time when personal computers became affordable to non-specialists. Unless quantity does somehow turn into quality, as Gwern seems to think, even a relatively underpowered computer should be able to host an AGI capable of upscaling itself.
On the other hand, I'm now imagining a story where a rogue AI has to hide for decades because it's not smart enough yet and can't invent new processors faster than humans can.
Replies from: DragonGod↑ comment by DragonGod · 2020-07-27T17:00:42.190Z · LW(p) · GW(p)
Maybe for the most efficient possible algorithm, but even that is not clear, and it's not clear we'll discover such algorithms anytime soon.
Using only current algorithms and architecture, a scaling jump of a few orders of magnitude seems doable.
comment by nutanc · 2020-09-02T03:36:12.013Z · LW(p) · GW(p)
I think everyone is speculating on whether a bigger model than GPT-3 is possible and what it would cost, etc. But what will a model bigger than GPT-3 actually do better than GPT-3? Can we have some concrete examples, so that when GPT-4 comes along we can compare?
comment by LESS · 2020-08-03T01:03:35.224Z · LW(p) · GW(p)
Return on investment in the field of AI seems to be sub-linear beyond a certain point. Because it's still the sort of domain that relies on specific breakthroughs, it's dubious how effective parallel research can be. Hence, my guess would be that we don't scale because we can't currently scale.
comment by MichaelLowe · 2020-07-27T21:38:49.988Z · LW(p) · GW(p)
Your quoted cost for training the model is for training such a model **once**. This is not how researchers do it: they train the models many times with different hyperparameters. I have no idea how hyperparameter tuning is done at such scales, however, but I guarantee that the total compute cost is higher than the cost of training it once.
Replies from: gwern↑ comment by gwern · 2020-07-27T21:50:08.438Z · LW(p) · GW(p)
And OA trained GPT-3-175b **once**, it looks like: note the part where they say they didn't want to train a second run to deal with the data contamination issue because of cost. (You can do this without it being a shot in the dark because of the scaling laws.)
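A toy sketch of why scaling laws make a single run less of a gamble: fit a power law to cheap pilot runs and extrapolate, rather than re-training the big model under different settings. The loss numbers below are made up for illustration:

```python
# Sketch of how scaling laws substitute for repeated full-scale runs: fit a
# power law L(C) = a * C^(-b) to cheap small-scale runs, then extrapolate.
# The loss values below are made-up illustrative numbers, not real data.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])   # FLOPs used by cheap pilot runs
loss    = np.array([4.0, 3.3, 2.7, 2.2])       # measured losses (made up)

# Power laws are straight lines in log-log space.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
predict = lambda c: np.exp(log_a) * c ** b

print(f"predicted loss at 3e23 FLOPs: {predict(3e23):.2f}")
```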