My AI Predictions 2023 - 2026
post by HunterJay · 2023-10-16
Epistemic status: my mostly intuitive guesses, with only a few days spent dwelling on them and no serious research beyond what I already knew.
I work in the startup sphere, in field robotics, and I am about to have an opportunity to majorly shift what I am working on. To work out what projects might make sense on a multi-year time frame, I wrote up what I thought might happen in AI in the next couple of years as specifically as I could.
I found the exercise surprisingly useful. It turned a whole bunch of vague "X will get better over time" into actionable "X will be practical in around Y years". I don't think my guesses will end up being very accurate, but having something solid forced me to actually think about the future and turn my gibberish-internal-intuitions into more-consistent-guesses. I was really surprised at how much it helped, actually.
So, here's that list of predictions. I'm sharing it here more as a "here's how you can do something similar" than as a "here's a well-researched report on future trends" (which it definitely is not). I didn't go to the trouble of putting percentages on each "X in year Y" guess, but it's about 40%-70% for any given one.
________________________________
[Written 6th October 2023]
Rest of 2023:
- Small improvements to LLMs
- Google releases something competitive to ChatGPT.
- OpenAI and Anthropic slightly improve GPT-4 and Claude 2.
- Meta or another group releases better open source models, up to around GPT-3.5 level.
- Small improvements to Image Generation
- Dalle3 gets small improvements.
- Google or Meta releases something similar to Dalle3, but not as good.
- Slight improvements to AI generated videos.
- Basic hooking up of Dalle3 to video generation with tacked-on software; not really consumer-ready yet. Works in an interesting way, like Dalle1 did, but isn't useful for much yet.
- Further experiments hooking LLMs up to robotics/cars, but nothing commercial released.
- Small improvements in training efficiency and data usage, most obviously in smaller models becoming more capable than older, larger ones.
2024
- GPT-5 or equivalent is released.
- It’s as big a jump on GPT-4 as GPT-4 was on GPT-3.5.
- Can do pretty much any task when guided by a person, but still gets things wrong sometimes.
- Multimodal inputs, browsing, and agents based on it are all significantly better.
- Agents can do basic tasks on computers -- like filling in forms, working in Excel, pulling up information on the web, and basic robotics control. This reaches the point where it is actually useful for some of these things.
- Robotics and long-horizon agents still don’t work well enough for production. Things fall apart if the agent has to do something with too many branching possibilities or on time horizons beyond half an hour or so. This time period / complexity quickly improves as low-hanging workarounds are added.
- Context windows are no longer an issue for text generation tasks.
- Algorithmic improvements, summarisation workarounds, better attention over effectively infinite context windows, or something like that solves the problem pretty much completely from a user's perspective for the best models.
- GPT-5 has the context of all previous chats, Copilot has the entire codebase as context, etc.
- This is later applied to agent usage, and agents quickly improve to become useful, in the same way that LLMs weren’t useful for everyday work until ChatGPT.
- Online learning begins -- GPT-5 or equivalent improves itself slowly, autonomously, but not noticeably faster than current models are improved with human effort and a training step. It does something like select its own data to train on from all of the inputs and outputs it has received, and is trained on this data autonomously and regularly (daily or more often).
- AI selection of what data to train on is used to improve datasets in general - training for one epoch on all data becomes less common, as some high quality or relevant parts of giant sets are repeated more often or allowed larger step size.
- Autonomous generation of data is used more extensively, especially for aligning base models, or for training models smaller than the best ones (by using data generated by larger models).
- Code writing is much better, and tie-ins to Visual Studio are better than what GPT-4 offers today, as well as having much better context.
- Open source models as capable as GPT-4 become available.
- Training and runtime efficiency improves by at least a factor of two, while hardware continues improvements on trend.
- This comes from a combination of datasets improved by AI curation and generation, improved model architectures, and better hyperparameter selection, including work similar to the optimisations gained from discovering the Chinchilla scaling laws (roughly sketched below).
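For concreteness, here is a minimal sketch of the kind of sizing calculation the Chinchilla result implies. The constants (roughly 20 training tokens per parameter, training compute of about 6·N·D FLOPs) are the commonly cited approximations from the Chinchilla paper (Hoffmann et al., 2022), not figures from this post.

```python
# Rough compute-optimal sizing per the Chinchilla scaling result:
# train on roughly 20 tokens per parameter, with training compute
# C ~ 6 * N * D FLOPs. All constants are approximate.

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Return (parameters, training tokens) that roughly balance a compute budget."""
    # C = 6 * N * D and D = tokens_per_param * N  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    for flops in (1e23, 1e24, 1e25):
        n, d = chinchilla_optimal(flops)
        print(f"{flops:.0e} FLOPs -> ~{n/1e9:.0f}B params, ~{d/1e12:.1f}T tokens")
```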
2025
- AI agents are used in basic robotics -- like LLM-driven delivery robots and (in demos of) household and factory robots, like the Tesla Bot. Multimodal models basically let them work out of the box, although not 100% reliably yet.
- Trends continue from the previous year -- the time horizons agents can work on increase, LLMs improve on traditional LLM tasks, smaller models get more capable, and the best models get bigger.
- AI curated and generated data becomes far more common than previously, especially for aligning models.
- Virtual environments become more common for training general purpose models, combined with traditional LLM training.
- Code-writing AI (just LLMs with context and finetuning) is capable of producing complete basic apps, solving most basic bugs, and working with human programmers very well -- it's pair programming with an AI, with the AI knowing all of the low-level details (a savant who has memorised the docs and can use them perfectly, and can see the entire codebase at once), and the human keeping track of the higher-level plan and goals. The AI can also be used to recommend architectures and approaches, of course, and gradually does more and more between human inputs.
- If progress ever feels like it has hit a lull, it will be in this period leading up to models capable enough for robotics control, long-time-frame agents, and full-form video generation, which I don't expect to happen in a large-scale way in 2025.
- Possibly GPT-6 or equivalent is released, but more likely continuous improvements to GPT-5 carry forward. There’s not a super meaningful difference at this point, with online learning continually improving existing models.
2026
- GPT-6 or equivalent capabilities are reached (i.e. as big a jump as GPT-3.5 to 4, to 5, to 6).
- Multimodal works great out of the box. The same model can do video, image, text, audio, and other analysis and generation, including outputting commands to control digital agents and robots via API calls.
- Simulated environments are used in training -- online learning inside a video game, inside a virtual machine, etc. This could be training on long sequences of pre-generated actions, like traditional LLMs learning from existing text, as well as training on sequential actions chosen by the LLM as it trains, like reinforcement learning (a toy sketch of this follows at the end of this list).
- Whether from OpenAI or others, this level of LLM enables general purpose household, warehouse, and factory robots to start actually being useful for some tasks, like cleaning and sorting. They are expensive, rare, and not particularly reliable, but are being manufactured at scale by Tesla and others.
- Realistic fully automated video generation is better than Dalle3 image generation, but limited to reasonably short snippets (<60s) without human intervention before it looks strange. This length quickly increases, and workarounds and human input allow long length high quality videos to be produced.
- Progress appears to accelerate again, as online learning in virtual environments, generated data, and robotics systems and digital agents enter common usage.
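A toy sketch of the simulated-environment training idea mentioned in the list above. The `model` and `env` interfaces are hypothetical stand-ins, and the filter-then-finetune loop is just one plausible shape such training could take, not a claim about how any lab actually does it.

```python
# Toy sketch: the model acts in a simulated environment, trajectories are
# collected, and the model is periodically fine-tuned on the ones that
# scored well. `model`, `env`, and `finetune` are hypothetical stand-ins.

def collect_episode(model, env, max_steps=100):
    """Run one episode and return (trajectory, total_reward)."""
    obs, trajectory, total_reward = env.reset(), [], 0.0
    for _ in range(max_steps):
        action = model.act(obs)              # e.g. an LLM emitting a tool/robot command
        obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
        total_reward += reward
        if done:
            break
    return trajectory, total_reward

def online_training_loop(model, env, rounds=10, episodes_per_round=32):
    for _ in range(rounds):
        episodes = [collect_episode(model, env) for _ in range(episodes_per_round)]
        # Keep only the better half of trajectories, then fine-tune on them,
        # roughly in the spirit of filtered behaviour cloning / self-imitation.
        episodes.sort(key=lambda e: e[1], reverse=True)
        good = [traj for traj, _ in episodes[: episodes_per_round // 2]]
        model.finetune(good)
    return model
```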
2027 & Beyond
- I struggle to imagine what the world looks like beyond this point. The above trends may continue for some time, with robotics and digital agents taking over a larger and larger share of the world.
- At some point, a major step change will also happen when AI is capable of generating new major scientific breakthroughs on its own -- more akin to Einstein coming up with relativity to explain known data than akin to predicting the shape of proteins.
- A massive change will come as the share of AI improvement caused by AI's own work surpasses the share caused by human work, possibly later this decade.
- It seems likely to me that superintelligence -- and all of the sci-fi-seeming technologies and X-risk that come with it -- will appear soon after this period. I put a significant probability on it happening this decade. (I have previously said a 50% chance of AGI by 2029, and superintelligence very shortly afterwards, and that still feels right to me.) If it doesn't appear by then, I would expect one of the following to be true:
- Regulation significantly slows development.
- No algorithmic advances on the scale of the transformer are developed.
- There is something unexpectedly limiting about the transition from oracle to agentic AI, and we have a huge “oracle overhang” -- where a new architecture that works well as an agent will suddenly be as capable as a million humans with all of the knowledge and skills of GPT-6-to-10, once that theoretical breakthrough happens.
Comments
comment by pathos_bot · 2023-10-16T06:30:31.524Z
The major shift in the next 3 years will be that, as a rule, top level AI labs will not release their best models. I'm certain this has somewhat been the case for OpenAI, Anthropic and Google for the past year. At some point full utilization of a SOTA model will be a strategic advantage for companies themselves to use for their own tactical purposes. The moment any $X of value can be netted from an output/inference run of a model for less than $(X-Y) in costs, where Y represents the marginal labor/maintenance/averaged risk costs for each run's output, no company would ever be advantaged by releasing the model to be used by anyone other than themselves. This closed-source event horizon I imagine will occur sometime in late 2024.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-16T13:34:04.000Z
Related previous discussion:
Soft takeoff can still lead to decisive strategic advantage (AI Alignment Forum)
Review of Soft Takeoff Can Still Lead to DSA (AI Alignment Forum)
↑ comment by Tomás B. (Bjartur Tómas) · 2023-10-16T15:52:10.127Z
This is a very good, and very scary, point -- another thing that could produce at least the appearance of a discontinuity. One symptom of this scenario would be a widespread, false belief that "open source" models are SOTA.
Might be good to brainstorm other symptoms to prime ourselves to recognize when we are in this scenario. Complete hiring-freezes/massive layoffs at the firms in question, aggressive expansion into previously-unrelated markets, etc.
↑ comment by Insub · 2023-10-16T16:06:55.533Z
Not sure I understand; if model runs generate value for the creator company, surely they'd also create value that lots of customers would be willing to pay for. If every model run generates value, and there's ability to scale, then why not maximize revenue by maximizing the number of people using the model? The creator company can just charge the customers, no? Sure, competitors can use it too, but does that really override losing an enormous market of customers?
↑ comment by pathos_bot · 2023-10-16T19:36:52.075Z
That's very true, but there are two reasons why a company may not be inclined to release an extremely capable model:
1. Safety risk: someone could use the model and jailbreak it in some unexpected way, and the risk of misuse is much higher with a more capable model. OpenAI had GPT-4 for 9-10 months before releasing it, spending that time trying to RLHF and even lobotomize it into being safer. The Summer 2022 internal version of GPT-4 was, according to Microsoft researchers, more generally capable than the released version (as evidenced by the draw-a-unicorn test). This needed delay and the assumed risks will naturally be much greater with a larger model, both because larger models so far seem harder to simply RLHF into unjailbreakability, and because, being more capable, any jailbreak carries more risk, so the general business-level margin of safety will be higher.
2. Sharing/exposing capabilities: Any business wants to maintain a strategic advantage. Releasing a SOTA model will allow a company's competitors to use it, test its capabilities and train models on its outputs. This reality has become more apparent in the past 12 months.
↑ comment by Tomás B. (Bjartur Tómas) · 2023-10-16T16:41:50.076Z
It does seem to me a little silly to give competitors API access to your brain. If one has enough of a lead, one can just capture one's competitors' markets.
comment by Vladimir_Nesov · 2023-10-16T17:04:00.391Z
Text data is running out; the $2+ billion scale training runs due in 2-4 years are going to devour the rest of it. This might be sufficient to reach AGI, in the sense of capability for mostly autonomous research -- in particular, development of compute multipliers for training runs and plucking the rest of the low-hanging fruit of the unsupervised learning revolution, overcoming scarcity of hardware.
If AGI is not in range of those runs, and if there is no synthetic data generation process useful at that scale, the bulk of compute goes to multimodality (though latency will still cripple many use cases), and the rate of competence improvement may slow for years. This is the main scenario where I see significant hope for regulation to take hold. Doing better and countering the risk of AGI in the initial rush to billion dollar scale runs requires a nebulously defined pause right now.
↑ comment by avturchin · 2023-10-16T20:43:29.872Z
One way to get more data is to pay humans to create the specific types of data we need. For example, if a billion people each write 100 pages on the unique topic of their expertise -- with the needed data generation directed by AI -- maybe that will be enough.
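A quick back-of-envelope check on the scale of that proposal; the words-per-page and tokens-per-word figures below are rough assumptions, not numbers from the comment.

```python
# Back-of-envelope: how many tokens would a billion people writing
# 100 pages each produce? Page length and tokens-per-word are assumptions.
people = 1e9
pages_per_person = 100
words_per_page = 500        # assumption: dense single-spaced page
tokens_per_word = 1.3       # assumption: typical BPE tokenisation ratio

total_tokens = people * pages_per_person * words_per_page * tokens_per_word
print(f"~{total_tokens:.1e} tokens")   # ~6.5e13, i.e. tens of trillions of tokens
```

Under those assumptions it comes to tens of trillions of tokens, which is considerably more text than the few trillion tokens used in the largest publicly described 2023 training runs -- though actually eliciting and collecting it would be its own challenge.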
↑ comment by HunterJay · 2023-10-18T00:27:21.494Z
I'm somewhat skeptical that running out of text data will meaningfully slow progress. Today's models are so sample inefficient compared with human brains that I suspect there are significant jumps possible there.
Also, as you say:
- Synthetic text data might well be possible (especially for domains where you can test the quality of the produced text externally, e.g. programming).
- Reinforcement-learning-style virtual environments can also generate data (and not necessarily only physics based environments either -- it could be more like playing games or using a computer).
- And multimodal inputs give us a lot more data too, and I think we've only really scratched the surface of multimodal transformers today.
↑ comment by Vladimir_Nesov · 2023-10-18T01:00:02.774Z
New untested ideas take unpredictable time to develop. Given the current timeline of pure compute/investment scaling, there is no particular reason for all bottlenecks to be cleared just in time for scaling to continue without slowing down. Hence the possibility of it slowing down at the upcoming possible bottlenecks of natural text data and available-on-short-notice hardware, which are somewhat close together.
Sample efficiency (with respect to natural data) can in principle be improved, humans and some RL systems show it's possible, and synthetic data is a particular form this improvement might take. But it's not something that's readily available, known to subsume capabilities of LLMs and scale past them. Also, straying further from the LLM recipe of simulating human text might make alignment even more intractable. In a universe where alignment of LLMs is feasible within the current breakneck regime, the source of doom I worry about is an RL system that either didn't train on human culture or did too much reflection to remain within its frame.
Compared to natural text, multimodal data and many recipes for synthetic data give something less valuable for improving model competence, reducing return on further scaling. When competence improvement slows down, and if AGI in the sense of human-level autonomous work remains sufficiently far away at that point, investment scaling is going to slow down as well. Future frontier models cost too much if there is no commensurate competence improvement.
↑ comment by Archimedes · 2023-10-16T22:52:53.524Z
My hunch is that there's sufficient text already if an AI processes it more reflectively. For example, each chunk of text can be fed through a series of LLM prompts intended to enrich it, and then the model trains on the enriched/expanded text.
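A minimal sketch of that enrich-then-train idea; `call_llm` and the prompt templates are hypothetical stand-ins, not anything specified in the comment.

```python
# Minimal sketch: each chunk of raw text is passed through a few rewriting
# prompts, and the outputs are added to the training set alongside the
# original. `call_llm` is a hypothetical stand-in for whatever model API
# is available.

ENRICHMENT_PROMPTS = [
    "Summarise the following passage and list its key claims:\n\n{chunk}",
    "Explain the following passage to a beginner, step by step:\n\n{chunk}",
    "List questions this passage answers, with short answers:\n\n{chunk}",
]

def enrich_corpus(chunks, call_llm):
    """Yield original chunks plus LLM-generated reflections on each one."""
    for chunk in chunks:
        yield chunk
        for template in ENRICHMENT_PROMPTS:
            yield call_llm(template.format(chunk=chunk))
```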
comment by Gordon Seidoh Worley (gworley) · 2023-10-16T17:29:19.016Z
I dislike this type of post. Predictions are nice and all, but I can take no updates from these predictions because I know nothing about the author, have no particular reason to give their predictions credence, and no real evidence is given to justify the predictions (evidence could include something like making real-money bets on prediction markets, not just data and arguments!). So for me this post is nothing but speculation, and I'm disappointed that LessWrong readers voted it up.
Note: This is nothing against the post's author! I think it's totally fine for a person to speculate. I'm just both surprised this became a frontpage post and that it was voted up!
↑ comment by HunterJay · 2023-10-16T21:19:44.297Z
I am honestly very surprised it became a front page post too! It totally is just speculation.
I tried to be super clear that these were just babbled guesses, and I was mainly just telling people to try to do the same, rather than trusting my starting point here.
The other thing that surprised me is that there haven't been too many comments saying "this part is off", or "you missed trend X!". I was kind of hoping for that!
↑ comment by Logan Riggs (elriggs) · 2023-10-16T19:34:19.813Z
I really like this post, but more for:
- Babbling ideas I might not have thought of previously (e.g. the focus here on long-time horizon tasks)
- Good exercise to do as a group to then dig into cruxes
than updating my own credences on specifics.
↑ comment by Matthew Barnett (matthew-barnett) · 2023-10-16T17:35:37.367Z
I agree the author should attach credences. I'd also appreciate a little more specificity, for example with the prediction, "Google releases something competitive to ChatGPT." I'm not sure whether that refers to ChatGPT-3.5 or ChatGPT-4 and the meaning here is actually quite important.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-16T04:57:54.736Z
At some point, a major step change will also happen when AI is capable of generating new major scientific breakthroughs on its own -- more akin to Einstein coming up with relativity to explain known data than akin to predicting the shape of proteins.
The acceleration of AI R&D will begin sooner than that, I think. We could get a 10x speedup just by automating the typical OpenAI engineer, I think.
↑ comment by HunterJay · 2023-10-16T06:00:45.369Z
I broadly agree. I think AI tools are already speeding up development today, and on reflection I don't actually think AI being more capable than humans at modeling the natural world would be a discontinuous point on the ramp up to superintelligence.
It would be a point where AI gets much harder to predict, though, which is probably why it was on my mind when I was trying to come up with predictions.
↑ comment by Tomás B. (Bjartur Tómas) · 2023-10-16T16:44:14.470Z
And OpenAI has explicitly said this is what they want to do! Their Superalignment strategy looks suspiciously like "gunning for RSI" (recursive self-improvement).
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-16T04:56:20.994Z
(i.e. as big a jump as GPT-3 to 4, to 5, to 6).
Earlier you said the jump between 4 and 5 was as big as the jump between 3.5 to 4.
comment by Mike Capuano (mike-capuano) · 2024-01-15T21:08:39.233Z
I see some discussion here and in the associated Reddit thread about more efficient and smaller models. I think GPT-4 is at about one trillion parameters. I was under the impression that model sizes were increasing at about 10x/year, so that could mean GPT-5 is 10 trillion and GPT-6 (or equivalent) is 100 trillion parameters by 2026. Does that sound about right, or is there some sort of algorithmic change likely to happen that will allow LLMs to improve without the number of parameters growing 10x/year?
On a related note, I've heard backend cluster sizes are supposedly growing at similar rates. 32K nodes with 8 GPUs per node today growing at 10x per year. To me that seems improbable as it would be 320K nodes in 2025 and then 3.2M nodes in 2026.
Thoughts or info that you might have here?
↑ comment by HunterJay · 2024-01-20T06:44:42.199Z
10x per year for compute seems high to me. Naïvely I would expect the price/performance of compute to double every 1-2 years as it has been forever, with overall compute available for training big models being a function of that + increasing investment in the space, which could look more like one-time jumps. (I.e. a 10x jump in compute in 2024 may happen because of increased investment, but a 100x increase by 2025 seems unlikely.) But I am somewhat uncertain of this.
For parameters, I definitely think the largest models will keep getting bigger, and for compute to be the big driver of that -- but also I would expect improvements like mixture of experts models to continue, which effectively allow more parameters with less compute (because not all of the parameters are used at all times). Other techniques, like RLHF, also improve the subjective performance of models without increasing their size (i.e. getting them to do useful things rather than only predict what next word is most likely).
I guess my prediction here would simply be that things like this continue, so that with X compute you could get a better model in 2025 than you could in 2023. But you also could have 5x to 50x more compute in 2025, so you get the sum of those improvements!
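To illustrate how those factors compound, a rough worked example; every number here is an assumption chosen for the sake of the arithmetic, not a claim from the thread.

```python
# Rough illustration of the compounding argument above; all numbers are
# assumptions for the example.
years = 2
hardware_gain = 2 ** (years / 1.5)   # price/performance doubling every ~1.5 years
spend_gain = 10                      # a one-time ~10x jump in training investment
algorithmic_gain = 3                 # data/architecture/hyperparameter improvements

raw_compute = hardware_gain * spend_gain
print(f"~{raw_compute:.0f}x more raw training compute over {years} years")
print(f"~{raw_compute * algorithmic_gain:.0f}x 'effective' compute once "
      f"algorithmic efficiency gains are included")
```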
It's obviously far cheaper to play with smaller models, so I expect lots of improvements will initially appear in models small-for-their-time.
Just my thoughts!
comment by Sergii (sergey-kharagorgiev) · 2023-10-16T07:24:07.261Z
I have a similar background (working at a robotics startup), would agree with many points.
GPT-5 or equivalent is released. It’s as big a jump on GPT-4 as GPT-4 was on GPT-3.5.
GPT-4 has (possibly) 10x the parameters of GPT-3.5. A similar jump for GPT-5 might require 10x the parameters again; wouldn't that make it impractical (slow, expensive) to run?
AI agents are used in basic robotics -- like LLM driven delivery robots and (in demos of) household and factory robots
GPT-4-level models are too slow and expensive for real-time applications; how do you imagine this could work? Even in Google's recent robotics demos that are based on "small" transformers, inference speed is one of the bottlenecks.
↑ comment by p.b. · 2023-10-16T09:26:28.193Z
If you scale width more than depth and data more than parameters you can probably go some ways before latency becomes a real problem.
Additionally, it would also make sense to take more time (i.e. larger models) for harder tasks. The user probably doesn't need code or mathematical solutions instantly, as long as it's still 100x faster than a human.
In robotics you probably need something hierarchical, where low-level movements are controlled by small nets.
↑ comment by HunterJay · 2023-10-16T14:13:19.081Z
Agree on lower-depth models being possible; a few other possibilities:
- Smaller models with lower latency could be used, possibly distilled down from larger ones.
- Compute improvements might make it practical to run onboard (like with Tesla's self-driving hardware inside the chest of their android).
- New architectures could work on more than one time scale -- kind of like humans do. E.g. when we walk, not all of the processing is done in the brain. Your spinal cord can handle a tonne of it autonomously. (Will find source tomorrow.)
- LLM-type models could do the parts that can accept higher latency, leaving lower-level processes to handle themselves. Imagine for a household cleaning robot that an LLM-based agent puts out high-level thoughts like "Scan the room for dirty clothes. ... Fold them. ... Put them in the third drawer", and existing low-level stuff actually carries out the instructions. That's an exaggerated example, but you get the idea -- it doesn't have to replace the PID controller!
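A toy sketch of that split, assuming hypothetical `llm`, `perception`, and `low_level_controller` interfaces; the point is only that the high-latency planner runs infrequently while the low-level controller runs every tick.

```python
# Toy sketch: a slow, high-latency LLM planner issues high-level steps,
# while a fast low-level controller (the existing PID/motion stack)
# executes them every tick. All interfaces are hypothetical; in a real
# system the LLM call would run asynchronously rather than briefly
# blocking the loop as in this simplified version.
import time

PLAN_PROMPT = ("You control a household robot. Given this scene summary, "
               "output the next high-level step: {summary}")

def run_robot(llm, perception, low_level_controller, plan_period_s=5.0):
    current_step = "Idle"            # high-level instruction being executed
    last_plan_time = float("-inf")
    while True:
        # Fast loop: execute the current instruction at control rate.
        low_level_controller.execute(current_step, perception.latest_state())

        # Slow loop: re-plan only every few seconds, tolerating LLM latency.
        now = time.monotonic()
        if now - last_plan_time > plan_period_s:
            summary = perception.summarise_scene()
            current_step = llm(PLAN_PROMPT.format(summary=summary))
            last_plan_time = now
```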
↑ comment by HunterJay · 2023-10-18T00:36:44.426Z
I wrote this late at night, so to clarify and expand a little bit:
- "Work on more than one time scale" I think is actually an interesting idea to dwell on for a second. Like, when a person is trying to solve a problem, they will often pace back and forth, or talk, etc. They don't have to do everything in one pass, somehow the complex computation which lets them see and move around can work on a very fast time scale, while other problem solving is going on simultaneously, and only starts to effect motor outputs later on. That's interesting. The spinal cord doing processing independent of the brain thing I mentioned is evident in this older series of (rather horrible) experiments with cats: https://www.jstor.org/stable/24945006
- On the 'smaller models with lower latency' point, we already see models like Mistral-7B outperforming 30B-parameter models because of improvements in data, architecture, and training. I expect this trend to continue. If the largest models are capable of operating a robot out of the box, I think you could take those outputs and use them to train (or otherwise distill down) the larger model to a more manageable size, more specialised for the task.
- On the 'LLMs could do the parts with higher latency' point, just yesterday I saw somebody do something like this with GPT-4V, where they periodically uploaded a photograph of what was in front of them and got GPT-4V to output instructions on how to find the supermarket (walk further forward, turn right, etc.). It kind of worked; that's the sort of thing I was picturing here -- leaving much more responsive systems to handle the low-latency work, like balance, gripping, etc.
comment by Ziyue Wang (VincentWang25) · 2023-10-16T04:56:56.369Z
Interesting to read! Curious about your predictions for AI-safety-related progress. Not sure how much impact it would have on your current predictions.
↑ comment by HunterJay · 2023-10-16T06:05:08.076Z
I am extremely worried about safety, but I don't know as much about it as I do about what's on the edge of consumer/engineering trends, so I think my predictions here wouldn't be useful to share right now! The main way it relates to my guesses here is if regulation successfully slows down frontier development within a few years (which I would support).
I'm doing the ARENA course async online at the moment, and possibly moving into alignment research in the next year or two, so hoping to be able to chat more intelligently on alignment soonish.