What do you expect AI capabilities may look like in 2028?

post by nonzerosum · 2024-08-23T16:59:53.007Z · LW · GW

This is a question post.


From the perspective of AI progress, bottlenecks, capabilities, and safety: what do you think 2028 might look like?

Answers

answer by Nathan Helm-Burger · 2024-08-27T18:18:51.042Z · LW(p) · GW(p)

My current timelines, and the opportunity to bet with or against them:

Recursive Self-Improvement (RSI) by mid 2026

AGI by late 2027, probably sooner.

So in 2028, AGI might be well on the way to ASI (unclear how long this takes). Certainly I would expect it to be able to do any task on a computer faster and better than any human short of a Nobel-prize-level expert in that specific task. I expect very fast and agile control of humanoid robots to be possible, and that humanoid robots will be capable of doing anything a human of average strength and dexterity can do. I don't know how expensive and rare such robots will be; probably both quite expensive and quite rare at that time. They will probably be on a trend of getting rapidly cheaper, though, since their obvious utility for a broad range of purposes will make it worthwhile for investors to put money into scaling up their manufacture.

There are already narrow AIs capable of superhuman drone control, including faster-than-human reaction times. I expect this to continue to advance, becoming more robust, faster, and cheaper. I believe this tech will also extend to letting such drones aim and fire weapons systems at a superhuman level. This has already been demonstrated to be possible to some extent, but I've not yet seen demonstrations of superhuman accuracy and reaction time.

I expect that AI will be capable of controlling lab robots, or of instructing unskilled humans of average dexterity in manipulating lab equipment, to accomplish biology lab tasks. I expect that some combination of AIs (e.g. a general AI trained in tool use and equipped with narrow biology-specific AI like AlphaFold) will be available in 2028 that, when assembled into a system, will be capable of coming up with novel lab protocols to accomplish a wide range of goals. I expect that such a BioAI system will be capable of designing a protocol that uses commonly available, non-governmentally-restricted materials and common lab equipment to assemble a bioweapon. I believe the upper bound on the danger of potential bioweapons will by then be greatly expanded beyond what is currently known, in part due to advances in biological design tools and in part due to public advances in the science of genetics. Therefore, I expect it to be possible for an AI to design and guide the creation of a bioweapon capable of wiping out 99%+ of humanity. I'm not saying I expect this to happen, just that I expect the technology available in 2028 to make this possible.

I expect that AI agency and scaffolding for general models will continue to advance. I expect that many-step tasks will be accomplished reliably. I believe that AI agents of 2028 will be able to act without supervision to successfully pursue long-range, high-level goals like: 'make money however you can, launder it through extensive crypto trades, and then deposit it in the specified bitcoin wallet'.
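To make 'scaffolding' concrete, here is a minimal sketch of the generic plan-act-observe loop such agent systems are built around (with a benign goal). The call_model and run_tool functions are stubs of mine standing in for an LLM API and a tool executor; they are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of an agent scaffold: repeatedly ask a model for the next
# action toward a goal, execute it with a tool, and feed the observation back.
# call_model and run_tool are stubs standing in for a real LLM API and real
# tools (browser, shell, code runner, ...), so this runs but does nothing useful.

def call_model(prompt: str) -> str:
    # Stub: a real scaffold would send the full history to an LLM here.
    return "FINISH: demo stub declares the goal complete"

def run_tool(action: str) -> str:
    # Stub: a real scaffold would dispatch to a browser, shell, etc.
    return f"(result of executing: {action})"

def run_agent(goal: str, max_steps: int = 20) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_model("\n".join(history))
        if action.startswith("FINISH"):
            return action                      # the model declares the goal done
        observation = run_tool(action)
        history.append(f"Action: {action}\nObservation: {observation}")
    return "Step budget exhausted"

print(run_agent("Summarize this week's arXiv postings on agent scaffolding"))
```

The prediction here is less about whether such a loop can be written (it already can) and more about whether the model inside it can stay on task across many unsupervised steps.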

I expect that open-source models will continue to advance, and also that training and fine-tuning will continue to get cheaper and easier. I expect open-source models will still not have 'sticky value alignment', meaning that whoever controls an open-source model will be able to shape its behavior however they like. I don't expect this to result in perfect intent-alignment, but I do expect it will be 'pretty good intent alignment', such that the resulting AI agents will be able to be usefully deployed in low-supervision scenarios to enact many-step tasks in pursuit of high-level goals. I expect a typical home computer will not be capable of training an open-source model from scratch in a reasonable amount of time. I do expect that a typical home computer will be capable of fine-tuning a pre-trained open-source model that can be used as part of an agent system.
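As a rough illustration of the 'fine-tune on a home computer' claim, here is a minimal sketch of parameter-efficient (LoRA) fine-tuning of a small open-weights model with the Hugging Face transformers/peft/datasets libraries. The model name, data file, and hyperparameters are assumptions chosen for illustration, not anything from the answer above.

```python
# Sketch: LoRA fine-tuning of a small pretrained model on one consumer GPU.
# Only the low-rank adapter weights are trained, which is what makes
# home-computer fine-tuning plausible. Model name, data file, and
# hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # hypothetical small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach small low-rank adapters to the attention projections.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Any plain-text corpus; "my_domain_data.txt" is a placeholder.
data = load_dataset("text", data_files={"train": "my_domain_data.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
train_set = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, learning_rate=2e-4,
                           fp16=True),  # assumes a CUDA GPU
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # adapters are only a few MB
```

Whether this kind of cheap behavioral reshaping still works on the strongest open models of 2028 is exactly the question; the point of the sketch is that the machinery already exists today.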

I expect that the leading AI labs will have much more powerful AIs and agents than the open-source ones, and lots more of them (due to training continuing to be much more expensive than inference). I expect that inference for medium-capability models will continue to get faster and cheaper. I expect the leading labs to mostly manage to maintain pretty good control over their AIs and over the AI actions taken through APIs. I expect that the more likely source of harm from uncontrolled AIs will be the relatively less powerful open-source AIs getting out of hand, or being used in deliberately harmful ways.

I think there's a possibility that large AI labs will shift to not allowing even API access to their best models, because their best models will be too expensive to run, and would also be too helpful to competitors seeking to improve their own algorithms and models. Allowing your AI to help improve competitors' AIs will, if my RSI predictions are accurate, be too risky to justify the profit and reputation gains. In such a future, the leading labs will all have internal-only models that they use to pursue RSI and generate better future generations of AI.

I expect the best AGIs of 2028 to be good at doing scientific research and also engineering. I expect that they'll be able to build themselves toolkits of narrow AIs which make superhuman predictions in very specific domains. I think this will allow for the possibility of designing and deploying novel tech with unprecedentedly little testing and research time, potentially giving bold actors access to technology which is strongly above current technology in some ways.

An example of such tech could be repurposing existing read/write brain-implant chips to allow a dictator to make 'slave chips' that completely override a victim's motivational centers, making them unwaveringly, enthusiastically loyal to the dictator. Possibly also making them smarter and/or allowing them to interface with other people's brains and/or with AI. If so, this would basically be like the Borg. Such an amalgam of enslaved humans and computers networked together could seem like an appealing way for a technologically-behind dictatorship like North Korea to compete economically and militarily with larger, better-resourced nations. This sounds like a very science-fiction scenario, but in terms of cost and prerequisite technology it is very achievable. What is currently lacking is mainly the knowledge that this would be possible and affordable (which can be overcome by an intent-aligned, ethics-unaligned AGI searching scientific papers for potentially advantageous tech), and the motivation / willingness to do this despite the unethical nature of the experimentation (including the likely low survival rate of the early subjects).

 

Things I'm quite unsure about but seem like possibilities to consider:

AI may accelerate research on nanotech, and thus we might see impressive molecular manufacturing unlocked by 2028.

AI might speed up fusion research, such that fusion becomes widely economically viable.

Robust, superhumanly fast AI pilots may make certain military operations much cheaper and easier. Possibly this would make it much cheaper and easier to deploy wide-ranging missile defense systems. If so, this would upset the uneasy Mutually Assured Destruction détente that currently prevents world powers from attacking each other. This, combined with increased tensions from AGI and the surge in technological development, could result in large-scale conflicts.

Somebody may be foolish enough to unleash an AGI agent just for $@&*$ and giggles, by giving it explicit instructions to reproduce, self-improve, and seek power, and then perhaps deliberately releasing control over it, or letting it escape their control. This probably wouldn't be game over for humanity, but it could result in a significant catastrophe if the initial AI is sufficiently capable.

answer by RogerDearnaley · 2024-08-23T21:08:02.711Z · LW(p) · GW(p)

If:

a) the AI scaling curves hold up, and

b) we continue to improve at agentic scaffolding and other forms of "unhobbling", and

c) algorithmic efficiency improvements continue at about the same pace, and

d) the willingness of investors to invest exponentially more money in training AI each year continues to scale up at about the same rate, and

e) we don't hit any new limit like meaningfully running out of training data or power for training clusters, then:

capabilities will look a lot like, or close to, Artificial General Intelligence (AGI) / Transformative Artificial Intelligence (TAI). Probably a patchy AGI, with some capabilities well into superhuman, most around expert-human levels (some still perhaps exceeded by rare very-talented-and-skilled individuals), and a few not yet at expert-human levels: depending on which abilities those are, it may be more or less TAI (currently long-term planning/plan execution is a really important weakness: if that didn't get mostly fixed by some combination of scaling, unhobbling, and new training data, then it would be a critical lack).

Individually, each of those listed preconditions seems pretty likely, but obviously there are five of them. If any of them fail, then we'll be close to but not quite at AGI, and making slower progress towards it; but we won't be stalled unless basically all of them fail, which seems like a really unlikely coincidence.
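As a purely illustrative calculation (the numbers are assumed, not argued for): if each of the five preconditions held independently with probability $p = 0.8$, then

$$P(\text{all five hold}) = 0.8^5 \approx 0.33, \qquad P(\text{all five fail}) = 0.2^5 \approx 0.0003,$$

so the full conjunction is far from certain even under optimistic assumptions, while a complete stall really would require an unlikely coincidence.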

Almost certainly this will not yet be broadly applied across the economy, but given the potential for order-of-magnitude-or-more cost savings, people will be scrambling to apply it rapidly (during which fortunes will be made and lost), and there will be a huge amount of "who moved my cheese?" social upheaval as a result. As AI becomes increasingly AGI-like, the difficulty of applying it effectively to a given economic use case will reduce to somewhere around the difficulty of integrating and bringing up to speed a single human new hire. So a massive and rapid economic upheaval will be going on. As an inevitable result, Luddite views and policies will skyrocket, and AI will become extremely unpopular with a great many people. A significant question here is whether this disruption will, in 2028, be limited to purely intellectual work, or whether advances in robotics will have started to have the same effect on jobs that also have a manual-work element. I'm not enough of an expert on robotics to have an informed opinion here: my best guess is that robotics will lag, but not by much, since robotics research is mostly intellectual work.

This is of course about the level where the rubber really starts to hit the road on AI safety: we're no longer talking about naughty stories, cheap phishing, or how-to guides on making drugs at home; we're looking at systems capable of committing serious criminal or offensive activities, autonomously or under human direction, at a labor cost at least an order of magnitude below current levels, and an escaped self-replicating malicious agent is feasible and might be able to evade law enforcement and computer security professionals unless they had equivalent AI assistance. If we get any major "warning shots" on AI safety, this is when they'll happen (personally I expect them to come thick and fast). It's teetering on the edge of the existential-risk level of Artificial Super-Intelligence (ASI).

Somewhere around that point, we start to hit two conflicting influences: 1) an intelligence feedback explosion from AGI accelerating AI research, vs. 2) to train a superintelligence you need to synthesize very large amounts of training data displaying superintelligent behavior, rather than just using preexisting data from humans. So we either get a fast takeoff, or a slowdown, or some combination of the two. That's hard to predict: we're starting to get close to the singularity, where the usual fact that predictions are hard (especially about the future) is compounded by it being functionally almost impossible to predict the capabilities of something much smarter than us, especially when we've never previously seen anything smarter than us.

comment by Noosphere89 (sharmake-farah) · 2024-08-24T17:10:48.272Z · LW(p) · GW(p)

The big issue, in terms of AI safety, is likely to be misuse, not alignment issues, primarily because I expect these AGIs to exhibit quite a lot less instrumental convergence than humans out of the box, due to being trained on much denser data and rewards than humans, and I think this allows corrigibility/DWIMAC approaches to alignment to mostly just work.

However, misuse of AIs will become a harder problem to solve. Short term, I expect the solution to be never releasing unrestricted AI to the general public, and only allowing unrestricted AIs for internal use like AI research, unless they have robust resistance to fine-tuning attacks. Longer term, I think the solution will have to involve more misuse-resistant AIs.

Also, in the world you sketched, with my additions, the political values of those who control AIs become very important, for better or worse.

comment by nonzerosum · 2024-08-24T20:04:32.444Z · LW(p) · GW(p)

Can you define what AGI means to you in concrete, observable terms? Will it concretely be an app that runs on a computer and does white-collar jobs, or something else?

Replies from: roger-d-1
comment by RogerDearnaley (roger-d-1) · 2024-08-26T20:51:31.097Z · LW(p) · GW(p)

I am using Artificial General Intelligence (AGI) to mean an AI that is, broadly, at least as good at most intellectual tasks as the typical person who makes a living from performing that intellectual task. If that applies across most economically important intellectual tasks, at a cost lower than a human's, then this is also presumably going to be Transformative Artificial Intelligence (TAI). So the latter means that it would be competitive at most white-collar jobs.
