How can I reconcile the two most likely requirements for humanity's near-term survival?
post by Erlja Jkdf. (erlja-jkdf) · 2022-08-29T18:46:58.083Z · LW · GW · 5 comments
This is a question post.
1. We technologically plateau, due to humanity's questionable ability to adapt to accelerating technological progression.
2. AI development is indefinitely disrupted, as it is likely to result in disaster.
- This is unlikely to be done deliberately. Ongoing attempts to slow down AI development are relatively ineffective; it is more likely, in my opinion, that a basic form of AI developed in the near future will either directly increase every individual's power, or lead to technologies that do the same. An example would be the possible implications of PaLM AI. https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/
- This universal increase in power could be sufficient to disrupt all AI research indefinitely, with the actions of a minority working in unison.
All this considered, what's the most likely situation that could play out?
Answers
answer by Lalartu
We technologically plateau because we reach technological limits. There aren't many important technologies left to invent; things like nanorobots, compact fusion reactors, or Dyson spheres are impossible. Whether AI is developed or not is irrelevant. After a century or two of stagnation, civilization runs out of resources and declines to a pre-industrial level. This [LW · GW] is our future.
5 comments
Comments sorted by top scores.
comment by JBlack · 2022-08-30T01:51:27.508Z · LW(p) · GW(p)
I don't think either of those are the two most likely.
I see the most likely as related to (2), but nothing to do with the likelihood of AI causing disaster. More likely there will be some other disruption to our society that has nothing to do with AI, but prevents us from making sufficient progress to reach superhuman AGI for the near future. Probably we will recover in the less near future, but that's out of scope of the question.
Second most likely I see as being some as yet unknown obstacle that makes AGI unexpectedly unlikely with near future technology. The future is, after all, hard to predict. That doesn't mean that we technologically plateau in general, just that this one problem is much harder than we expect.
Replies from: erlja-jkdf
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-08-30T02:29:18.201Z · LW(p) · GW(p)
A technological plateau is strictly necessary. To give the simplest example: we lucked out on nukes. The next decade alone contains the potential for several existential threats — readily made bioweapons, miniaturized drones, AI abuse — that I question our ability to consistently adapt to, particularly one after another.
We might get such a plateau, if our tech jumps thanks to exponential progress.
Replies from: JBlack
↑ comment by JBlack · 2022-08-30T04:43:08.758Z · LW(p) · GW(p)
No, it is definitely not a strictly necessary requirement for near-term survival. To be "strictly necessary for near-term survival", such future technologies would have to be guaranteed to kill all of humanity, and soon. That's ridiculous hyperbole.
There are risks ahead, even existential risks, from other non-AI technologies, but not to nearly that extent.
Replies from: erlja-jkdf
↑ comment by Erlja Jkdf. (erlja-jkdf) · 2022-08-30T12:20:41.050Z · LW(p) · GW(p)
We're very good at generating existential risks. Given indefinite technological progression at our current pace, we are likely to get ourselves killed.