Why entropy means you might not have to worry as much about superintelligent AI
post by Ron J (ron-j) · 2024-05-23T03:52:40.874Z
The advent of AI superintelligence is imminent, likely within the next decade. This rapid progression toward advanced AI has sparked widespread concern about the potential consequences of such powerful technology. The crux of the matter lies in the alignment problem: how can we ensure that AI behaves in ways that are beneficial to humanity? The simple truth is, we can't implicitly align AI with human values. Good people will create good AI, and evil people will create evil AI. This age-old struggle between good and evil will inevitably play out in the realm of artificial intelligence.
Our best hope lies in creating more good AI than evil AI. The proliferation of benevolent AI systems, designed and operated by individuals and organizations with ethical intentions, can help counterbalance the malevolent uses of AI. However, even if we fail to achieve this balance, there is a fundamental principle that provides a silver lining: entropy.
Entropy, a concept rooted in thermodynamics and information theory, dictates that in any closed system, disorder tends to increase over time. The same kind of limit applies to AI. No matter how advanced or powerful an AI becomes, it will face inherent constraints: even with infinite computational power and memory, an AI cannot simulate an open system faster than the system runs itself. To make predictions ahead of real time, it must rely on heuristics, which inevitably introduce errors.
As time progresses, these errors compound, in the same way that tiny uncertainties in a chaotic system's initial state grow exponentially. Predictions made by even the most advanced AI will, after enough iterations, become indistinguishable from random noise. This inherent uncertainty means that no AI, regardless of its computational prowess, can maintain perfect accuracy indefinitely. Eventually, all long-range predictions degrade into chaos.
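To see this error accumulation concretely, here is a minimal sketch in Python (the logistic map, the starting state of 0.4, and the 1e-10 measurement error are all illustrative assumptions, not anything from the post). Two trajectories of a simple chaotic system start almost identically, yet the prediction error grows roughly exponentially until it saturates at order one, at which point the forecast is no better than noise:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a classic chaotic system (r=4 is fully chaotic)."""
    return r * x * (1.0 - x)

true_state = 0.4           # the system's actual state
predicted  = 0.4 + 1e-10   # the predictor's estimate, off by a tiny measurement error

for step in range(1, 51):
    true_state = logistic(true_state)
    predicted  = logistic(predicted)
    if step % 10 == 0:
        # The gap roughly doubles each step until it saturates at order 1.
        print(f"step {step:2d}: |true - predicted| = {abs(true_state - predicted):.3e}")
```

Running it shows the error climbing from around 1e-7 at step 10 to order 1 by around step 35: past that horizon, the "prediction" carries no information about the true state, no matter how precisely the dynamics were modeled.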
Yet, within this seemingly chaotic landscape, some prediction will still turn out to be right, and it need not come from the most powerful system. This randomness levels the playing field, allowing even a lower-compute rival to best an infinite-compute adversary through sheer luck or through fresher observation of the system's state. This dynamic ensures that the world reverts to the familiar human battles we have always fought and won.
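The same point can be made with a toy Monte Carlo sketch (again Python; the accuracies and the upset_rate helper are hypothetical numbers chosen for illustration). While the stronger predictor is near-perfect, the weaker rival almost never bests it; once both sets of predictions have degraded toward coin flips, upsets approach the maximum possible for independent guessers:

```python
import random

def upset_rate(strong_acc, weak_acc, trials=100_000):
    """Fraction of trials where the weaker predictor is right and the stronger one is wrong."""
    upsets = 0
    for _ in range(trials):
        strong_right = random.random() < strong_acc
        weak_right   = random.random() < weak_acc
        if weak_right and not strong_right:
            upsets += 1
    return upsets / trials

# Short horizon: the strong predictor is near-perfect, so upsets are rare.
print(f"short horizon: {upset_rate(0.99, 0.55):.3f}")   # ~ 0.55 * 0.01 = 0.006
# Long horizon: both have degraded toward coin flips; upsets near 0.25.
print(f"long horizon:  {upset_rate(0.51, 0.50):.3f}")   # ~ 0.50 * 0.49 = 0.245
```

On this toy model, the stronger predictor's edge is decisive only while its accuracy advantage survives; once the prediction horizon outruns both agents, the contest is close to a fair game.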
The concept of entropy assures us that the future of AI will not be dominated by a single, all-powerful entity. Instead, it will be a landscape of competing intelligences, each with its own strengths and weaknesses. This inherent unpredictability preserves the opportunity for human ingenuity and resilience to prevail.
While the rise of AI superintelligence may seem daunting, the principles of entropy should provide a somewhat comforting perspective. The inevitable accumulation of errors in AI predictions ensures that no single intelligence can maintain dominance indefinitely. This inherent uncertainty offers hope that the age-old human struggle between good and evil will continue, and with it, the possibility for good to triumph. As we navigate this brave new world, our focus should be on fostering ethical AI development and leveraging the surprises of entropy to keep the scales balanced.
Comments
comment by Viliam · 2024-05-23T14:36:21.927Z
"The simple truth is, we can't implicitly align AI with human values. Good people will create good AI"
As the term is used here, the alignment problem is that even good people can (and, according to some, almost certainly will) create a harmful AI. By mistake, somewhat analogous to a bug in a computer program, but the problem is much deeper.
"Good people will create good AI, and evil people will create evil AI" is basically the alignment problem solved.