Ideally that would be the case. However, if I had to guess, this roiling mass of Luddites would likely have chosen to boycott anything to do with AI as a result of their job/career losses. We'd like to believe that we'd easily be convinced out of violence. However, when humans get stuck in a certain way of thinking, we become stubborn and accept our own facts regardless of whatever an expert, or expert system, says to us. This future ChatGPT could use this to its advantage, but I don't see how it prevents violence once people's minds are set on it. Telling them "Don't worry, be happy, this will all pass as long as you trust the government, the leaders, and the rising AGI" seems profoundly unlikely to work, especially in America, where telling anyone to trust the government just makes them distrust the messenger even more. And saying "market forces will allow new jobs to be created" seems unlikely to convince anyone who has been thrown out of work due to AI.
And the increasing crackdowns on any one particular group would only be tolerated if there were a controlled burn of unemployment through society. When it's just about everyone you have to crack down on, you have a revolution on your hands. All it takes is one group suffering brutality for it to cascade.
The way to stop this is total information control and deception, which, again, we've decided is totally undesirable and dystopian behavior. Justifying it with "for the greater good" and "the ends justify the means" becomes the same sort of crypto-Leninist talk that the technoprogressives tend to so furiously hate.
This thought experiment requires the belief that automation will happen rapidly, without any care or foresight or planning, and that there are no serious proposals to allow for a soft landing. The cold fact is that this is not an unrealistic expectation. I'd put the probability as high as 90% that I'm actually underestimating the scale of the reaction, failing to account for racial radicalization, religious radicalization, third-worldism, progressivism flirting with Ludditism, conservatism becoming widespread paleoconservative primitivism, and so on.
If there is a more controlled burn— if we don't simply throw everyone out of their jobs with only a basic welfare scheme to cover for them— then that number drops dramatically because we are easily amused and distracted by tech toys and entertainment. It is entirely possible for a single variable to drastically alter outcomes, and right now, we seem to be speedrunning the outcome with all the worst possible variables working against us.
I certainly hope someone can reasonably prove me wrong as well. The best retort I've gotten is that "this is no different than when a young child is forced to go to school for the first time. They have to deal with an extreme overwhelming change all at once that they've never been equipped to deal with before. They cry and throw a tantrum and that's it; they learn to deal with it."
My counter-retort to that was "You do realize that just proves my point, right? Because now imagine, all at once, tens of millions of 4-to-5 year olds threw a tantrum, except they also knew how to use guns and bombs and had good reason to fear they were actually never going to see their parents again unless they used them. Nothing about that ends remotely well."
In my rambling, I intended to address some of these issues but chose to cap it off at a point I found satisfying.
The first point: simply put, I do not see the necessary labor an AGI would need to bring about the full potential of its capabilities requiring any more than 10% of the labor force. Admittedly, this is an arbitrary number with no hard basis in reality.
On the second point, I do not believe we need to see even more than 30% unemployment before severe societal pressure is put on the tech companies and government to do something. This isn't quite as arbitrary, as unemployment rates as "low" as 15% have been triggers for severe social unrest.
As it stands, roughly 60% of the American economy is wrapped up in professional work: https://www.dpeaflcio.org/factsheets/the-professional-and-technical-workforce-by-the-numbers
Assuming only half of that is automated within five years (a good bit of it still requires physical robots), you have already caused enough pain to get the government involved.
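To make the arithmetic explicit: 60% of the workforce in professional roles × 50% of that work automated ≈ 30% of all workers displaced, which lands right at the unrest threshold I mentioned above (the 60% figure is from the DPE link; the 50% share is purely my own assumption).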
However, I do predict that there will be SOME material capability in the physical world. My point is more that the potential for a rebellion to be crushed through robotics capabilities alone will not be there, as most robotic capabilities will indeed be deployed for labor.
I suppose the point there is that robotics capabilities are going to hit a "superchilled" point at around the exact same time AGI is likely to arrive, in the latter half of the 2020s: a point when robots are advanced enough to do labor and deployed at a large enough scale to do so, but not so overwhelmingly that literally every possible physical job is automated. Hence why I kept the estimates down to around 50% unemployment at most, though possibly as high as 70% if companies aggressively try futureproofing themselves for whatever reason.
Furthermore, I'm going more off the news that companies are beginning to utilize generative AI to automate their workforce (mostly automating tasks at this point, but this will inevitably generalize to whole positions), and this despite the technology not yet being fully mature for deployment (e.g. ChatGPT, Stable Diffusion/Midjourney, etc.).
https://finance.yahoo.com/news/companies-already-replacing-workers-chatgpt-140000856.html
If it's feasible for companies to save some money via automation, they are wont to take the opportunity. Likewise, I expect plenty of businesses to automate ahead of time in the near future as a result of AI hype.
The third point is one which I intended to address more directly indeed: that the prospect of a loss of material comfort and stability is in fact a suitable emotional and psychological shock that can drive unrest and, given enough uncertainty, a revolution. We saw this as recently as the COVID lockdowns in 2020 and the protests that arose following that March (for various reasons). We've seen reactions to job loss be similarly violent at earlier points in history. Some of this was buffered by the prevalence of unions, but we've successfully deunionized en masse.
It should also be stressed that we in the West have not had to deal with such intense potential permanent unemployment. In America and the UK, the last time the numbers were anywhere near "30%" was during the Great Depression. Few people in those times expected such numbers to remain so high indefinitely. Yet in our current situation, we're not just expecting 30% to be the ceiling; we're expecting it to be the floor, and to eventually reach 100% unemployment (or at least 99.99%).
I feel most people wouldn't mind losing their jobs if they were paid for it. I feel most people wouldn't mind having comfortable stability through robot-created abundance. I merely present a theory that all of this change coming too fast, before we're properly equipped to handle it, in a culture that does not at all value or prepare us for a lifestyle anywhere similar to what is being promised, is going to end very badly.
There are any number of other things which might already have caused a society-wide luddite revolt - nuclear weapons, climate change, Internet surveillance - but it hasn't happened.
The fundamental issue is that none of these have had a direct negative impact on the financial, emotional, and physical wellbeing of hundreds of millions of people all at once. Internet surveillance is the closest, but even then, it's a somewhat abstract privacy concern; climate change eventually will, but not soon enough for most people to care. This scenario, however, would be actively and tangibly happening, and at accelerando speeds. I'd also go so far as to say these issues have merely built up like a supervolcanic caldera over the decades: many people do care about them, but there has not been a major trigger to actually protest en masse as part of a Luddite revolt over them.
The situation I'm referring to is precisely the long-theorized "mass unemployment from automation," and current trends suggest this is going to happen very quickly rather than over longer timeframes. If there has ever been a reason for a revolt, taking away people's ability to earn income and put food on the table is it.
I expect there will be a token effort to feed people to prevent revolt, but the collision between the expectation that things are not going to change and the prospect of wild, uncontrollable change will be the final trigger. The promise that "robots are coming to give you abundance" is inevitably going to go down badly. It will be a major culture war topic, and one that I don't think enough people will believe even in the face of AI and robotic deployment. And again, that's not bringing up the psychosocial response to all this, where you have millions upon millions who would feel horribly betrayed by the prospect of their expected future immediately going up in smoke, their incomes being vastly reduced, and the prospect of death (whether by super-virus, disassembly, or mind-uploading, the latter of which is indistinguishable from death for the layman). And good lord, that's not even bringing up cultural expectations, religious beliefs, and entrenched collective dogma.
The only possible way to avoid this is to time it perfectly. Don't automate much right up until AGI's unveiling. Then, while people are still reeling from the shock, automate as much as possible, and then deploy machines to increase abundance.
Of course, the AGI likely kills everyone instead, but if it works, you might be able to stave off a Luddite rebellion if there is enough abundance to satisfy material comforts. But this is an almost absurd trickshot that requires capitalists to stop acting like capitalists for several years, then discard capitalism entirely afterwards.
In 2017, I had an epiphany about synthetic media that accurately called our current condition with generative AI: https://www.reddit.com/r/artificial/comments/7lwrep/media_synthesis_and_personalized_content_my/
I'm not calling myself a prophet, or claiming that I can accurately predict the future because I managed to call this one technology. But if I could ask a muse above for a second lightning strike, I'd have it retroactively applied to an epiphany I had in recent days about what a Singularitarian future looks like in a world where we have a "Pink Shoggoth"— that is, the ideal aligned AGI.
The alignment question is going to greatly determine what our future looks like and how to prepare for it.
Cortés was not aligned to the values of the Aztecs, but he had no intention of completely wiping them out. If Cortés had been aligned with Aztec values, he would likely have respected their autonomy more than anything. This is my default expectation of an aligned AGI.
Consider this: a properly aligned AGI almost certainly will decide not to undergo an intelligence explosion, as the risks of alignment coming undone and destroying humanity, life on Earth, and even itself are too great. An aligned AGI will almost certainly treat us with the same care that we treat uncontacted tribes like the Sentinelese, with whom we currently do have successful alignment. That means it almost certainly will not force humans to be uploaded into computers and, if anything, would likely exist more as a background pseudo-god supervising life on Earth, generally keeping our welfare high and protecting us from mortal threats, but not interfering with our lives unless direct intervention is requested.
How do you prepare for life in such a world? Quite simply, by continuing whatever you're doing now, as you'll almost certainly have the freedom to continue living that way after the Pink Shoggoth has been summoned. Indeed, in my epiphany about this aligned superintelligence's effects on the world, I realized that it might even go so far as to gradually change society so as to not cause a sudden psychological shock to humanity. Meaning if you take out a 30-year loan today, there's a sizable chance the Pink Shoggoth isn't going to bail you out of jail if you decide to stop paying it back at the first hint of news that the summoning ritual was a success. Most humans alive today are not likely to seek merging with an AGI (and it's easy to forget how many humans are alive and just how many of those humans are older than 30).
In terms of media, I suppose the best suggestion I can give is "Think of all your childhood and adult fantasies you've always wanted to see come true, and expect to actually have them be created in due time." Likewise, if you're learning how to write or draw right now, don't give up, as I doubt that such talents are going to go unappreciated in the future. Indeed, the Pink Shoggoth being aligned to our values means that it would promote anthropocentrism whenever possible— a literal Overmind might wind up being your biggest artistic benefactor in the future, in the age when even a dog could receive media synthesized to its preferences.
I for one suffer from hyperphantasia. All my dreams of synthetic media came from me asking "Is it possible to put what's in my head on a computer screen?" and realizing that the answer is "Yes." If all my current dreams come true, I can easily come up with a whole suite of new dreams with which I can occupy myself. Every time I think I'm getting bored, something new comes along and reignites those interests, even if it's "the exact same thing as before, but slightly different." Not to mention I can also amuse myself with pure repetition; watching, listening, playing the same thing over and over and over again, not even getting anything new out of it, and still being amused. Hence why I have no fear of growing bored across time; I already lament that I have several dozen lifetimes' worth of ideas in my head and only one lifetime to experience them, in my current state of mind, not including the past states of mind I've had that possessed entirely different lifetimes' worth of ideas.
Fostering that mindset could surely go a long way to help, but I understand that I'm likely a freak in that regard and this isn't useful for everyone.
For a lot of people, living a largely retired life interacting with family, friends, and strangers in a healthy and mostly positive way is all they really want.
In a post-AGI society, I can't imagine school and work exist in anywhere near the same capacity as they do now, but I tend to stress to people that, barring forcible takeover of our minds and matter, humans aren't going to magically stop being humans. And indeed, if we have a Pink Shoggoth, we aren't going to magically stop being humans anytime soon. We humans are social apes; we're still going to gather together and interact with each other. The only difference in the coming years and centuries is that those who have no interest in interacting with other humans will have no need to. Likewise, among those humans who do keep interacting, familiar behaviors will eventually emerge again; you get some humans taking on jobs again, though likely now entirely voluntarily.
That's not to say the AGI denies you a sci-fi life if you want to live one. If you want to live in an off-world colony by Titan, or if you want to live in a neighborhood on Earth perpetually stuck in the 1990s and early 2000s, that's entirely up to you.
And that's why it's so hard to say "How do you prepare for this new world?" If all goes well, it literally doesn't matter what you do; how you live is essentially up to you from that point on, whether you choose to live as a posthuman, as an Amish toiler, or as anything in between.
The arrival of an aligned AGI can essentially be described as "the triumph of choice" (I almost described it as "the triumph of will" but that's probably not the best phrasing).
If we fail to summon a Pink Shoggoth and instead get a regular shoggoth, even one that's directly aligned, this question is moot, as you're almost certainly going to die or be disassembled at some point.
By nature, a Pink Shoggoth recognizes that the prospect of losing transitive alignment is dangerous, which is why it might (and probably will) choose against recursive self-improvement, and why I call the Pink Shoggoth "the ideal dream scenario."
Or to put in clearer terms: if alignment were to fail and the AGI does something that kills us all at some point, then by definition, it was not a Pink Shoggoth. The Pink Shoggoth is specifically defined as not just an aligned AGI but "the dream outcome for alignment."
There is a difference between a truly benevolent superintelligence and an aligned superintelligence.
Alignment doesn't necessarily mean Christlike benevolence.
Indeed, as I posited up above, we actually have a real-life analog for what "alignment" looks like: the Sentinelese.
https://en.wikipedia.org/wiki/Sentinelese
The power imbalance between modern civilization and the Sentinelese is so profound that one could easily imagine it as a crude preview of what to expect between a superintelligence and humanity. The Sentinelese offer virtually no benefit to India or the West, and in fact occupy an island that could be used for other purposes. The Sentinelese have actively killed people who have gotten too close to them. They live pseudo-Paleolithic lives of naturalistic peril. They lack modern medicine and live on hunter-gatherer instincts. They are not even capable of creating fire by themselves. By a cold cost-benefit calculus, the Sentinelese serve no purpose and "should" either be wiped out or forcibly assimilated into modern culture.
And yet we don't do this, because we respect their autonomy and right to live as they so choose, even if it's a less "civilized" way of life. If members of the Sentinelese choose to integrate into modern society, we're more than open to them joining us. But the group is not completely uncontacted; they are aware we exist. They simply choose to stay insulated, even with the knowledge that we have advanced technology.
This is what alignment looks like. It may not be "pretty" or "benevolent" but we as a society are aligned to the values of the Sentinelese.
If we did have greater benevolence, we would never tolerate the Sentinelese living this way, but it would come at the cost of their autonomy and right to live as they wish. Indeed, we did have such a mindset once upon a time, and it is widely seen as "colonialist" and detrimental to the Sentinelese by modern standards. If the Sentinelese choose to live this way, who are we to decide for them, even if we know better?
It's entirely possible that an aligned superintelligence would have very similar thoughts about humanity as a whole.
I thought it through further from a Singularitarian perspective and realized that probably only a relative handful of humans will ever deliberately choose to upload themselves into computers, at least initially. If you freed billions from labor, at least half of them would probably choose to live a comfortable but mundane life in physical reality at an earlier stage of technological development (anywhere from Amish levels all the way to "living perpetually in the Y2K epoch").
Because let's think about this in terms of demographics. Generally, the older you get, the more conservative and technophobic you become. This is not a hard-set rule, but a general trend. Millennials are growing more liberal with age, but they're not growing any less technophobic; it tends to be Millennials, for example, who are leading the charge against AI art and the idea of automating "human" professions. Generation Z is the most technophilic generation yet, at least in the Anglosphere, but is only roughly 1/5 of the American population. If any generation is going to upload en masse, it will likely be the Zoomers (unless, for whatever reason, mind-uploading turns out to be the literal only way to stave off death; then, miraculously, many members of the elderly generations will "come around" to the possibility in the years and months preceding their exit).
Currently, there are still a few million living members of the Greatest Generation kicking around on Earth, and even in the USA, they're something around 0.25% of our population:
https://www.statista.com/statistics/296974/us-population-share-by-generation/
If we create an aligned AGI in the next five years (again, by some miracle), I can't see this number dropping off to anywhere below 0.10%. This generation is likely the single most conservative of any still living, and almost without question, 99% of this generation would be radically opposed to any sort of cybernetic augmentation or mind uploading if given the choice. The demographics don't become that much more conducive towards willing mind-uploading as you move to younger generations, especially as even Generation X becomes more conservative and technophobic.
Assuming that even with AGI, it takes 20+ years to achieve mind-uploading technology, all that will have happened in that time is the Greatest Generation and most of the Silent Generation dying off. It would take extensive persuasion and social engineering for the AGI to convince the still-living humans that a certain lifestyle, and perhaps mind-uploading, is more desirable than continuing to live in physical reality. Perhaps far from the hardest thing an AGI would have to do, but again, this all comes back to the fact that we're not dealing with a generic superintelligence as commonly imagined, but an aligned superintelligence, one which values our lives, autonomy, and opportunity to live. If it does not value any one of those things, it cannot be considered truly "aligned." If it does not value our lives, we're dead. If it does not value our autonomy, it won't care if we are turned into computronium or outright exterminated for petty reasons. If it does not value our opportunity to live, we could easily be stuck into a Torment Nexus by a basilisk.
Hence why I predict that an aligned superintelligence will, almost certainly, allow for hundreds of millions, perhaps even billions, of "Antemillennialists." Indeed, the best way to describe them would be "humans who live their lives, but better." I personally would love to live in full-dive VR indefinitely, but I know for a fact this is not a sentiment shared by 90% of the people around me in real life; my own parents are horrified by the prospect, my grandparents actively consider it Satanic, and others who do consider it possible simply don't like the way it feels. Perhaps when presented with the technology, they'll change their minds, but there's no reason to deny their autonomy just because I believe I know better than they do. Physical reality is good enough for most people; a slightly improved physical reality is optimal.
I think of this in terms similar to how we humans now treat animals. Generally, we're misaligned to most creatures on Earth, but for the animals we actively care about and try to assist, we tended to put them in zoos until we realized this caused needless behavioral frustration because they were so far out of their element. Animals in zoos technically live much "better" lives, and yet we've decided that those animals would be more satisfied, according to their natures, if they lived freely in their natural environments. We now realize that, even if it might lead to greater "real" suffering due to the laws of nature, animals are better left in the wild or in preserves, where we actively contribute to their preservation and survival. Only those that absolutely cannot handle life in the wild are kept in zoos or in homes.
If we humans wanted, we absolutely could collect and put every chimpanzee into a zoo right now. But we don't, because we respect their autonomy and right to life and natural living.
I see little reason for a Pink Shoggoth-type AGI to not feel similarly for humans. Most humans are predisposed towards lifestyles of a pre-Singularity sort. It is generally not our desire to be dragged into the future; as we age, most of us tend to find a local maximum of nostalgic comfort and remain there as long as we can. I myself am torn, in fact, between wanting to live in FIVR and wanting to live a more comfortable, "forever-2000s/2010s" sort of life. I could conceivably live the latter in the former, but if I wanted to live the latter in physical reality, a Pink Shoggoth surely would not stop me from doing so.
In fact, that could be a good alignment test: in a world where FIVR exists, ask the Pink Shoggoth to let you live a full life in physical reality. If it's aligned, it should say "Okay!"
Edit: In fact, there's another bit of evidence for this: uncontacted tribes. On paper, there's zero reason to leave the people of North Sentinel Island as they are, for example. Yet the only people arguing that we should forcibly integrate them into society tend to be seen as "colonialist altruists" who feel that welfare is more important than autonomy. Our current value system says that we should respect the Sentinelese's right to autonomy, even if they live in conditions we'd describe as "Neolithic."
The Sentinelese offer little to nothing useful to us, even though the government of India could realistically use North Sentinel Island for many purposes. The Sentinelese suffer an enormous power imbalance with outside society. The Sentinelese are even hostile towards the outside world, actively killing those who get too close, and yet still we do not attempt to wipe them out or forcibly integrate them into our world. Even when the Sentinelese are put into a state of peril, we do not intervene unless they make active requests for help.
By all metrics, our general society's response to the Sentinelese is what "alignment to the values of a less-capable group" looks like in practice. An aligned superintelligence might respond very similarly to our species.
I wonder if there is any major plan to greatly expand the context window? Or perhaps add a sort of "inner voice"/chain of thought for the model to write down its intermediate computational steps to refer to in the future? I'm aware the context window increases with parameter count.
Correct me if I'm wrong, but a context window of even 20,000 memory tokens could be enough for it to reliably imitate a human's short-term memory and consistently pass a limited Turing Test (e.g. the sort Eugene Goostman barely scraped by around 2014), as opposed to the constant forgetting of LSTMs and Markov chains. Sure, the Turing Test isn't particularly useful as an AI benchmark, but the market for advanced conversational agents could be a trillion-dollar business, and the average Joe is far more susceptible to the ELIZA Effect than we commonly assume.
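To make the short-term-memory point concrete, here's a minimal Python sketch of a chat loop that keeps only the most recent turns inside a fixed token budget; anything older simply falls out of the bot's "memory," which is exactly what a larger context window would mitigate. The generate_reply stub and the rough 4-characters-per-token estimate are hypothetical stand-ins, not any particular model's API or tokenizer.

    CONTEXT_BUDGET_TOKENS = 20_000  # the hypothetical 20k-token window discussed above

    def estimate_tokens(text: str) -> int:
        """Very rough token estimate (~4 characters per token for English text)."""
        return max(1, len(text) // 4)

    def generate_reply(prompt: str) -> str:
        # Stand-in for a real language model; just echoes the newest line it can "see".
        return prompt.splitlines()[-1] if prompt else "..."

    def build_prompt(history: list[str], budget: int = CONTEXT_BUDGET_TOKENS) -> str:
        """Keep only the most recent turns that still fit inside the context budget."""
        kept, used = [], 0
        for turn in reversed(history):      # walk backwards from the newest turn
            cost = estimate_tokens(turn)
            if used + cost > budget:
                break                       # anything older falls out of "memory"
            kept.append(turn)
            used += cost
        return "\n".join(reversed(kept))

    def chat_turn(history: list[str], user_message: str) -> str:
        history.append(f"User: {user_message}")
        reply = generate_reply(build_prompt(history))  # hypothetical model call
        history.append(f"Bot: {reply}")
        return reply

The point of the sketch is just that the budget, not the model, decides how far back the conversation "remembers"; a 20,000-token budget covers vastly more of a conversation than the effective memory of an LSTM or a Markov chain.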
So perhaps a "proto-AGI" is a better term to use for it. Not quite the full thing just yet, but it shows clear generality across a wide number of domains. If it can spread out further and become much larger, as well as gain recursivity (which might require an entirely different architecture), it could become what we've all been waiting for.
It appears this hypothesis has come true with GPT-3 and the new API.
On a fundamental level, I agree. However, there are some aspects of this technology that make me wonder if things might be a tad bit different and past experiences may not accurately predict the future. Artificial intelligence is a different beast from what we are used to, which is to say "mechanical effort".
When it comes to multimedia deepfakes, the threat is less "people believe everything they see" and more "people no longer trust anything they see". The reason we trust written text and photographs is that most of us have never dealt with faked letters, and most altered photos are very obviously altered. What's more, there are consequences for faking them. When I was a child, I sometimes had my senile grandmother write letters detailing why I was "sick" and couldn't come to school, or had her sign homework under my father's name. Eventually, the teachers found out and stopped trusting any letter I brought in, even the ones legitimately from my father.
Great takes on all this, better than a typical reply.