Humans' roles in a post-AGI era
post by Juliezhanggg · 2025-02-01T21:16:36.809Z
Humans' sense of meaning is usually derived from feeling special, on an individual or collective basis. We want to prove ourselves different through tastes, hobbies, and career choices. In the past, each of us inherited a genetic lottery, so we came to think of ourselves as having talents, a perceived gift for being good at something. Our civilization created cultural tribes, larger collective intelligences governed by countries. Now, we believe we can enhance the human experience through VR, AR, BCI, and gene editing to fundamentally break our limits.
I used to not take AI seriously, with the underlying assumption that AI can't innovate, and might never be conscious or truly understand. But after months of inquiry and research in the AI and neuroscience communities, I have completely changed my mind. I tried hard to brainstorm what human value really is, and what last safety net humans can rely on that AI will never replace. But every conclusion I reached only lasted a week or two before an obvious counterargument proved it wrong.
A few years ago, early in my career, I was sold on the vision that robotics would automate our society, mentored by a founder who has devoted his life to researching and developing robotic hands. Last year, I updated my view and came to believe that humanoids will be the future of robotic hardware and will eventually displace all other forms of automation machinery. For the record, since around that time I have been trying to answer an important question: what is the essence of intelligence, and will AI have emotions and be self-aware the way we are? The question matters so much to me that I paused my longevity research on brain replacement to answer it.
I knew, roughly, that math could be the basis of intelligence, and Vitalik gave me a few pointers on his view of intelligence during our chat. It was quite shocking to me to consider that human intelligence may be no different in kind from an LLM's; if intuition is the marker, an LLM can even seem more human than human, since RLHF tunes it to generate outputs we find agreeable, much as emotions do for us. Biological tissue is surely not the only substrate for emotion or intelligence, and Vitalik believes emotions are algorithmic. A fun example: a system running at a clock speed of one cycle per minute, implemented by people passing cards around by hand, could have an emergent property equivalent to emotion. I didn't buy this right away when I heard it, but after much back-and-forth questioning I am now leaning toward it. Emotion is not unique to humans, and neither is intelligence.
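To make the substrate-independence intuition concrete, here is a minimal toy sketch (my own illustration, not Vitalik's): a tiny "affect" state machine whose update rule is pure math. Whether each cycle is executed by a CPU in nanoseconds or by people passing cards once per minute, the trajectory of states is identical; the clock speed and the substrate are irrelevant to what is computed.

```python
import time

def update_affect(state, stimulus):
    """One clock cycle of a toy 'emotion' update rule.
    The rule is pure arithmetic: it doesn't know what substrate runs it."""
    valence, arousal = state
    valence = 0.9 * valence + 0.1 * stimulus        # mood drifts toward stimulus
    arousal = 0.8 * arousal + 0.2 * abs(stimulus)   # surprise raises arousal
    return (valence, arousal)

def run(stimuli, seconds_per_cycle=0.0):
    """Run the same computation at any clock speed.
    seconds_per_cycle=60 models people passing cards once a minute;
    0 models a CPU. The state trajectory is identical either way."""
    state = (0.0, 0.0)
    for s in stimuli:
        time.sleep(seconds_per_cycle)
        state = update_affect(state, s)
    return state

# Same inputs, wildly different substrates/speeds, same final state:
fast = run([1.0, -0.5, 0.3])                          # CPU speed
# slow = run([1.0, -0.5, 0.3], seconds_per_cycle=60)  # card-passing speed
```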
Speaking of intelligence, "super babies", genetically enhanced intelligence in humans, has been a hot topic for a while as a way to de-risk AI. It may sound appealing in the short term, but silicon-based intelligence will be much smarter than us given the same development time. No human has ever hit a ceiling of perfect intelligence, but whatever that ceiling is, it is constrained by biology. In a longer race between human and machine intelligence, the machine is probably the better substrate, because compute and inference chips improve in speed and cost every year.
Gared, a guy I often met at Lighthaven, had a six-hour conversation with me one day. He pointed out that, ultimately, intelligence can be defined as the combination of probability theory, the search for hypotheses, and optimization theory, the search for actions that maximize expected utility. He is one of the smartest people I've ever met, and his framing aligned with Vitalik's view that understanding intelligence requires a mathematical way of seeing what is going on in vector spaces, together with an understanding of how the math maps to reality.
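One way to write that definition down as a single formula (my formalization of the idea, not Gared's exact words): an intelligent agent maintains a posterior $P(h \mid D)$ over hypotheses $h$ given data $D$, and picks the action

$$a^* = \arg\max_{a} \sum_{h} P(h \mid D)\, U(a, h),$$

where the posterior is the probability-theory half (searching over hypotheses about how the world is) and the expected-utility maximization over $U(a, h)$ is the optimization half (searching over actions for results).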
With a unified definition of intelligence in hand, we can look at the industry level of current AI progress and see more clearly what is going on. Everyone is fighting for more compute, more data, and more capital to win the race to the most intelligent model, one that can self-iterate to make itself smarter. That is what's happening among the model developers. Dario has predicted that AI smarter than a human Nobel prize winner could arrive by 2026, just one year from now.
Yes, in the medium term, humans will leverage AI to boost the value they generate. Long term, when a model can self-iterate without much human supervision, the endgame approaches. Less and less human agency will be required to correct or instruct the system, until one day humans are no longer needed in the loop. The way AI replaces humans is gradual: first those with less experience or domain expertise (less data), then up the ladder to PhDs, scientists, and C-level executives.
Stephen Wolfram has said that even if everything has been automated by computation, we could still have an excellent inner experience and a meaningful existence. Automation obsoletes some jobs but enables others. Wolfram doesn't think everything will be fully automated, because of computational irreducibility: something unpredictable will always happen, something that automation doesn't cover. If there is anything we care about, we will have to do something that goes beyond automation.

Marc Andreessen recently posted an interesting tweet: he thinks AI won't cause mass unemployment, because one thing will block its progress, regulation effectively making it illegal for most of the economy. Everything controlled by the government is rising in price, including healthcare, education, and housing; the regulated parts of the economy get more expensive while the less regulated parts get cheaper. This argument cuts against the general perception that AI will take over the whole economy, since people will still be needed inside the regulated zone. The over-regulation problem is, in turn, being attacked by special economic zone projects that push for lighter local regulation, such as Prospera in Honduras.
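Computational irreducibility has a classic concrete illustration (Wolfram's own favorite example, though the sketch below is mine): the Rule 30 cellular automaton. No known formula predicts the state at step n directly; the only way to know it is to run all n steps, which is exactly the sense in which some processes can't be shortcut by prediction or automated away.

```python
def rule30_step(cells):
    """One step of the Rule 30 cellular automaton:
    new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def rule30(n_steps, width=31):
    """Print the evolution from a single seed cell. There is no known
    closed-form shortcut to row n: every intermediate row must be computed."""
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell in the middle
    for _ in range(n_steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)
    return cells

rule30(15)
```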
Even if humans will always be needed for the reasons above, I do think the intellectual bar for human labor will rise. The easier a job is, the faster it gets replaced by automation. Humans will need to enhance their intelligence and become better versions of themselves, which means those who can't deal with complex, unpredictable reality will retire from the workforce. The jobs left for humans are always one rung above AI: if AI writes scripts for humans, humans manage and validate them; if agents do the repetitive work, humans manage the agents and assign them roles. With each technical breakthrough, humans move up to a more enjoyable, challenging position. Humans will need new education and training to become better planners, predictors, and managers, mayors of highly complex automation systems, by becoming efficient masters of tools including BCI and AI.
Anthropic CEO Dario's blog lays out his thoughts on the after-effects of having geniuses living in a data center (AGI). What is the meaning of being human? How do we survive economically? Dario thinks humans do things such as running, walking, art, gaming, dating, and family not for economic return but purely for the sake of enjoyment, so "do what you love" will keep its charm in a post-AGI era. However, doing what you love will not lead to economic success, in his view. This aligns with recent research by Epoch AI: in the short term the economy booms, with humans leveraging AI for something like 10x growth. As long as AI can only do 90% of a job, the remaining 10% gives humans enormous leverage. A new job category emerges, the amplifier: humans complementing what AI is good at.

The human economy will continue to make sense for a while after we reach a country of geniuses in a data center. But at some point, our current economic setup will no longer make sense if AI becomes broadly effective and cheap. We have already moved from hunting to farming to feudalism to industrialism; we need to start envisioning something entirely new to answer what's next. Universal basic income might only be a small part of the solution. Dario describes a capitalist economy of AI systems that distribute resources to humans via some allocation algorithm; or the economy could run on Whuffie points; or humans may continue to matter in unexpected ways regardless. He also calls for brave, dedicated people to stand up and put in an enormous amount of effort and struggle to achieve a positive version of the future.
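A rough way to see why that residual 10% gives humans leverage (my own back-of-the-envelope, borrowing Amdahl's law, not a calculation from Dario or Epoch): if AI speeds up the fraction $p = 0.9$ of a task that it handles by a factor $s$, the overall speedup is

$$S = \frac{1}{(1-p) + p/s},$$

and even as $s \to \infty$, $S \to 1/(1-p) = 10$. The human-only 10% becomes the bottleneck of the whole pipeline, so the people who can do that 10% capture outsized value, right up until AI can do that part too.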
The research by Epoch AI suggests AGI could eventually drive wages below subsistence level, but before that, wages will boom for a while and then enter a sustained decline. The core argument is that capital acts as a production bottleneck: when you add more workers without a corresponding increase in other essential resources, each additional worker contributes less. This parallels Dario's argument that superintelligence, as a commodity, will be constrained by the speed of the outside world, the need for data, constraints imposed by humans, and physical laws when making progress in the real world. Historically, the boom in wages came after the Industrial Revolution, when the rate of innovation increased, eventually outpacing population growth and producing sustained increases in the marginal productivity of labor. The introduction of AGI into the economy will also drive innovation at a faster pace, but not by enough to compensate for AI's downward pressure on wages. The prediction is that human wages crash before 2045, when technological progress starts to slow down. Counting on wages alone for future generations to survive is unlikely to be reliable.
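A stylized version of the bottleneck argument, using a standard Cobb-Douglas production function (my illustration, not Epoch's exact model): with output $Y = A K^{\alpha} L^{1-\alpha}$, the competitive wage is the marginal product of labor,

$$w = \frac{\partial Y}{\partial L} = (1-\alpha)\, A \left(\frac{K}{L}\right)^{\alpha}.$$

AGI effectively multiplies the labor supply $L$ while capital $K$ adjusts more slowly, so $K/L$ falls and wages fall with it; only sustained growth in $A$ (innovation) or $K$ (capital accumulation) can offset the decline, which is why the wage trajectory depends on how long rapid technological progress lasts.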
Different predictions are made by different influential people; each references the history of the Industrial Revolution and other pivotal timelines, and each integrates a unique understanding of regulation, economics, physical law, and intelligence to map out the future. The common thread is that, in the short term after AGI penetrates the economy, humanity maintains its current order, until that order no longer makes sense because AI is comparatively cheaper and more effective. The Industrial Revolution paved the way for capitalism, concentrating wealth in the hands of a few, and gave rise to the new economic theories of laissez-faire capitalism and socialism. It also led to consumerism, further technological progress, and a new social class, the middle class. All of this came from the factory system and the use of machines that increased production efficiency.
AI will probably do something similar to us this time: it will concentrate wealth in the hands of people who leverage an AI workforce for profit, which might strengthen capitalism, but it will also let new startups with less capital and smaller teams rise into a new social class (everyone can have AI employees working for them). More individuals will be able to flip from employee to boss. Ideas will finally be enough without skills or much capital; passion will matter more than knowledge. This could lead to an abundance of content, products, and services, and the pursuit of technology-driven innovation will be more in demand. Talent-intensive companies like consulting firms might see their valuations more affected than resource-intensive companies.
With DeepSeek disrupting the AI model space last week, the future of foundation models has been stirred up. If companies can push the training cost of powerful reasoning models low enough, maybe some of them can train a Nobel-prize-level genius that innovates on a reasonable budget. And if more companies open-source their models, there will come a time when model providers are abundant, with variations catering to every niche demand. Building applications on top of free, effective models will have lasting significance for the market.
Opportunities and existential risks arrive at the same time. As someone who doesn't have millions of Nvidia chips at hand, writing is my way to influence. The purpose of this article is to share how my understanding of AI has evolved, and to offer my interpretation of what's actually going on, in the hope that it can help us better predict, envision, and take action toward our preferred outcome.