How will they feed us

post by meijer1973 · 2023-06-01T08:49:51.645Z · LW · GW · 3 comments

Contents

  Universal basic income 
  Humans would still have jobs 
  Self-sufficiency 
  The AI overlords will provide
  Humans will be replaced by AI 
  Simulation and uploads 
  The X-risk remains
  Conclusion

I have been wondering how, in the more optimistic future scenarios, people would be fed, or get the money to buy food. A common topic in discussions between pro- and anti-doomers is how the AI would kill us. It is a fair point: killing 8 billion people seems difficult at first glance. But if the AI does not kill us, how will we eat? Our current agro-industrial food supply is fragile, and feeding 8 billion people takes a lot of resources. So in the utopian view, that question needs answering. There are several proposals.

Universal basic income 

The AI would do most or all of the work, so there is not much left for humans to do. Therefore we build a welfare system with a universal basic income or some variant of it. It seems like a rather fragile system, because in this situation humans would have no economic value and so very little bargaining power. There is a great dependency on the AIs and very little control in the hands of humanity.

One threat to this model is that it could be very hard to find a broad enough tax base, because of tax evasion. Especially in the early stages of UBI, companies could move to more tax-friendly countries, or simply find loopholes and not pay taxes (much like the system we have today). In the current system taxes are mostly paid by the working middle and upper classes (taxing labour is a lot easier than taxing profits or wealth); wealthy people and large companies are very hard to tax. This is an open problem that needs solving. Simply declaring that we will tax companies that use neural networks, or something like that, is not a solution: companies are superintelligent at tax evasion. I cannot tell you how they will avoid the tax, only that they probably will. Unlike the speculative question of how a superintelligence would kill us, there are plenty of examples here. The default is "pay less or no taxes", despite large efforts to make companies pay.

Humans would still have jobs 

As things stand, cognitive work is rapidly being automated while there is still plenty of physical labour to be done. So maybe there are enough jobs, enough to provide value and earn money at a subsistence level or above. This option does not get much attention in utopian scenarios because it is not very interesting. But we might have food.

Bargaining power for higher wages would probably be very low, because high-value tasks are done by AI and a select few humans. So this scenario is probably pretty dystopian. A very real threat is that eventually all the jobs get automated anyhow, and then we are back in a scenario of great dependency and fragility.

A more dystopian variant is that only the few humans who provide useful work are given food and shelter, perhaps because robots are more expensive to operate than human workers.

Self-sufficiency 

I had to add this option to make the overview complete. When you have no job and receive no welfare, there is the option of growing your own food. Productivity would be very low: industrial agriculture raised yields per acre by a factor of roughly 5 to 10, so reverting to subsistence farming would still mean that a lot of people have no food. This is obviously a dystopian scenario, and the long-term outlook is bleak. Even if the AI overlords neither kill us nor bother with us, eventually the land will be used by the AI (e.g. its biomass harvested for energy). Or, because the AIs do not need an ecosystem, pollution and environmental degradation will threaten the human habitat.
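A minimal back-of-the-envelope sketch of that claim, taking the 5-10x yield factor at face value and assuming the same land area stays in use (illustrative numbers, not data):

```python
# Rough sketch: if industrial agriculture multiplied yields per acre by 5-10x
# (the post's assumption), reverting to subsistence farming on the same land
# supports proportionally fewer people. Illustrative, not measured data.

population_fed_today = 8e9            # people fed by industrial agriculture

for yield_multiplier in (5, 10):      # industrial yield / subsistence yield
    supportable = population_fed_today / yield_multiplier
    print(f"At a {yield_multiplier}x yield gap, the same land feeds "
          f"~{supportable / 1e9:.1f} billion people")
```

At a 5-10x gap that is roughly 0.8 to 1.6 billion people, so most of humanity would indeed go unfed.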

The AI overlords will provide

Let's get more creative: no capitalism, no taxes, none of the machinery of the UBI system. The AIs produce and provide. The vulnerability is that feeding 8 billion people takes a lot of resources, and that would be a great sacrifice for the AI overlords. Here we do not just need alignment of the "don't kill the humans" type; we need alignment in the sense that the AI would sacrifice valuable energy and resources for a humanity that no longer has productive value.

The AI would need a strong intrinsic motivation to keep humans alive, which sets the bar for the alignment problem very high. Only intrinsic motivation would work, and it would need to run so deep that the AI builds sub-agents that retain this motivation to keep humans alive.

"Do not kill people" or "cause no harm" as motivations would not suffice. The AI would need to be motivated to feed humanity at scale, and its improved sub-agents a few hundred years from now would need to retain that motivation.

Humans will be replaced by AI 

I am amazed by some of the arguments made in the anti-doomer camp. An example is the discussion on the Bankless podcast. First Eliezer is interviewed, does his thing, and scares the shit out of the hosts (credit to the hosts: they really listened to Eliezer, and they were pretty shaken by it). Then they invited Robin Hanson, who disagrees with Eliezer. To me the Hanson version of the future is still dystopian, and I think most people would feel the same way if they took the time to study his views. It is not that I think he is wrong about how things might play out; I just do not agree that it is a good future.

In his view humans will be replaced by AI, and this will create a more interesting world than humanity did. Although this is not called doomerism, the scenario sounds pretty dystopian to me. The question "how will they feed us" gets answered: they don't.

There probably isn't a rigorous logical argument that "humanity should not die" is good. Call it an axiom, or a feeling, or call me old-fashioned, but I think humanity should not die. If we disagree on that, so be it. Hanson makes very good points, but I think humanity should live.

Simulation and uploads 

And there is of course the possibility of simulating humans instead of feeding them. It could be a very efficient solution for an aligned AI tasked with making humanity prosper. Obviously humans would have almost no control in that situation. A threat here is that maintaining a simulated humanity still takes a lot of energy, and the AIs would need to see a benefit in sacrificing those resources to simulate people. Very strong alignment is thus needed, not the "do not kill humans" type.

A world of simulated humans poses a high existential risk in itself: at any time the overlords could simply cut the power. Again, this is a big ask for alignment, and it goes a lot further than just not killing humans.

The X-risk remains

In these scenarios the existential risk remains, and because the solutions are fragile the risk is significant: higher than 10% per century, which spells doom in the long run. In many of the more optimistic scenarios I see a lot of fragility and rising X-risk. I like the honesty of the Hanson argument: he admits that in the future there will probably be no humans, and he is ok with that.
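A quick sketch of why a modest-sounding per-century risk compounds into near-certain doom. The 10% figure is the assumption from the paragraph above, not a measured quantity:

```python
# Survival probability after n centuries at a constant per-century
# extinction risk p is (1 - p) ** n. The 10% is this post's assumption.

p = 0.10
for centuries in (1, 5, 10, 30):
    survival = (1 - p) ** centuries
    print(f"After {centuries:>2} centuries: {survival:.0%} chance humanity is still here")
```

At 10% per century, humanity's survival odds fall to about 35% after ten centuries and about 4% after thirty.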

Furthermore, I think "do not kill humans" alignment is not enough, though the optimists often argue that it is. I have a hard time understanding how our current agricultural industry could coexist with an AI that does not care about humanity yet does not kill it. In these scenarios we might not die immediately, but the existential risk rises.

Conclusion

Feeding the world is a hard problem that takes a lot of resources, and it will probably get harder as the environment degrades. Humans who provide no economic value will have a hard time getting the resources they need if the world stays anything like it has been for the past 10,000 years. There is no precedent for providing, at scale, for people who provide no economic value. This is hard for me to write, and I am sorry if it offends people who are in a dependent situation. Providing for people who cannot provide for themselves is a good thing in my view, but doing it at scale would be very difficult and is unprecedented. We humans are often good people, and we therefore often provide for those who cannot provide for themselves. Building a system that will do the same seems very hard.

Defending a system that will provide for us all requires very strong evidence, and thus the burden of proof shifts, just as the claim that something will kill all of humanity is unprecedented and therefore requires strong evidence.

This argument is far from complete. My goal is to view the problem from a different perspective, one where the burden of proof is different. It is also an invitation to explore other "optimistic" ideas and ask how humanity survives in them: is there food, and is the X-risk decreasing? If you are like me, you will probably find many of these optimistic scenarios quite dystopian.

 

p.s. not a native speaker and did not use AI to improve the text, hope you still get the gist of it

3 comments

Comments sorted by top scores.

comment by AnthonyC · 2023-06-01T12:31:09.250Z · LW(p) · GW(p)

I would add that a world with advanced AI, with or without much more advanced robotics, is a world with much more automation than we have today. So we can build, manufacture, and mine more than we do today.

I mention this because controlled environment agriculture (indoor farming) has many times higher yield per unit area than conventional farming, and many times lower water use, fertilizer use, and waste generation. Yields are also more stable: not subject to weather, and less subject to pests. Construction and energy costs are the limiting factors, and those should be solvable with normal economic development plus better automation.

comment by meijer1973 · 2023-06-01T20:24:31.076Z · LW(p) · GW(p)

Thanks for the addition. Vertical and indoor farming should improve on the current fragility of the agricultural industry (and thus add robustness). Feeding 8 billion people will still cost a lot of resources.

Mining, however, is different: mining costs will keep increasing as ore grades decline and as ores are mined in places that are harder to reach. Technological progress can offset this effect only for a limited time (unless we go to the stars). Vast improvements in recycling could be a solution, but that requires a lot of energy.
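A minimal sketch of the grade effect: to recover the same mass of metal from lower-grade ore, you have to move and process proportionally more rock, so processing energy per tonne of metal scales roughly with 1/grade. The grades below are illustrative, not measured data:

```python
# First-order approximation: rock processed per tonne of metal = 1 / grade,
# so processing energy per tonne of metal also scales roughly as 1 / grade.
# Grades are illustrative copper-like numbers, not measured data.

energy_per_tonne_rock = 1.0                # arbitrary energy units

for grade in (0.02, 0.01, 0.005):          # 2%, 1%, 0.5% metal content
    rock_per_tonne_metal = 1.0 / grade
    energy = rock_per_tonne_metal * energy_per_tonne_rock
    print(f"Grade {grade:.1%}: ~{rock_per_tonne_metal:.0f} t rock and "
          f"~{energy:.0f} energy units per t of metal")
```

Halving the grade roughly doubles the energy bill per tonne of metal, which is why declining ore quality keeps pushing costs up.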

Solving the energy problem via fusion would help a lot in the more utopian scenarios.

comment by AnthonyC · 2023-06-01T22:04:50.990Z · LW(p) · GW(p)

Fusion would be a big help for sure, but not strictly necessary. Consider that total sunlight reaching the Earth's surface is ~120 PW, or about 15 MW for each of 8 billion people. That is several thousand times current primary energy use. Commercially cost-effective solar cells are currently ~20% efficient. If you could get installation and balance-of-system costs down enough, with universal adoption rooftops alone could theoretically get you within spitting distance of an all-solar grid (yes, in practice it won't happen this way; it will take massive amounts of complementary infrastructure, other energy sources, other technologies, etc. I'm just talking about land usage requirements).
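A quick check of the arithmetic in that comment; the ~120 PW and ~19 TW figures are rough ballpark estimates, not precise measurements:

```python
# Back-of-the-envelope check of the solar figures above.
# ~120 PW reaching the surface and ~19 TW of primary energy use are
# rough ballpark figures, not precise measurements.

sunlight_at_surface = 120e15     # watts
population = 8e9
primary_energy_use = 19e12       # watts, rough current global average

per_person = sunlight_at_surface / population
ratio = sunlight_at_surface / primary_energy_use

print(f"Sunlight per person: ~{per_person / 1e6:.0f} MW")      # ~15 MW
print(f"Sunlight vs primary energy use: ~{ratio:,.0f}x")       # ~6,300x
```

The per-person figure matches the comment's 15 MW; the total works out to several thousand times current primary energy use.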

And yes, mining gets more difficult in absolute terms, but I think you are underestimating (on a timescale of decades) the value of improving mining and metallurgical technology, while overestimating the difficulty of recycling. On timescales longer than decades, "improving technology" expands to include things like automated asteroid mining (and manufacturing?) powered by space-based solar.