Nine Points of Collective Insanity
post by Remmelt (remmelt-ellen), flandry19 · 2022-12-27T03:14:11.426Z · LW · GW · 3 comments
This is a link post for https://mflb.com/ai_alignment_1/ai_narrative_psr.html
A scenario detailing how humans automate away other humans, as the beginning of the economic decoupling of the AI-hardware economy from the human-wetware economy.
Excerpts compiled below:
- First it begins with the customer service people – they are automated away (since that work is expensive and no one wants to do it; everyone hates dealing with abusive customers anyway).
- Then the engineers building the capability to build capability automate themselves away (but *not* before also automating maximally efficient, optimized marketing and sales processes, since those sales/marketing people also tend to be *very* expensive (and opinionated, temperamental, narcissist artist types too)).
- Then the VCs – seeing that the executive team is no longer needed, since everyone else has already been factored out of the overall company structure – notice that they also have the capability to make fully autonomous corporations (maybe using some of the newer 'Decentralized Autonomous Orgs', with a heavy crypto emphasis).
- The overall situation then becomes effectively "passive income on steroids" for the VCs – so much so that even the VCs themselves no longer have to evaluate startup business plan proposals at all; they already have the perfect evaluative tools, and the ability, with their 'owned' AI, to identify the 'best possible versions' (the most profitable and extractive versions) of any possible business plan.
- The net effect is that fewer and fewer actual people (users) have any significant potential for any sort of combined economic impact (aside from (maybe) a very few world-scale trillionaires), and the total share of the overall world economy that has anything to do with the real needs of people keeps shrinking – since the total value of their 'investment ability' is monotonically decreasing, and at an ever-increasing rate.
→ Read the linked post on Forrest Landry's blog for more.
Note: Text is laid out in his precise research note-taking format.
3 comments
comment by the gears to ascension (lahwran) · 2022-12-28T05:22:03.388Z · LW(p) · GW(p)
I'm surprised at the heavy downvotes on this one. It seems like a key type of concern. I'm curious what downvoters' reasoning is.
comment by Dave Lindbergh (dave-lindbergh) · 2022-12-27T04:37:47.506Z · LW(p) · GW(p)
This seems to assume that ordinary people don't own any financial assets - in particular, haven't invested in the robots. Many ordinary people in Western countries do and will have such investments (if only for retirement purposes), and will therefore receive a fraction of the net output from the robots.
Given the potentially immense productivity of zero-human-labor production, even a very small investment in robots might yield dividends supporting a lavish lifestyle. And if those investments come with shareholder voting rights, they'd also have influence over decisions (even if we assume people's economic influence is zero).
Of course, many people today don't have such investments. But under our existing arrangements, whoever does own the robots will receive the profits and be taxed. Those taxes can either fund consumption directly (a citizen's dividend, dole, or suchlike) or (better I think) be used to buy capital investments in the robots - such purchases could be distributed to everyone.
[Some people would inevitably spend or lose any capital given them, rather than live off the dividends as intended. But I can imagine fixes for that.]
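To make the intuition above concrete, here is a minimal back-of-envelope sketch; every number in it is a hypothetical assumption (an automated economy producing on the order of ten times today's gross world product, half of it paid out, split equally across eight billion people), not a figure from the comment:

```python
# Back-of-envelope sketch of the "small stake, large dividend" claim.
# All numbers below are illustrative assumptions, not estimates from the post.

robot_economy_output = 1e15         # assumed annual net output of automated production, in dollars
payout_ratio = 0.5                  # assumed fraction of output distributed to shareholders
population = 8e9                    # number of potential stakeholders
ownership_share = 1.0 / population  # an "ordinary" person's equal slice of robot capital

annual_dividend = robot_economy_output * payout_ratio * ownership_share
print(f"Annual dividend per person: ${annual_dividend:,.0f}")
# With these assumptions: $62,500 per year from a one-in-eight-billion stake.
```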
Replies from: remmelt-ellen
↑ comment by Remmelt (remmelt-ellen) · 2022-12-27T06:58:29.586Z · LW(p) · GW(p)
Many ordinary people in Western countries do and will have [investments in AI/robots] (if only for retirement purposes), and will therefore receive a fraction of the net output from the robots.
... Of course, many people today don't have such investments. But under our existing arrangements, whoever does own the robots will receive the profits and be taxed. Those taxes can either fund consumption directly (a citizen's dividend, dole, or suchlike) or (better I think) be used to buy capital investments in the robots - such purchases could be distributed to everyone.
...Given the potentially immense productivity of zero-human-labor production, even a very small investment in robots might yield dividends supporting a lavish lifestyle.
I appreciate the nuance.
My takes:
- Yes, I would also expect many non-tech-people in the Global North to invest in AI-based corporations, if only by investing savings in an (equal or market-cap weighted) index fund.
- However, this still results in much stronger inequality of incomes and savings than in the current economy, because in-the-know tech investors will keep reinvesting profits into high-RoI (and likely highly societally extractive) investments for scaling up AI and connected machine infrastructure (the toy compounding sketch after this list illustrates the divergence).
- You might argue that if most people (in the Global North) are still able to live lavish lifestyles relative to current lifestyles, that would not be too bad. However, Forrest's arguments go further than that.
- Technology would be invested in and deployed most by companies (particularly those led by power-hungry leaders with Dark Triad traits) that are (selected by market profits for being) most able to extract and arbitrage fungible value from the complex local cultural arrangements on which market exchanges depend. So the GDP growth you would measure from the outside would not concretely translate into "robots give us lavish lifestyles". It would actually look like depleting all that is out there in order to effectively and efficiently market and sell "products and services" that are increasingly mismatched with what we local humans deeply care about and value.
- I've got a post lined up exploring this.
- Further, the scaling up of automated self-learning machinery will divert scarce atomic and energy resources toward producing and maintaining the artificial robots, in place of reproducing and protecting the organic humans. This would rapidly accelerate what we humans started by exploiting natural resources for our own tribal and economic uses (cutting down forests and so on), destroying the natural habitats of other organic species in the process (connected ecosystems that humans too depend on for their existence). Except, this time, the human markets, the human cultures, and the humans themselves are the ones to go.
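As a toy illustration of the divergence mentioned above, here is a minimal compounding sketch; the return rates and the horizon are hypothetical assumptions chosen only to show the shape of the effect, not forecasts from the comment:

```python
# Toy model of divergence under reinvestment: identical starting wealth,
# different rates of return. All rates and horizons are illustrative assumptions.

ordinary_return = 0.05   # assumed annual return on a broad index fund
insider_return = 0.30    # assumed annual RoI on early AI/automation investments
years = 30

ordinary_wealth = insider_wealth = 1.0   # same starting capital, normalized to 1

for _ in range(years):
    ordinary_wealth *= 1 + ordinary_return
    insider_wealth *= 1 + insider_return

print(f"Ordinary investor after {years} years: {ordinary_wealth:.1f}x")
print(f"In-the-know investor after {years} years: {insider_wealth:.1f}x")
print(f"Wealth ratio: {insider_wealth / ordinary_wealth:.0f}:1")
# With these assumptions: roughly 4.3x vs ~2,620x, a ratio on the order of 600:1.
```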