What is to be done? (About the profit motive)
post by Connor Barber (connor-barber) · 2023-09-08T19:27:40.228Z · LW · GW · 1 comment
This is a question post.
Contents
Answers: Dave Lindbergh (23) · kithpendragon (8) · rhollerith_dot_com (6) · PeterMcCluskey (4) · the gears to ascension (3) · shminux (2) · faul_sname (2) · alex.herwix (1)
1 comment
I've recently started reading posts and comments here on LessWrong, and I've found it a great place for accessible, productive, and often nuanced discussions of AI risks and their mitigation. One thing that's been on my mind is that seemingly everyone takes for granted that the world as it exists will eventually produce AI, and likely sooner than we have the necessary knowledge and tools to make sure it is friendly. Many seem convinced that this outcome is inevitable, and that we can do little to nothing to alter the course. The contributors most often cited for this likelihood are current incentive structures: profit, power, and the nature of present-day economic competition.
I'm therefore curious why I see so little discussion here of the possibility of changing these incentive structures. Mitigating the profit motive in favor of incentives more aligned with human well-being seems to me an obvious first step. In other words, to maximize the chance for aligned AI, we must first make an aligned society. Do people not discuss this idea here because it is viewed as impossible? Undesirable? Ineffective? I'd love to hear what you think.
Answers
The merits of replacing the profit motive with other incentives have been debated to death (quite literally) for the last 150 years in other fora - including a nuclear-armed Cold War. I don't think revisiting that debate here is likely to be productive.
There appears to be a wide (but not universal) consensus that to the extent the profit motive is not well aligned with human well-being, it's because of externalities. Practical ideas for internalizing externalities, using AI or otherwise, I think are welcome.
↑ comment by alex.herwix · 2023-09-10T14:32:32.001Z · LW(p) · GW(p)
That seems to downplay the fact that we will never be able to internalize all externalities, simply because we cannot reliably anticipate all of them. So you are always playing catch-up to some degree.
Also simply declaring an issue “generally” resolved when the current state of the world demonstrates it’s actually not resolved seems premature in my book. Breaking out of established paradigms is generally the best way to make rapid progress on vexing issues. Why would you want to close the door to this?
↑ comment by dr_s · 2023-09-25T05:51:22.781Z · LW(p) · GW(p)
I don't think he's declaring it resolved, more arguing that it's been fought over to the death - quite literally - and yet no viable alternative seems to have emerged, so odds are that doing it here would turn out similarly unproductive, and possibly destructive to the community.
LessWrong tends to flinch pretty hard away from any topic that smells even slightly of politics. Restructuring society at large falls solidly under that header.
I sometimes imagine that making it so that anyone who works for or invests in an AI lab is unwelcome at the best Bay Area parties would be a worthwhile state of affairs to work towards, which is sort of along the same lines as what you write.
Eliminating the profit motive would likely mean that militaries develop dangerous AI a few years later.
I'm guessing that most people's main reason is that it looks easier to ban AI research than to sufficiently reduce the profit motive.
As far as I know, there has never been a society that both scaled and durably resisted command-power being sucked into a concentrated authority bubble, whether that command-power/authority was tokenized via rank insignia or via numerical wealth ratings. Building a large-scale society of hundreds of millions to billions that can coordinate, synchronize, keep track of each other's needs and wants, and fulfill the fulfillable needs and most wants, while still giving both humans and nonhumans the significant slack that the best designs for medium-scale societies of single to tens of millions (like indigenous governance) do and did, is an open problem. I have my preferences for what areas of thought are promising, of course.
Structuring the numericalization of which sources of preference-statements-by-wanting-beings are interpreted as commands by the people, motors, and machines in the world appears to me to inline the alignment problem and generalize it away from AI. This is the perspective where "we already have unaligned AI" makes the most sense to me - what is coming is then merely more powerful unaligned AI - and promising movement on aligning AI with moral cosmopolitanism will likely be portable back into this more general version. Right now, the competitive dynamics of markets - where purchasers typically sort offerings by some combination of metrics that centers price - mean that sellers who can produce things most cheaply in a given area win. Because of monopolization and the externalities it makes tractable, the organizations most able to sell services involving many AI research workers and the largest compute clusters are somewhat concentrated. The more cheaply implementable AI systems are in more hands, but most of those hands belong to the actors most able to find vulnerabilities in purchasers' decisionmaking and use them to extract numericalized power coupons (money).
It seems to me that ways to solve this would involve things that are already well known: if the very-well-paid workers at major AI research labs could find it in themselves to unionize, they might be more able to say no when their organizations' command structure has misplaced incentives stemming from the local incentives of those organizations' stock contract owners. But I don't see a quick shortcut around it, and it doesn't seem as useful as technical research on how to align things like the profit motive with cosmopolitan values, e.g. via things like Dominant Assurance Contracts.
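To make the Dominant Assurance Contract mechanism concrete, here is a minimal sketch of its payoff logic in Python; the class name, numbers, and pledger names are hypothetical, and real contracts have more moving parts. The idea: a proposer funds a public good only if total pledges reach a threshold, and pays each pledger a refund bonus if the threshold is missed.

```python
from dataclasses import dataclass

@dataclass
class DominantAssuranceContract:
    """Toy model of a dominant assurance contract.

    Pledgers commit money toward a public good. If total pledges reach
    `threshold`, the good is funded and pledges are collected. If not,
    every pledger gets their pledge back *plus* a refund bonus paid by
    the proposer.
    """
    threshold: float     # funding goal
    refund_bonus: float  # paid to each pledger if the goal is missed

    def settle(self, pledges: dict[str, float]) -> dict[str, float]:
        """Return each pledger's net cash flow (negative = they pay)."""
        total = sum(pledges.values())
        if total >= self.threshold:
            # Goal met: pledges are collected to fund the good.
            return {name: -amount for name, amount in pledges.items()}
        # Goal missed: pledges are refunded, plus the bonus on top.
        return {name: self.refund_bonus for name in pledges}


# Hypothetical usage: a 1000-unit goal, 20-unit bonus per pledger on failure.
contract = DominantAssuranceContract(threshold=1000.0, refund_bonus=20.0)
print(contract.settle({"alice": 400.0, "bob": 300.0}))  # goal missed -> bonuses paid
print(contract.settle({"alice": 600.0, "bob": 500.0}))  # goal met -> pledges collected
```

The refund bonus is what makes pledging a (weakly) dominant strategy for anyone who values the good: if the project fails to fund, pledgers come out ahead rather than merely whole.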
You might be thinking about it in the wrong way. Societal structures follow capabilities, not wants. If you try to push for "each person works and is paid according to their abilities and needs" too early, you end up with communist dystopias. If we are lucky, the AGI age will improve our capabilities enough that "to everyone according to their needs" becomes feasible, aligning incentives with well-being rather than with profit. So, to answer your questions:
- It is currently impossible to "align the incentives" without causing widespread suffering.
- It is undesirable if you do not want to cause suffering.
- It is ineffective to try to align the incentives away from profit if your goal is making them aligned with "human well-being".
That said, there are incremental steps that are possible to take without making things worse, and they are discussed quite often by Scott Alexander and Zvi, as well as by others in the rationalist diaspora. So read them.
↑ comment by M. Y. Zuo · 2023-09-10T01:37:43.372Z · LW(p) · GW(p)
This is a bit of a tangent, but even in an ideal future I can't see how this wouldn't just be shifting the problem one step away. After all, who would get to define what the 'needs' are?
If it's defined by majority consensus, why wouldn't the crowd pleasing option of shifting the baseline to more expansive 'needs' be predominant?
↑ comment by Shmi (shminux) · 2023-09-10T08:26:09.763Z · LW(p) · GW(p)
I'd assume that people themselves would define what they need, within the limits of what is possible given the technology of the time.
↑ comment by M. Y. Zuo · 2023-09-10T14:43:48.258Z · LW(p) · GW(p)
So it would be exactly the same as how 'needs' are recognized in present day society?
↑ comment by Shmi (shminux) · 2023-09-10T19:06:24.862Z · LW(p) · GW(p)
I guess my point is the standard one: in many ways, even poor people live a lot better now than royalty did 300 years ago.
↑ comment by dr_s · 2023-09-25T05:53:07.366Z · LW(p) · GW(p)
Well, except now just saying "I need this" wouldn't get the need satisfied if you don't have the money for it.
↑ comment by M. Y. Zuo · 2023-09-25T18:29:32.648Z · LW(p) · GW(p)
How's that different from the future?
There clearly will still be resource constraints of some kind and they will very likely need some unit of currency to carry out their activities.
↑ comment by dr_s · 2023-09-25T20:46:39.448Z · LW(p) · GW(p)
I mean, when discussing an "ideal future" people here seem to assume some kind of post-scarcity utopia in which there's enough to satisfy anyone's wildest needs ten times over.
I agree; personally, I'm not a big believer in this being possible at all. You can have enough abundance to provide everyone with food, clothes, and a house, or even more, but at some point you'll probably have to stop. Currency might be replaced by some analogous system, but yes, in the end you need some way to partition limited resources, and unlimited resources just aren't physical.
↑ comment by Connor Barber (connor-barber) · 2023-09-10T21:57:50.491Z · LW(p) · GW(p)
I don't think I agree that societal structures follow capabilities and not wants. I'll agree that certain advancements in capability (long-term food storage, agriculture, gunpowder, steam engines, wireless communication, etc.) can have dramatic effects on society and how it can arrange itself, but all these changes are driven by people utilizing the new capabilities to further themselves and/or their community.
The idea of scarcity in the present is a great example of this. The world currently produces so much food that about a third of it is thrown away before even being sold - more than enough to feed all those who go hungry. There are orders of magnitude more empty houses in North America than there are homeless people, not even counting apartments or hotel rooms. We don't live in a time of scarcity; we live in a time of overproduction. People don't go hungry or homeless because we lack the production capacity to feed or house them while maintaining everyone else's quality of life, but because it would be less profitable to do so. "To each according to their needs" is feasible right now without AI or even expanded production capacity; it's simply not incentivized.
I agree with your point that aligning incentives with well-being rather than profit is possible once we produce enough; we just disagree about whether we actually produce enough currently.
I'd love it if you could point me to any resources indicating that the scarcity of necessities is currently natural rather than manufactured, or if you could expand on your first point about capability being the primary force driving societal change. Thanks for your response.
↑ comment by Shmi (shminux) · 2023-09-10T23:16:08.342Z · LW(p) · GW(p)
I agree with your analysis of the current situation. However, the issues that arise when trying to correct it without severe unintended consequences are technological, not related to profit. You can't easily transplant a house. You cannot easily feed only those who go hungry without affecting the economy (food banks help to some degree). There are people in need of companionship who cannot find it, even though a matching companion exists somewhere out there. There are potential technological solutions to all of these (teleportation! replication! telepathy!), but they are way outside our abilities. You can also probably find a few examples where what looks like a profit-based incentive is in fact a technological deficiency.
↑ comment by Alan E Dunne · 2023-09-24T17:11:04.290Z · LW(p) · GW(p)
1/ evidence for these statements?
2/ in what sense is it profitable to throw away food or maintain empty dwellings that is distinct from "maintaining everyone else's quality of life"?
3/ if the evil is that some people's needs are not valued enough could that not be remedied by giving them money and making it profitable to meet their needs?
In other words, to maximize the chance for aligned AI, we must first make an aligned society.
"An aligned society" sounds like a worthy goal, but I'm not sure who "we" is in terms of specific people who can take specific actions towards that end.
I think proposals like this would benefit from specifying what the minimum viable "we" for the proposal to work is.
I ask myself the same question. I recently posted an idea about AI regulation to address such issues and start a conversation, but there was almost no reaction and mostly just pushback. See: https://www.lesswrong.com/posts/8xN5KYB9xAgSSi494/against-the-open-source-closed-source-dichotomy-regulated [LW · GW]
My take is that many people here are very worried about AI doom and think that for-profit work is necessary to get the best minds working on the issue. Governments also seem to be generally perceived as incompetent, so the fear is that more regulation will screw things up rather than make them better.
Needless to say, I think this is a false dichotomy, and we should consider how we (as a society involving diverse actors and positions, in a transparent process) can develop regulation that actually creates a playing field where the best minds can responsibly work on societal and AI alignment. It's difficult, of course, but it's the better option compared to letting things develop as is. The last couple of years have demonstrated clearly enough that that will not work out. Let's not just bury our heads in the sand and hope for the best.
1 comment
comment by Dagon · 2023-09-08T22:51:09.320Z · LW(p) · GW(p)
What, specifically, do you mean by "change current incentive structures"? Who are you altering to want different things, or to have different behaviors that they believe will get them those things?
I haven't seen any concrete thoughts on the topic, and for myself I don't think it's possible without a whole lot of risky interference in individual humans' beliefs and behaviors.