Prosocial Capitalism

post by scafaria · 2021-10-02T17:06:27.609Z · LW · GW · 10 comments

This is a link post for https://prosocialcapitalism.com/

This is my first LessWrong post. I read Inadequate Equilibria by Eliezer Yudkowsky and was inspired to research and articulate additional solutions to get our civilization "un-stuck". I would love feedback from the community on the essay I've posted to www.prosocialcapitalism.com. I can re-post the full text here if that's preferable under the guidelines; just let me know. Thank you!

Prosocial Capitalism: How positive sum networks will out-compete communism and antisocial capitalism, reshape the Internet, preserve liberty, and stave off dystopia

Comments sorted by top scores.

comment by Giskard (tiago-macedo) · 2021-10-04T15:49:47.273Z · LW(p) · GW(p)

In this article, you posit that "positive sum networks will out-compete [...] antisocial capitalism [...]".

If I understand correctly, this is due to cooperative systems of agents (positive-sum networks) producing more utility than purely-competitive systems. You paint a good picture of this phenomenon happening, and I think you are describing something similar to what Scott Alexander is in In Favor of Niceness, Community, and Civilization.

However, the question then becomes "what exactly makes people choose to cooperate, and when?" You cite the Prisoner's Dilemma as a situation where the outcome Cooperate/Cooperate is better than the outcome Compete/Compete for both players. That is true, but the outcome Compete/Cooperate is better for player 1 than any other. The reverse is true for player 2. That is what makes the Coop/Coop state a fragile one for agents acting under "classical rationality".
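
To make the payoff structure concrete, here is a minimal sketch (the specific payoff values are the standard textbook ones, T=5 > R=3 > P=1 > S=0, chosen only for illustration):

```python
# Standard Prisoner's Dilemma payoffs (illustrative values; any
# numbers with T > R > P > S produce the same dominance structure).
PAYOFFS = {
    # (player 1 move, player 2 move): (player 1 payoff, player 2 payoff)
    ("Cooperate", "Cooperate"): (3, 3),  # R, R
    ("Cooperate", "Compete"):   (0, 5),  # S, T: player 2 exploits player 1
    ("Compete",   "Cooperate"): (5, 0),  # T, S: player 1 exploits player 2
    ("Compete",   "Compete"):   (1, 1),  # P, P
}

# Whatever player 2 does, player 1 scores strictly more by competing:
for p2_move in ("Cooperate", "Compete"):
    coop = PAYOFFS[("Cooperate", p2_move)][0]
    comp = PAYOFFS[("Compete", p2_move)][0]
    print(f"If player 2 plays {p2_move}: Cooperate pays {coop}, Compete pays {comp}")
```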

Cooperation tends to be fragile not because it is worse than Competition (it's better in the situations we posit), but because unilaterally defecting is better. So, suppose you have a group of people (thousands? billions?) who follow a norm of "always choose cooperation". This group would surely be more productive than an external group that constantly chooses to compete, but if you put even one person who chooses to compete inside the "always cooperate" group, that person will likely reap enormous benefits to the detriment of others -- they will be player 1 in a Compete/Cooperate dilemma.

If we posit that the cooperating group can learn, they will learn that there is a "traitor" among them, and will become a little more likely to choose Compete instead of Cooperate when they think they might be interacting with the "traitor". But this means that these people will themselves be choosing Compete, increasing the number of "traitors" in the group, and then the whole thing deteriorates.
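
Here is a toy simulation of that deterioration (my own simplified model, not anything from the essay: exploited agents become a little less trusting in all future interactions):

```python
import random

random.seed(0)
N = 100
coop_prob = [1.0] * N   # a population of unconditional cooperators...
coop_prob[0] = 0.0      # ...plus one "traitor" who always competes
PENALTY = 0.2           # trust lost each time an agent is exploited

for generation in range(31):
    agents = list(range(N))
    random.shuffle(agents)
    for i, j in zip(agents[::2], agents[1::2]):  # random pairings
        i_coop = random.random() < coop_prob[i]
        j_coop = random.random() < coop_prob[j]
        # An exploited cooperator becomes less trusting next time.
        if i_coop and not j_coop:
            coop_prob[i] = max(0.0, coop_prob[i] - PENALTY)
        if j_coop and not i_coop:
            coop_prob[j] = max(0.0, coop_prob[j] - PENALTY)
    if generation % 10 == 0:
        print(f"generation {generation}: mean trust = {sum(coop_prob) / N:.2f}")
```

Mean trust only drifts downward here: one defector seeds distrust, the newly distrustful agents occasionally exploit others, and the effect compounds. "Always cooperate" has no internal mechanism for containing a single defector.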

Do you have any ideas on how to prevent this phenomenon? Maybe the cooperating group is acting under a norm more complex than just "always cooperate", one that allows the Cooperate/Cooperate state to remain stable?

You cite "communication and trust" as "the two pillars of positive sum economic networks". Do you think that, given a sufficient quantity and quality of trust and communication, they become self-reinforcing? What I have described is a deterioration of trust in a group. How can it be prevented?

Replies from: scafaria
comment by scafaria · 2021-10-05T19:06:33.030Z · LW(p) · GW(p)

Hi Giskard, 

Yes to your "more utility" point. I am influenced by Robert Wright, who makes a compelling and direct case that communication and trust are what make positive-sum outcomes possible (in Nonzero and elsewhere). And he points out that societies or organizations that generate those positive-sum effects will outcompete those that devolve into a race to the bottom.

Re your comment "Maybe the cooperating group is acting under a norm more complex than just 'always cooperate', one that allows the Cooperate/Cooperate state to remain stable?": yes, that's exactly it! Civilization is a multipolar game, as Scott Alexander points out in Meditations on Moloch and also in the article you cite ('...and the general case is called "civilization"').

In Moloch, Alexander points out all sorts of multipolar traps. Yet on the whole, society has moved forward (at least since the 1600s) by developing enough complexity in the norms that govern our interactions. Fortunately, we don't live in a simple PD game played only once or played anonymously (both of which strongly disfavor cooperation). Our personal relationships, reputations, sense of shame, and fear of downstream consequences make real life different from the simplest PD game. They provide enough nuance and complexity that on the whole we benefit from "inheriting a cultural norm and not screwing it up" (the article you cite).
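
As a concrete illustration (my own sketch, using the standard payoffs and the classic tit-for-tat strategy rather than anything specific to the essay), repetition plus memory is enough to change the outcome:

```python
R, S, T, P = 3, 0, 5, 1  # standard PD payoffs (illustrative values)

def play(strategy_a, strategy_b, rounds=20):
    """Total payoffs for two strategies in an iterated Prisoner's Dilemma."""
    total_a = total_b = 0
    last_a = last_b = "C"  # each strategy sees "cooperate" as the opening move
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        if move_a == "C" and move_b == "C":      # mutual cooperation
            total_a, total_b = total_a + R, total_b + R
        elif move_a == "C":                      # a exploited by b
            total_a, total_b = total_a + S, total_b + T
        elif move_b == "C":                      # b exploited by a
            total_a, total_b = total_a + T, total_b + S
        else:                                    # mutual defection
            total_a, total_b = total_a + P, total_b + P
        last_a, last_b = move_a, move_b
    return total_a, total_b

tit_for_tat   = lambda opponent_last: opponent_last  # reciprocate last move
always_defect = lambda opponent_last: "D"

print(play(tit_for_tat, tit_for_tat))      # (60, 60): cooperation is stable
print(play(tit_for_tat, always_defect))    # (19, 24): the defector gains little
print(play(always_defect, always_defect))  # (20, 20): the race to the bottom
```

One-shot or anonymous play removes exactly the memory that makes reciprocation possible, which is why those settings disfavor cooperation.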

Here's my premise: Up until now in our digital lives we have lacked agency. Our online communications tend to either be centralized (governed by a Zuckerberg) or else anonymous (where reputation, relationships, sense of shame, and fear of downstream consequences don't apply). With the former we lack agency because the medium is not designed to support our individual interests or even human flourishing (as breaking news today about Facebook reminds us). With the latter, the medium often lacks the requisite complexity that forms our cultural norm inheritance in the offline world. 

Online life today is merely an inadequate equilibrium, to use Eliezer Yudkowsky's term. The purpose of my essay and the re-post here is to ask, "Would the following set of changes (which I attempt to articulate) allow for digital interactions that break free from PD and allow us to solve collective action problems?" How could we design digital interactions so that they represent a multiplayer game with a structure that stacks the deck in favor of positive-sum outcomes? My optimistic conclusion is that the online world (leaning on decentralized identifiers, zero-knowledge proofs, etc.) can offer game design structures that are more advantageous to positive-sum outcomes than anything we've yet seen offline (and certainly better than today's online designs). 

Ironically, tipping out of today's inadequate equilibrium is itself a collective action problem. And as I say, "Until individuals regain agency in their digital social interactions, coordinating for positive sum collective action is hard". Fortunately, I believe there is now (finally) a fulcrum for a tipping point that does not rely on collective action. Now that advertisers can no longer exploit personal identifiers in the same way, I believe they will be forced to explore models that make media firms more money anyway! (My point re the "Barbados" example). 

Scott Alexander, Robert Wright, many of you on this forum, and I have long thought about how to achieve more positive-sum outcomes (how to defeat Moloch). Usually we look with hope to morality and rationality, yet we know how powerful a force Moloch really is. That's why I'm excited to believe that (finally!) there can now be a tipping point via: (i) capitalist incentives, plus (ii) the shock to today's equilibrium of Apple/Google's privacy announcements, plus (iii) (not required, but a bonus!) regulatory and other pressures owing to revelations about Facebook.

Thanks so much for engaging on the essay. I'm optimistic that there really is a path toward a better equilibrium, and it helps to bounce the ideas off smart people. Glad to have this forum! 

comment by scafaria · 2021-10-03T14:41:12.129Z · LW(p) · GW(p)

Looks like I was supposed to post the full text here instead of a link; is that right? (I'm new to LessWrong.) Thanks.

Replies from: maxwell-peterson
comment by Maxwell Peterson (maxwell-peterson) · 2021-10-03T17:00:03.146Z · LW(p) · GW(p)

It’s not clear to me - I’m pretty sure I’ve seen others do short summary posts with a link, like yours, and I don’t think those posts were downvoted.

I might guess that there’s a bit of the feeling of an ad to this post? Like it’s primarily aimed at promoting a website, instead of sharing a post. I don’t think that’s what you’re actually doing - there’s a long post at the link! - but maybe others got that vibe.

Replies from: Viliam, scafaria
comment by Viliam · 2021-10-03T20:38:11.337Z · LW(p) · GW(p)

Well, the long text is an ad for a book. That's what the entire website is about. On the other hand, it does not differ significantly from... well, an article on a similar topic which would not be an ad for a book.

Therefore, I would not penalize it for being an ad. Though I have other objections, such as the text being too long and containing a lot of belief but not enough evidence.

In other words, I think it is okay to have this article on this website as it is, but I didn't upvote it.

Replies from: scafaria
comment by scafaria · 2021-10-04T13:36:08.177Z · LW(p) · GW(p)

Thanks, Viliam

comment by scafaria · 2021-10-03T20:32:57.019Z · LW(p) · GW(p)

Thanks, Maxwell. That could be. I'm working toward a book, so I built a website around that very long essay. My goal in posting here to LessWrong was to see if there really is an opportunity for "World Optimization" / a better equilibrium for the human condition growing out of these concepts. If the mods of the site think it worthwhile to repost in full (and to sanitize anything promotional), I can. If not, that's fine too, and I'm grateful for the opportunity. I will continue refining ideas toward improving our economy and society. Thanks!

Replies from: maxwell-peterson
comment by Maxwell Peterson (maxwell-peterson) · 2021-10-03T23:11:53.715Z · LW(p) · GW(p)

Cheers!

Replies from: maxwell-peterson
comment by Maxwell Peterson (maxwell-peterson) · 2021-10-03T23:13:29.619Z · LW(p) · GW(p)

BTW, in case you don't know, since you mentioned that you're new to LW - there is a button in the lower-right corner that opens a chat where you can write messages to moderators directly. You could ask what they think through that.

Replies from: scafaria
comment by scafaria · 2021-10-04T13:36:31.784Z · LW(p) · GW(p)

Great, thanks again!