Collective Apathy and the Internet

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-14T00:02:19.161Z · LW · GW · Legacy · 34 comments

Yesterday I covered the bystander effect, aka bystander apathy: given a fixed problem situation, a group of bystanders is actually less likely to act than a single bystander.  The standard explanation for this result is in terms of pluralistic ignorance (if it's not clear whether the situation is an emergency, each person tries to look calm while darting their eyes at the other bystanders, and sees other people looking calm) and diffusion of responsibility (everyone hopes that someone else will be first to act; being part of a crowd diminishes the individual pressure to the point where no one acts).

Which may be a symptom of our hunter-gatherer coordination mechanisms being defeated by modern conditions.  You didn't usually form task-forces with strangers back in the ancestral environment; it was mostly people you knew.  And in fact, when all the subjects know each other, the bystander effect diminishes.

So I know this is an amazing and revolutionary observation, and I hope that I don't kill any readers outright from shock by saying this: but people seem to have a hard time reacting constructively to problems encountered over the Internet.

Perhaps because our innate coordination instincts are not tuned for:

Etcetera.  I don't have a brilliant solution to this problem.  But it's the sort of thing that I would wish for potential dot-com cofounders to ponder explicitly, rather than wondering how to throw sheep on Facebook.  (Yes, I'm looking at you, Hacker News.)  There are online activism web apps, but they tend to be along the lines of sign this petition! yay, you signed something! rather than How can we counteract the bystander effect, restore motivation, and work with native group-coordination instincts, over the Internet?

Some of the things that come to mind:

But mostly I just hand you an open, unsolved problem: make it possible / easier for groups of strangers to coalesce into an effective task force over the Internet, in defiance of the usual failure modes and the default reasons why this is a non-ancestral problem.  Think of that old statistic about Wikipedia representing 1/2,000 of the time spent in the US alone on watching television.  There's quite a lot of fuel out there, if there were only such a thing as an effective engine...

34 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2009-04-14T21:33:36.919Z · LW(p) · GW(p)

Blink. You read Reddit, right? Have you never noticed that every time there's an outrageous story, everyone on Reddit bands together and does something about it? Dusty the cat? The ReMax debacle? That woman who got her cruise cancelled and the Redditors sent enough to get her a new one? Also, http://www.cracked.com/article_17170_8-awesome-cases-internet-vigilantism.html . This is pretty impressive. If I'd, say, put a big poster up in a school about Dusty the Cat or ReMax, I doubt the students would have been able to mount half as coherent or overwhelming a response as the Internet did.

And Anonymous versus Scientology was pretty impressive too.

All of these have some things in common. They're responses to a single outrageous incident, they're things that the mainstream media doesn't cover, and they don't take a huge time commitment to solve. So there is a big difference between them and, say, fighting world hunger.

But what I gather from these examples is that anonymity and the bystander effect do not suddenly change the incentive structure for people online. Possibly the best known Internet action-taking campaign ever was the anti-Scientology one perpetrated by...Anonymous.

I would suggest we shift our inquiries in the direction of why the Internet is so good at Dusty-the-Cat-style operations and so bad at end-world-hunger-style operations. I think it probably has to do with the way people use the Internet itself: short attention spans and novelty-seeking.

On the other hand, the Internet can pull through for people long-term: witness Howard Dean, Barack Obama, Ron Paul, and "netroots". So maybe it has more to do with the fragmented nature of the Internet. Reddit is a natural place for Ron Paul fans to get together and organize Ron Paul related things, but there are lots of fragmented communities and none of them is specifically focused on world hunger. Nor would a sudden interest in solving world hunger on one community's part spread to another.

I don't know. Don't have a specific answer. Just think we need to shift direction away from "Why is the Internet so bad at this?" because it isn't.

comment by RobinHanson · 2009-04-14T12:56:19.427Z · LW(p) · GW(p)

The key problem is not doing collective action, but agreeing on what collective actions we think are worth doing. Governments excel at overcoming coordination problems to choose and implement collective actions. Their problem is that they often choose badly.

Replies from: rhollerith
comment by rhollerith · 2009-04-14T20:53:16.360Z · LW(p) · GW(p)

There are many benefits to surrounding yourself with extremely bright rationalists and scientific generalists. But I wonder if Eliezer has been too successful in sparing himself from the tedium and the trouble of interacting with and observing the common run of scientifically-illiterate irrational not-particularly-bright humanity. If he had been forced to spend a few years in an ordinary American high school or in an ordinary workplace -- or even if he had had lengthy dealings with a few of the many community activists of the San Francisco Bay Area where he lives -- or even if he just had 20 more years of life experience -- I wonder if he would still think it is a good idea to "make it possible / easier for groups of strangers to coalesce into an effective task force over the Internet" using only the skills for working in groups that come from the ancestral environment.

The way it is now, to devise an effective plan to change society, a person needs more rationality skill and more true information about their society than most people have. But I humbly submit that that is not a bug, but rather a feature! So is the fact that the instincts and emotions and biases that come from the ancestral environment are not enough to do it. (I sometimes go even further and say that it is important to be able to use rationality and deliberation to veto or override the instincts and emotions and biases that come from the ancestral environment.)

And I do not think it is particularly useful to frame what I just said as elitism. It is just an acknowledgement of the following reality: for almost any plan you can come up with for empowering the masses, I can come up with a plan that preferentially empowers the people more likely to use the new power for good -- for some definition of "good" that you and I can both agree on. Science and technology and other means of empowering people have become too potent for scientists and technologists and others not to think through what people are likely to do with the new power.

EDIT: for the sake of perspective and balance, I note that the mere fact that a person has read this post, and consequently probably has a strong interest in the subject matter of this web site, might be enough evidence of rationality to ameliorate my concerns about empowering them with a new technology for collaboration, provided that the collaboration has a goal or mission statement less ambiguous than the current mission statement of a certain institute that must not be named. But if Eliezer's purpose is to empower only the people interested enough in rationality and knowledge to keep coming back to this web site, he should say so instead of speaking of empowering people in general.

There are certain resonances between Robin's comment and this one, which is why I put it here.

comment by MBlume · 2009-04-14T00:46:34.881Z · LW(p) · GW(p)

Let's first assess the current state of the art. The Facebook Causes application does not meet all your criteria, but it hits a few pretty well. It puts money front and center, and tracks the amount you've donated, as well as the amount you've "raised" by referring friends who then donate. It also provides a number of means by which to tell friends about causes -- you decide what seems least annoying.

I notice someone's already registered The Institute Which Must Not Be Named

Replies from: listic
comment by listic · 2009-04-14T16:45:37.283Z · LW(p) · GW(p)

How can one be sure that the money actually goes to said cause?

Replies from: MBlume
comment by MBlume · 2009-04-14T18:18:47.992Z · LW(p) · GW(p)

I donated to TIWMNBN last night, and received a receipt from the networkforgood.org domain. Network For Good is an eight-year-old company which processes donations for a number of non-profits. Their Wikipedia page doesn't seem to throw any red flags so far as I can see.

comment by dfranke · 2009-04-14T07:31:39.186Z · LW(p) · GW(p)

It seems to me that the open source community has this problem effectively conquered. ESR's Homesteading the Noosphere is a pretty good analysis of how.

comment by Roko · 2009-04-14T16:05:48.227Z · LW(p) · GW(p)

But mostly I just hand you an open, unsolved problem: make it possible / easier for groups of strangers to coalesce into an effective task force over the Internet,

Solution: wait 5 years for internet connection speeds to increase by an order of magnitude and for everyone to have ultralight netbooks with good webcams on them, and let groups convene in a much-improved crossover of Second Life and Facebook, in exactly the same way that groups of hunter-gatherers convened on the savannah.

It is an elegant solution, because rather than attempting to work around our barely-evolved-monkey brains, it merely changes the internet to be more like the EEA.

I'm trying to use a primitive version of this by disallowing comments on my blog and forcing people to comment on it on facebook, where they have a fixed identity and where their social circle overhears the conversation.

Replies from: Nick_Tarleton, ethcap0
comment by Nick_Tarleton · 2009-04-14T20:44:10.652Z · LW(p) · GW(p)

Good idea, but it'll be a lot more than 5 years until telecommunications can come close to the richness of face-to-face contact. (Just one example.) Probably more important still is the difference between having to set up a conversation vs. having people constantly at hand, feeling like they're at hand, and being constantly available yourself.

Replies from: Roko
comment by Roko · 2009-04-15T12:12:45.793Z · LW(p) · GW(p)

Probably more important still is the difference between having to set up a conversation vs. having people constantly at hand, feeling like they're at hand, and being constantly available yourself.

When facebook chat meets skype meets WoW, this will cease to be a problem. 5, maybe 10 years. Think WiMax.

Lanier's article is interesting. But the things people already do on WoW contradict most of the thrust of the argument that you are making. WoW avatars are primitive imitations of the human form... with NO facial features at all! Yet, people have got married on the basis of WoW interactions.

The difference, I suspect, is that in WoW people interact with each other in a virtual world, whereas in videoconferencing you just have a disembodied, pixellated head to look at.

comment by ethcap0 · 2009-04-14T19:25:05.961Z · LW(p) · GW(p)

I like your facebook idea. In my opinion anonymity is one of the big players in this mess: it creates a sense of normality that can easily be a deception. Even if people could hear the "plea for help", all it takes to reduce its effectiveness is a majority of posts making fun of it, yet that majority of posts could have been written by the same person using many nicknames, taking advantage of his anonymity. Since, in this scenario, most people seem to agree, the casual surfer would feel, even if he doesn't like the comments, that at some point making fun of someone crying for help is normal and socially accepted.

comment by CAE_Jones · 2013-04-23T11:42:07.529Z · LW(p) · GW(p)

I couldn't help but notice that Eliezer's list of ideas up there described Kickstarter to a T. I have no idea when Kickstarter was launched. I'd be using Kickstarter myself for just about everything if I wasn't having difficulties getting my Amazon account working correctly.

comment by cousin_it · 2009-04-14T10:33:49.062Z · LW(p) · GW(p)

If the goal of promoting rationality conflicts with the goal of creating a taskforce, why expend energy reconciling those goals? Purchase utilons separately! Become a charismatic leader. Make inspirational speeches on YouTube. Give orders to a close circle of lieutenants. Go all the way.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-04-14T12:51:43.739Z · LW(p) · GW(p)

Because you don't want an irrational taskforce.

Replies from: cousin_it
comment by cousin_it · 2009-04-14T14:43:13.022Z · LW(p) · GW(p)

It's irrational to want a rational taskforce, rather than an efficient one.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-14T15:09:44.322Z · LW(p) · GW(p)

Perhaps the point is that an irrational task force won't be efficient long-term, in subtle ways that cannot be controlled by ordering them around.

Replies from: cousin_it
comment by cousin_it · 2009-04-14T15:37:42.578Z · LW(p) · GW(p)

Justify this assertion. It sounds like a rationalization to me.

Also, I see no need for a long-term taskforce, seeing as the game will soon change radically for Reasons That May Not Be Named.

Replies from: Nick_Tarleton, Vladimir_Nesov
comment by Nick_Tarleton · 2009-04-14T20:40:29.164Z · LW(p) · GW(p)

On both points: humility. The effect of ordering irrational people around is not predictable enough for it to be a better option than having a taskforce that can guide itself rationally. (And, as Vladimir says, if you end up needing the taskforce to do something requiring rationality, you're out of luck.)

comment by Vladimir_Nesov · 2009-04-14T16:09:59.745Z · LW(p) · GW(p)

Justify this assertion. It sounds like a rationalization to me.

Ordering a thousand fanatic janitors to program an optimizing compiler will bear no fruit.

Also, I see no need for a long-term taskforce, seeing as the game will soon change radically for Reasons That May Not Be Named.

There is enough uncertainty in this business to worry about planning humanity's development even 150 years ahead.

Replies from: Strange7, cousin_it
comment by Strange7 · 2011-03-23T21:24:17.623Z · LW(p) · GW(p)

Ordering a thousand fanatic janitors to program an optimizing compiler will bear no fruit.

Did you actually think about that for five minutes?

Order your thousand fanatic janitors to study computer programming. Now you've got, say, 990 fanatic janitors begging forgiveness from the Great Leader for their failure, and ten minimally-competent programmers. Programmers continue training, while janitors atone by seeing to the programmers' every material need.

Consider how much time some potential world-changing genius wastes with preparing their own food, shopping for clothes, waiting in line for things, and so on. Given fanatical dedication to a cause, and a staff of less-skilled but equally-dedicated assistants, one of the chosen few could simply say "I want a ham sandwich" and get back to work, knowing that a ham sandwich prepared exactly to their previously-expressed specifications will be presented to them within minutes, without another precious thought allocated to the details of logistics.

comment by cousin_it · 2009-04-14T16:41:30.793Z · LW(p) · GW(p)

Ordering a thousand fanatic janitors to program an optimizing compiler will bear no fruit.

Stop equating intelligence with LW-rationality.

Replies from: Yasser_Elassal
comment by Yasser_Elassal · 2009-04-14T19:38:45.432Z · LW(p) · GW(p)

Stop equating skills with intelligence.

Replies from: cousin_it
comment by cousin_it · 2009-04-14T20:18:54.390Z · LW(p) · GW(p)

If I replace "intelligence" with "skills", the point still stands.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-14T21:07:41.063Z · LW(p) · GW(p)

Rationality is a skill. Replacing "intelligence" with "skills" gives the following point:

Stop equating skills with LW-rationality.

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-14T05:06:43.080Z · LW(p) · GW(p)

www.reddit.com/r/SuicideWatch/ seems to be doing alright.

Sorry about the broken link 'n' stuff.

Replies from: MBlume
comment by MBlume · 2009-04-14T05:19:37.800Z · LW(p) · GW(p)

I can see your link in the sidebar, but not in the comment.

ETA: nm

comment by CannibalSmith · 2009-04-14T07:01:05.822Z · LW(p) · GW(p)

It's been shown that communities built on the small-world principle (social networks) scale well. I know some people well, and each of them knows some people well, and so on. When I need help or whatever, I tell my closest friends and ask them to pass it on. That's how chain letters work.

comment by lix · 2009-04-14T06:52:57.961Z · LW(p) · GW(p)

Here is one proposal:

http://blog.wired.com/business/2009/03/yes-we-plan-how.html

Their idea seems to be to combine a social networking site with facilities for coordinating action and a karma system. If it can be designed in such a way that signals are honest, karma is fair and the system becomes widely-used, I imagine it could be highly effective. On the other hand, Facebook and co. give free karma that's instantly visible to all your associates, so I fear it will be very difficult for the new site to invade the market.

Replies from: thales
comment by thales · 2009-04-14T22:11:06.194Z · LW(p) · GW(p)

I'm new here and didn't know if this has been a topic of discussion yet, but I found this story to be fascinating:

http://www.physorg.com/news158928941.html

In short, two psychologists modeled decision-making in a variation of the Prisoner's Dilemma with a "quantum" probability model. Their motivation was to reconcile results from actual studies (the participants consistently made apparently irrational choices) with what classical probability theory predicts a rational agent would choose.

Oh, and the quantum thing isn't new-age mysticism at all. It's simply a model wherein instead of a binary choice, a choice can sort of be 0 and 1 simultaneously. I don't claim to fully understand it, but it sounds awfully interesting.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-14T22:33:26.526Z · LW(p) · GW(p)

EDIT: I looked at the context, and I'm setting a bad example for thales. This is off-topic for the post, so it should have been put in Open Thread instead.

But EY already responded, so I'll leave my comment instead of deleting it.

the original motivation for developing quantum mechanics in physics was to explain findings that seemed paradoxical from a classical point of view. Possibly, quantum theory can better explain paradoxical findings in psychology, as well.

Same justification Penrose used for saying quantum mechanics is required to explain consciousness.

If you were asked to gamble in a game in which you had a 50/50 chance to win $200 or lose $100, would you play? In one study, participants were told that they had just played this game, and then were asked to choose whether to try the same gamble again. One-third of the participants were told that they had won the first game, one-third were told that they had lost the first game, and the remaining one-third did not know the outcome of their first game. Most of the participants in the first two scenarios chose to play again (69% and 59%, respectively), while most of the participants in the third scenario chose not to (only 36% played again). These results violate the "sure thing principle," which says that if you prefer choice A in two complementary known states (e.g., known winning and known losing), then you should also prefer choice A when the state is unknown.

This is very interesting. I would guess that this is linked to instinctive fight-or-flight decisions, and has to do with adrenaline, not rational decisions.

participants who were told that their partner had defected or cooperated on the first round usually chose to defect on the second round (84% and 66%, respectively). But participants who did not know their partner’s previous decision were more likely to cooperate than the others (only 55% defected).

I assume this is a 2-round PD? Otherwise, why 66% defecting in response to cooperation?

As the scientists showed, both classical and quantum probability models accurately predict an individual’s decisions when the opponent’s choice is known. However, when the opponent’s action is unknown, both models predict that the probability of defection is the average of the two known cases, which fails to explain empirical human behavior.

When the action is unknown, you don't assume 1-1 odds. But you certainly would predict that P(defection | unknown) is between P(defection | defection) and P(defection | cooperation).
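That constraint is easy to verify numerically. The following is a minimal sketch of the classical-averaging argument, using the defection rates quoted above (84%, 66%, 55%); the variable and function names are my own, not from the paper:

```python
# Classical mixture model for the quoted PD study. Whatever probability a
# subject assigns to the partner having defected, the law of total
# probability makes the unknown-state prediction a weighted average of the
# two known-state rates.
p_defect_given_defect = 0.84     # partner known to have defected
p_defect_given_cooperate = 0.66  # partner known to have cooperated
observed_unknown = 0.55          # empirical rate when the partner's move is unknown

def classical_prediction(p_partner_defects):
    """Mixture of the two known-state defection rates."""
    return (p_partner_defects * p_defect_given_defect
            + (1 - p_partner_defects) * p_defect_given_cooperate)

# Sweep every belief from 0% to 100%: the prediction never leaves the
# interval [0.66, 0.84], so no classical mixture reaches the observed 0.55.
lowest = min(classical_prediction(p / 100) for p in range(101))
print(lowest, observed_unknown)
```

Any "interference" term that pushes the unknown-state prediction below both known-state rates is, by construction, outside this classical mixture; that is the gap the quantum model is claimed to fill.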

To address this problem, the scientists added another component to both models, which they call cognitive dissonance, and can also be thought of as wishful thinking. The idea is that people tend to believe that their opponent will make the same choice that they do; if an individual chooses to cooperate, they tend to think that their opponent will cooperate, as well.

This isn't cognitive dissonance, but whatever.

In the quantum model, on the other hand, the addition of the cognitive dissonance component produces interference effects that cause the unknown probability to deviate from the average of the known probabilities.

Sounds to me - and this is based on more than what I quoted here - like they are simply positing that people think that the probability of their defecting is correlated with the probability of the other person defecting. Possibly they just don't understand probability theory, and think they're working outside it. I attended a lecture by Lotfi Zadeh, inventor of fuzzy logic, in which he made it appear (to me, not to him) that he invented fuzzy logic to implement parts of standard probability theory that he didn't understand.

But the math for that explanation doesn't work. You'd have to read their paper in Proceedings of the Royal Society B to figure out what they really mean.

Replies from: DanielLC, Eliezer_Yudkowsky
comment by DanielLC · 2013-05-01T06:27:58.968Z · LW(p) · GW(p)

The idea is that people tend to believe that their opponent will make the same choice that they do; if an individual chooses to cooperate, they tend to think that their opponent will cooperate, as well.

It sounds like they're describing Evidential Decision Theory.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-14T22:45:50.901Z · LW(p) · GW(p)

I've heard other Bayesians say they're not impressed with Zadeh. I know fuzzy logic primarily as a numerical model of a nonstandard deduction system, as opposed to anything that would be used in real life.

comment by Roko · 2009-04-14T16:27:36.266Z · LW(p) · GW(p)

I have invited 27 friends to the SIAI facebook cause.

Let's try to all invite roughly that many. Post your accomplishments in this thread.