How can we respond to info-cascades? [Info-cascade series]

post by jacobjacob, Ben Pace (Benito) · 2019-03-13T10:55:25.685Z · LW · GW · 2 comments

This is a question post.

This is a question in the info-cascade question series [LW · GW]. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.

___

In my (Jacob's) work at Metaculus AI, I'm trying to build a centralised space for finding both forecasts and the reasoning underlying those forecasts. Having such a space might serve as a simple way for the AI community to avoid runaway info-cascades.

However, we are also concerned about situations where new forecasters overweight the current crowd opinion relative to the underlying evidence, and we see this as a major risk to the trustworthiness of forecasts for those working in AI safety and policy.

With this question, I am interested in previous attempts to tackle this problem, and in how successful they have been.

Answers

answer by Davidmanheim · 2019-03-17T10:43:09.219Z · LW(p) · GW(p)

The Systems Dynamics "Beer Game" seems like a useful example of how something like (but not the same as) an info-cascade happens.

https://en.wikipedia.org/wiki/Beer_distribution_game - "The beer distribution game (also known as the beer game) is an experiential learning business simulation game created by a group of professors at MIT Sloan School of Management in early 1960s to demonstrate a number of key principles of supply chain management. The game is played by teams of at least four players, often in heated competition, and takes at least one hour to complete... The purpose of the game is to understand the distribution side dynamics of a multi-echelon supply chain used to distribute a single item, in this case, cases of beer."

Basically, passing information through a system with delays means everyone screws up wildly, as the system responds in a nonlinear fashion to a linear change. In that setting, Forrester and others suggest that changing viewpoints and using systems thinking are critical for preventing such cascades, and this seems to have worked in some cases.

(Please respond if you'd like more discussion.)

comment by jacobjacob · 2019-08-04T13:37:10.822Z · LW(p) · GW(p)

That's a really interesting effect, thanks for linking. I have two questions:

1) I'm confused about what mechanism produces the Bullwhip effect.

One video suggested the following: as demand rapidly increases during time_step_1, suppliers aren't able to fully adapt and meet it, which causes an even larger shortage during time_step_2 and hence even larger demand; and somehow these effects compound down the supply chain.

Another mechanism is just that the demand signal is noisy, and so its variance will increase as one moves down the supply chain. But I'm confused why this causes everything to blow up (as opposed to, say, different sub-suppliers making errors in different directions, which sorta cancel out, even though the larger variance at the end-supplier causes some volatility. That is, it's just as likely that they underestimate demand as it is that they overestimate it.)

2)

changing viewpoints and using systems thinking

What exactly does this imply that I, as a middle manager somewhere in the supply chain, observing a noisy demand signal, should do? How does this concretely change the order I place with my supplier, in a way that improves things?

Replies from: Davidmanheim
comment by Davidmanheim · 2019-08-08T15:04:09.761Z · LW(p) · GW(p)

1) It's neither noise nor rapid increase - it's delayed feedback. Control theorists in engineering have this as a clear, basic result: delayed feedback is badly destabilizing in various ways. There are entire books on how to handle it well - https://books.google.ch/books?id=Cy_wCAAAQBAJ&pg=PR9&lpg=PR9 - but doing it without these more sophisticated techniques goes badly.

2) You either hire a control theorist, or (more practically) you bypass the existing feedback mechanism and instead get people on the phone to talk through and understand what everyone actually needs, rather than relying on their delayed feedback in the form of numeric orders.
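
To make the delayed-feedback point concrete, here is a minimal Python sketch of a supply chain in that spirit. It is not the official Beer Game: the four-echelon structure, the order-up-to-12 policy, the two-step delay, and the `track_pipeline` option are all illustrative assumptions. Each stage orders to cover the demand it just saw plus a correction toward a target stock level, and its order arrives after a delay (the upstream stage is assumed to ship in full).

```python
def simulate(levels=4, steps=40, delay=2, track_pipeline=False):
    """Toy bullwhip-effect simulation (illustrative, not the Beer Game rules).

    Each echelon orders enough to cover the demand it just saw plus a
    correction toward a target inventory of 12. Orders arrive `delay`
    steps later, and the upstream echelon is assumed to ship in full.
    With naive ordering, a one-time step in end demand is amplified into
    much larger swings further up the chain; also counting stock already
    on order (track_pipeline=True) reduces the amplification.
    """
    inventory = [12] * levels
    pipeline = [[4] * delay for _ in range(levels)]  # orders still in transit
    order_history = [[] for _ in range(levels)]

    for t in range(steps):
        demand = 4 if t < 5 else 8            # end-customer demand steps up once
        for i in range(levels):
            inventory[i] += pipeline[i].pop(0)          # delayed shipment arrives
            inventory[i] -= min(inventory[i], demand)   # ship what we can downstream
            on_order = sum(pipeline[i]) if track_pipeline else 0
            order = max(0, demand + (12 - inventory[i] - on_order))
            order_history[i].append(order)
            pipeline[i].append(order)         # will arrive `delay` steps from now
            demand = order                    # the next echelon up sees this as demand
    return order_history

for label, flag in [("naive ordering", False), ("tracking stock on order", True)]:
    peaks = [max(history) for history in simulate(track_pipeline=flag)]
    print(f"{label}: peak order per echelon (retailer -> factory) = {peaks}")
```

Running it prints the largest order each echelon ever places: under naive ordering the peaks grow sharply from retailer to factory, while tracking stock on order reduces (though, in this toy, does not eliminate) the amplification and settles much faster.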

comment by jacobjacob · 2019-11-01T14:54:59.451Z · LW(p) · GW(p)

We (jacobjacob and Benito) decided to award $50 (out of the total bounty of $800) to this answer.

It offers a practical example of a cascade-like phenomenon, which is both generally applicable and has real economic consequences. Also, the fact that it comes with a game for understanding and practicing how to respond is rare and potentially quite valuable (I'm of the opinion that deliberate practice is currently a neglected virtue in the rationality/EA spheres).

answer by rossry · 2019-03-17T01:34:53.337Z · LW(p) · GW(p)

Abstract: If information cascades (both upwards and downwards) are viewed as a problem of incentives, better incentive design holds some promise. This academic paper suggests a model in which making truth-finding rewards contingent on reaching a certain number of votes prevents down-cascades, and in which an informed (self-interested) choice of payout odds and threshold can also prevent up-cascades in the limit of a large population of predictors.

1) cf. avturchin [LW(p) · GW(p)] from the question about distribution across fields, pointing out that up-cascades and down-cascades are both relevant concerns, in many contexts.

2) Consider information cascades as related to a problem of incentives -- in the comments of the Johnichols post [LW · GW] referenced in the formalization question, multiple commenters point out that the model fails if agents seek to express their marginal opinion rather than their true (posterior) belief. But incentives to be right do need to be built into any system you're trying to pump energy into, so the question remains whether a different incentive structure could do better while still encouraging truth-finding.

3) Up-Cascaded Wisdom of the Crowd (Cong and Xiao, working paper) considers the information-aggregation problem in terms of incentives, and considers the incentives at play in an all-or-nothing crowdfunding model, like venture capital or Kickstarter (assuming that a 'no' vote is irrevocable like a 'yes' vote is) -- 'yes' voters win if there is a critical mass of other 'yes' voters and the proposition resolves to 'yes'; they lose if there is a critical mass and the proposition resolves to 'no'; they have zero gain or loss if 'yes' doesn't reach a critical mass; 'no' voters are merely abstaining from voting 'yes'.

Their main result is that if the payment of incentives is conditioned on the proposition gaining a fixed number of 'yes' votes, a population of symmetric, common-prior/private-info agents will avoid down-cascades, as a single 'yes' vote that breaks a down-cascade will not be penalized for being wrong unless some later agent intentionally votes 'yes' to put the vote over the 'yes' threshold. (An agent i with negative private info should still vote 'no', because if a later agent i' puts the vote over the 'yes' threshold based in part on i's false vote, then i expects to lose on the truth-evaluation, since they've backed 'yes' but believe 'no'.)

A further result from the same paper is that if the actor posing the proposition can set the payout odds and the threshold in response to the common prior and known info-distribution, then a proposition-poser attempting to minimize down-cascades (perhaps because they will cast the first 'yes' vote, and so can only hope to win if the vote resolves to 'yes') will be incentivized to set odds and a threshold that coincidentally minimize the chance of up-cascades. In the large-population limit, the number of cascades under such an incentive design goes to 0.

4) I suspect (but will not here prove) that augmenting Cong and Xiao's all-or-nothing "crowdfunding for 'yes'" design with a parallel "crowdfunding for 'no'" design -- i.e., 'no' voters win (resp. lose) iff there is a critical mass of 'no' voters and the proposition resolves 'no' (resp. 'yes') -- can further strengthen the defenses against up-cascades (by making it possible to cast a more informed 'no' vote conditioned on a later, more-informed agent deciding to put 'no' over the threshold).
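
As a baseline for what these designs are trying to prevent, here is a small Python sketch of the textbook common-prior/private-signal herding model (my own toy, not Cong and Xiao's crowdfunding setup; the agent count, the 0.6 signal accuracy, and the tie-breaking rule are illustrative assumptions). Agents vote in sequence, each weighing a naive public tally of earlier votes against their single private signal, so the tally keeps counting votes even after they have stopped being informative. Comparing its error rate with a simple majority over the raw private signals shows how much accuracy the cascade throws away, which is the gap a mechanism that restores informative voting is trying to close.

```python
import random

def one_round(rng, n_agents=25, accuracy=0.6):
    """One sequential vote with a 0.5 common prior and conditionally
    independent private signals that are correct with prob. `accuracy`."""
    good = rng.random() < 0.5                                    # true state
    signals = [(rng.random() < accuracy) == good for _ in range(n_agents)]
    margin = 0                                                   # public yes-minus-no tally
    for s in signals:
        # Naively treat every past vote as one unit of evidence (the cascade
        # pathology), add the private signal, and break exact ties by
        # following the private signal.
        evidence = margin + (1 if s else -1)
        vote_yes = evidence > 0 or (evidence == 0 and s)
        margin += 1 if vote_yes else -1
    herded_decision = margin > 0
    # Benchmark: what a mechanism that elicited everyone's raw signal could do.
    signals_decision = 2 * sum(signals) > n_agents
    return good, herded_decision, signals_decision

rng = random.Random(0)
trials = 20_000
herd_errors = signal_errors = 0
for _ in range(trials):
    good, herded, from_signals = one_round(rng)
    herd_errors += herded != good
    signal_errors += from_signals != good
print(f"error rate when agents herd on public votes: {herd_errors / trials:.3f}")
print(f"error rate when raw signals are aggregated:  {signal_errors / trials:.3f}")
```

In this toy the herding error rate comes out noticeably higher (roughly double, with these parameters) than the error rate from aggregating the raw signals, and both up- and down-cascades contribute to it.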

comment by rossry · 2019-03-17T01:53:12.102Z · LW(p) · GW(p)

A related idea in non-punishment of "wrong" reports that have insufficient support (again in the common-prior/private-info setting) comes from this paper [pdf] (presented at the same conference), which suggests collecting reports from all agents and assigning rewards/punishments by assuming that agents' reports represent their private signal, computing their posterior, and scoring this assumed posterior. Under the model assumptions, this makes it an optimal strategy for agents to truly reveal their private signal to the mechanism, while allowing the mechanism to collect non-cascaded base data to make a decision.
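
As a toy numerical check of that incentive claim (my own sketch under the standard common-prior/binary-signal assumptions, not the paper's full mechanism), one can compare an agent's expected log score from reporting their true signal versus a flipped one, when the mechanism treats the report as the private signal, computes the implied posterior, and scores that posterior against the outcome:

```python
import math

def posterior(prior, accuracy, signal):
    """Bayes update of a binary prior on one binary signal of given accuracy."""
    like_good = accuracy if signal == 1 else 1 - accuracy
    like_bad = 1 - accuracy if signal == 1 else accuracy
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

def expected_log_score(prior, accuracy, true_signal, reported_signal):
    """Expected log score when the mechanism scores the posterior implied by
    the report, with the expectation taken under the agent's own belief
    (the posterior implied by their TRUE signal)."""
    belief = posterior(prior, accuracy, true_signal)
    scored = posterior(prior, accuracy, reported_signal)
    return belief * math.log(scored) + (1 - belief) * math.log(1 - scored)

prior, accuracy = 0.5, 0.7   # illustrative numbers, not taken from the paper
for sig in (0, 1):
    honest = expected_log_score(prior, accuracy, sig, sig)
    flipped = expected_log_score(prior, accuracy, sig, 1 - sig)
    print(f"true signal={sig}: E[log score] honest={honest:.3f}, misreport={flipped:.3f}")
```

With these numbers the honest report scores about -0.61 in expectation versus -0.95 for the misreport, for either signal value, so (under these assumptions) an agent does best by revealing their actual signal; and because the mechanism scores the moderate implied posterior rather than the raw report, an honest report that turns out 'wrong' is only mildly penalized.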

In general, I feel like the academic literature on market design / mechanism design has a lot to say about questions of this flavor.

comment by jacobjacob · 2019-08-04T13:41:04.745Z · LW(p) · GW(p)

A further result from the same paper is that if the actor posing the proposition can set the payout odds and the threshold in response to the common prior and known info-distribution, ...

This is a really cool result, but I'm confused about why it holds. Is the idea something like: the actor themself is uncertain about the value of the project, and the kickstarter also helps them find out whether it's worth doing, so up-cascades are costly in expectation (they might land themselves having to run some awful project)?

But if that is the mechanism, it seems to apply to any rational actor using a kickstarter, as opposed to having anything to do with minimizing down-cascades?

Replies from: rossry
comment by rossry · 2019-08-05T11:08:29.548Z · LW(p) · GW(p)

Is the idea something like: the actor themself is uncertain about the value of the project, and the kickstarter also helps them find out whether it's worth doing

Nope! The paper's model for this result assumes that the value conditioned on success is known to the proposer, so that the proposer's only incentive is to maximize their own profits by setting the payout odds and threshold. The (non-obvious to me) result that the paper proves is that this coincidentally minimizes the probability of up-cascades:

A higher decision threshold excludes more DOWN cascades while it is less likely to be reached. We show that the concern about potential DOWN cascades dominates the concern about likelihood to reach the target. To maximize the proceeds, the proponent endogenously sets the target to the smallest number that in equilibrium completely excludes DOWN cascades in the same spirit as Welch (1992), with the caveat that the proponent utilizes both price and target to achieve this. Consequently, with endogenous issuance pricing, there is no DOWN cascade which stops private information aggregation, and good projects are always financed while bad projects are never financed, when the crowd base N becomes very large. In other words, financing efficiency and information aggregation efficiency approaches the first best as N grows bigger, despite the presence of information cascades.

comment by jacobjacob · 2019-11-01T15:01:44.580Z · LW(p) · GW(p)

We (jacobjacob and Ben Pace) decided to award $250 (out of the total bounty of $800) to this answer [LW(p) · GW(p)]. It does several important things.

  • It references existing (and novel) work in economics and mechanism design, which might have been time-consuming to discover otherwise
  • It distills a technical paper, which is a valuable service that is usually underfunded (academic institutions comparatively incentivise novel and surprising insights)
  • The insights provided are quite action-guiding, and caused me (jacobjacob) to have ideas for how one can experiment with new kinds of forecasting tournaments that use a threshold-mechanism to change participant incentives

I'll PM you for details about payment.

answer by rossry · 2019-03-17T01:30:21.056Z · LW(p) · GW(p)

[this answer was duplicated when I mistakenly copied my comment into an answer and then moved the comment to an answer.]

2 comments

comment by Rohin Shah (rohinmshah) · 2019-03-13T16:58:52.092Z · LW(p) · GW(p)

Pretty sure you know this already, and it's not exactly infrastructure, but it seems like if you have a nice formal process for eliciting people's beliefs, then you want to explicitly ask them for their impressions, not credences [EA(p) · GW(p)] (or alternatively for both).

comment by jacobjacob · 2019-11-01T14:53:37.925Z · LW(p) · GW(p)

.