Formalising continuous info cascades? [Info-cascade series]

post by Ben Pace (Benito), jacobjacob · 2019-03-13T10:55:46.133Z · LW · GW

This is a question post.


This is a question in the info-cascade question series [LW · GW]. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.


Mathematically formalising info-cascades would be great.

Fortunately, it's already been done in the simple case. See this excellent LW post [LW · GW] by Johnicholas, where he uses upvotes/downvotes as an example and shows that, after the second person has voted, all future voters add zero new information to the system. His explanation in terms of likelihood ratios is the most intuitive I've found.
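To make the mechanism concrete, here is a minimal simulation of that upvote/downvote setup (a sketch of the standard sequential-voting model of Bikhchandani, Hirshleifer and Welch, not code from the linked post; the function name and the convention of breaking ties in favour of one's own signal are my choices):

```python
import random

def simulate_vote_cascade(true_state, p, n_agents, seed=None):
    """Sequential up/down-voting with private signals.

    true_state: +1 (good post) or -1 (bad post).
    p: probability that a private signal matches the true state (p > 0.5).
    Returns (votes, n_informative): the vote sequence, and how many
    votes actually revealed a private signal.
    """
    rng = random.Random(seed)
    votes = []
    net = 0             # net count of signals revealed by votes so far
    n_informative = 0
    for _ in range(n_agents):
        signal = true_state if rng.random() < p else -true_state
        if abs(net) >= 2:
            # Public evidence outweighs any single private signal, so a
            # rational voter ignores their own signal and copies the
            # majority: this vote reveals nothing new.
            vote = 1 if net > 0 else -1
        else:
            # The voter's own signal still tips the balance (ties broken
            # in its favour), so the vote reveals the signal.
            vote = signal
            net += vote
            n_informative += 1
        votes.append(vote)
    return votes, n_informative

votes, informative = simulate_vote_cascade(true_state=1, p=0.7, n_agents=100, seed=0)
print(f"{informative} of {len(votes)} votes carried information")
```

Once net reaches ±2, which typically happens as soon as two votes in the same direction accumulate, every later voter copies the majority and net never changes again; that is the "zero new information" point the post describes.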

The Wikipedia entry on the subject is also quite good.

However, these two entries primarily explain how information cascades when people have to make a binary choice - good or bad, left or right, etc. The question I want to understand is how to think about the problem in the continuous case. Do the problems go away? Or, more likely, what variables determine the speed at which people update toward one extreme? And how far toward that extreme do people go before they realise their error?

Examples of continuous variables include project time estimates, stock prices, and probabilistic forecasts. I imagine significant quantitative work has already been done on the case of market bubbles; if anyone can write an answer summarising that work and explaining how to apply it to other domains like forecasting, that would be excellent.
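For intuition only (this is a toy, not a proposed formalisation or an answer to the question): one crude continuous analogue has each forecaster report a weighted average of the running public consensus and a noisy private signal. The function name, the averaging rule, and the parameters below are all my own illustrative choices:

```python
import random

def toy_continuous_herding(truth, w, n_agents, noise_sd=1.0, seed=None):
    """Each forecaster reports w * (current public mean)
    + (1 - w) * (their own noisy signal); the public mean is the
    running average of all reports so far.

    Returns the public mean after each report.
    """
    rng = random.Random(seed)
    public_mean = None
    history = []
    for t in range(1, n_agents + 1):
        signal = truth + rng.gauss(0, noise_sd)
        if public_mean is None:
            report = signal  # the first forecaster has only their signal
            public_mean = report
        else:
            report = w * public_mean + (1 - w) * signal
            public_mean = ((t - 1) * public_mean + report) / t
        history.append(public_mean)
    return history

# With w near 1, the noise in the first few reports dominates the
# consensus for a long time, even though every later forecaster holds
# independent evidence about the truth.
print(toy_continuous_herding(truth=10.0, w=0.9, n_agents=1000, seed=1)[-1])
```

In this toy the influence of early reports decays only polynomially (a quick calculation suggests the expected bias shrinks on the order of t^(w-1)), so the weight w placed on the consensus acts like the "speed" variable the question asks about. A real answer would derive such dynamics from Bayesian or boundedly rational updating rather than assuming an averaging rule.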

Answers

Comments

comment by DanielFilan · 2019-03-14T07:11:56.897Z · LW(p) · GW(p)

A relevant result is Aumann's agreement theorem, and its offshoots in which two Bayesians who repeat their probability judgements back and forth converge on a common belief. Note, though, that the belief they converge on isn't always the one they would have reached if they had both known all of each other's observations: suppose we each privately flip a coin and state our probabilities that we got the same result; we'll spend all day saying 50% without ever learning the answer. Nevertheless, you shouldn't expect probabilities to asymptote badly in expectation.
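To see concretely why the announcements carry no information in the coin example, here is a tiny enumeration (my own illustrative sketch, not from the original comment):

```python
from itertools import product

# Four equally likely worlds (a, b) of private fair-coin flips. Agent A
# observes a, agent B observes b; each announces P(a == b | own coin).
for a, b in product([0, 1], repeat=2):
    p_a = sum(other == a for other in [0, 1]) / 2  # b is uniform given a
    p_b = sum(other == b for other in [0, 1]) / 2  # a is uniform given b
    print(f"world (a={a}, b={b}): A announces {p_a}, B announces {p_b}")

# Every world prints 0.5 for both agents. Since the announcements are
# identical across worlds, hearing them rules nothing out: the agents
# "agree" from the start, yet neither ever learns whether a == b.
```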

This makes me think that you'll want to consider boundedly rational models where people can only recurse 3 times, or something. [ETA: or models where some participants in the discourse are adversarial, as in this paper].

comment by Shmi (shminux) · 2019-03-13T20:44:18.646Z · LW(p) · GW(p)

"The Wikipedia entry on the subject is also quite good."

The link is not to Wikipedia itself, but to some pushy and confusing wiki-reader app. Consider fixing.

On a different note, have you tried numerical simulations of the phenomenon you're describing? Multiple agents interacting under various conditions, watching whether an equilibrium emerges, and of what kind.
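A minimal version of such an experiment, reusing the simulate_vote_cascade sketch from earlier in the post (again purely illustrative; the cascade-detection window is an arbitrary choice), might look like this:

```python
from collections import Counter

def equilibrium_of(votes, window=10):
    """Classify the equilibrium by the last `window` votes."""
    tail = votes[-window:]
    if all(v == 1 for v in tail):
        return "up-cascade"
    if all(v == -1 for v in tail):
        return "down-cascade"
    return "no cascade"

# Run many independent populations and tally which equilibrium emerges.
tally = Counter(
    equilibrium_of(simulate_vote_cascade(true_state=1, p=0.7,
                                         n_agents=100, seed=s)[0])
    for s in range(1000)
)
print(tally)
```

With these numbers the standard model predicts a wrong-direction cascade a non-trivial fraction of the time (roughly (1-p)^2 / (1 - 2p(1-p)), about 16% for p = 0.7), so the population usually locks into the correct equilibrium but sometimes locks into the wrong one.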

Replies from: jacobjacob
comment by jacobjacob · 2019-08-04T12:50:56.981Z · LW(p) · GW(p)

We haven't, but there's some interesting stuff in the economics literature [LW(p) · GW(p)].

comment by Alexei · 2019-03-14T05:16:15.467Z · LW(p) · GW(p)

Note that the post you linked by Johnicholas contains a mistake that the author admits invalidates his point.

Replies from: DanielFilan
comment by DanielFilan · 2019-03-14T07:04:05.480Z · LW(p) · GW(p)

I think that the rewrite mentioned was actually made, and the post as it stands is correct.

(Although in this case it's weird to call it an information cascade - in the situation described in the post, people don't have any reason to think that a +50 karma post is any better than a +10 karma post, so information isn't really cascading, just karma.)