When does adding more people reliably make a system better?

post by jacobjacob · 2019-07-19T04:21:06.287Z · LW · GW · 20 comments

This is a question post.


Prediction markets have a remarkable property. They reward correct contrarianism. They incentivise people to disagree with the majority consensus, and be right. If you add more traders to a market, in expectation the price will become more accurate.
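
Here is a minimal sketch of why that might hold, under the (contestable) assumption that traders' estimates are independent and roughly unbiased. It is a toy model, not a claim about real markets, and all numbers are invented for illustration:

```python
import random
import statistics

def market_price(n_traders, true_prob, noise=0.2, rng=random):
    """Toy model: each trader holds an independent, noisy estimate of the
    true probability; the market price is just the average estimate."""
    estimates = [min(1.0, max(0.0, rng.gauss(true_prob, noise)))
                 for _ in range(n_traders)]
    return statistics.mean(estimates)

random.seed(0)
true_prob = 0.7
for n in (5, 50, 500):
    errors = [abs(market_price(n, true_prob) - true_prob) for _ in range(2000)]
    print(f"{n:3d} traders: mean pricing error = {statistics.mean(errors):.3f}")
```

Real traders are of course neither independent nor unbiased, which is where the fish and sharks come in.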

More traders means both more fish and more sharks.

(The movie "The Big Short" might be a very sad portrait of the global financial system. But it's still the case that a system in a bad equilibrium with deeply immoral consequences rewarded the outcasts who pointed out those consequences with billions of dollars. Even though socially, no one bothered listening to them, including the US Government who ignored requests by one of the fund managers to share his expertise about the events after the crash.)

Lots of things we care about don't have this property.

In prediction markets the vetting process is really cheap. You might have to do some KYC, but mostly adding new people is great. This seems like a really important property for a system to have, and something we could learn from to build other such systems.

What other systems have this property?

Answers

answer by Matt Goldenberg (mr-hire) · 2019-07-19T20:30:21.574Z · LW(p) · GW(p)

Adding more people (or more chaos in general) works in systems that are anti-fragile [LW(p) · GW(p)], that is, they're set up to actually gain from disorder. This may seem tautological, but in the linked post I give 6 principles that make a system antifragile:

  • Optionality: You tend to choose options that give you more options in the future.
  • Hormesis: When bad outcomes befall you, you work to be more robust to that class of outcomes in the future.
  • Evolution: You're constantly creating multiple variations, and keeping those that survive over time.
  • The Barbell Strategy: You split your activities between those that are very safe, with low downside, and those that are very risky, with high upside.
  • Via Negativa: You work to remove sources of downside risk before you work to increase upside risk.
  • Skin in the Game: People and organizations are exposed to the downside risk that they create.

To the extent that a system follows the six principles above, adding more people to the system will tend to make it do better. Capitalism, for instance, follows the evolution and optionality principles, but as the economy becomes increasingly centralized, it tends to fail at the others to varying degrees, as people have pointed out.

Prediction markets are antifragile only when they have liquidity, which provides them much-needed optionality. When they are liquid, they use the evolution and skin-in-the-game principles so that, over time, they end up with traders who follow the other strategies.

20 comments


comment by quanticle · 2019-07-19T05:30:47.053Z · LW(p) · GW(p)

But it’s still the case that a system in a bad equilibrium with deeply immoral consequences rewarded the outcasts who pointed out those consequences with billions of dollars.

That's not exactly true. There were outcasts who correctly pointed out that the housing market was deeply troubled in e.g. 2004 and 2005. Did the market reward them? No. They went bust, as the market proved capable of staying irrational longer than they were capable of remaining solvent. Even in The Big Short, Michael Burry very nearly did go bust, and had to resort to exercising fail-safe clauses in his investment contracts to keep from going under. The exercise of these clauses, and the resulting rancor it caused with his investors, meant that even though he "won" and got a fair chunk of a billion-dollar payout, he was basically frozen out of investing afterwards.

Replies from: Douglas_Knight, Benito
comment by Douglas_Knight · 2019-07-19T16:59:26.773Z · LW(p) · GW(p)

Who were the other people who tried to short housing in 2004-2005? Does Michael Lewis talk about them?

comment by Ben Pace (Benito) · 2019-07-19T08:30:02.517Z · LW(p) · GW(p)

Do you know why? I’d expect that investors who were angry for a few years would come round after he made them billions of dollars.

Replies from: quanticle
comment by quanticle · 2019-07-19T13:00:44.332Z · LW(p) · GW(p)

The problem was how he made those billions of dollars. Burry's initial investment thesis was stocks. When he pitched his fund to investors, it was a stock fund. Then, later, as Burry found that there was no way in the stock market to short the housing market, he branched out into the sorts of exotic collateralized debt obligations that would make him his profits.

From the perspective of his investors (a perspective I personally agree with), Burry was a loose cannon. The only reason he made a bunch of money, instead of going down with every penny that his investors entrusted him with, is that he managed to get lucky. Ask yourself: what would have happened to Burry's fund if the housing market hadn't cratered in 2007-2008? What if the housing market rally had gone on for another five or six years?

Replies from: Benito
comment by Ben Pace (Benito) · 2019-07-19T13:29:10.827Z · LW(p) · GW(p)

Ah, so they're very grateful that he made billions of dollars with their money, but it was through a process that had a massive amount of risk, and they just weren't interested in risking that much again in the future.

I still think that they'd be interested in risking a smaller fraction of their money (e.g. give him 10-20% of what they gave last time and then invest the rest elsewhere). I don't get the 'frozen out' part.

Replies from: quanticle
comment by quanticle · 2019-07-19T14:04:00.616Z · LW(p) · GW(p)

It's not so much that the process had a massive amount of risk as that it implemented a Taleb-style anti-fragile strategy. It lost money by dribs and drabs every year when times were good, but when times turned bad, it made a massive amount of money. According to The Big Short, Burry was paying out premiums on CDO insurance every year while times were good, and got the insurance payout when the market turned and things went bad. So, for three or four years, he was invested in these really weird securities, securities that his investors hadn't signed up for, securities that were losing money, while they waited for a payout.

As for why they wouldn't be interested in risking a smaller fraction of their money: the strategy only works if you have enough buffer to wait out the good years and capitalize on the inevitable downturn when it happens. We've seen this with Taleb himself. While he did well in the dotcom crash and the global financial crisis, he's had basically negative returns since.
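
To make the buffer point concrete, here is a crude sketch of the capital dynamics being described. The premium and payout figures are invented purely for illustration, not taken from the book:

```python
# Hypothetical numbers, purely for illustration: a fund that pays
# CDS-style premiums every year, then collects a large payout when
# (and only if) the crash finally arrives.
initial_capital = 100.0   # arbitrary units
annual_premium = 8.0      # yearly cost of carrying the short (invented)
crash_payout = 400.0      # payout when the crash hits (invented)
years_until_crash = 5     # the crash arrives "late" (invented)

capital = initial_capital
for year in range(1, years_until_crash + 1):
    capital -= annual_premium
    print(f"Year {year}: capital = {capital:.0f}")
    if capital <= 0:
        print("Out of buffer before the payout: right thesis, dead fund.")
        break
else:
    capital += crash_payout
    print(f"Crash year: capital = {capital:.0f}")
```

With a smaller buffer (say 30 units instead of 100), the same thesis runs out of capital in year 4, which is roughly the argument above for why a much smaller allocation wouldn't have worked.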

Replies from: jacobjacob, Hazard, Benito
comment by jacobjacob · 2019-07-19T18:49:12.538Z · LW(p) · GW(p)

Don't think I disagree; I've made a very similar point to yours in a previous LW thread here [LW(p) · GW(p)].

Also, my point is not that the gains from being a correct contrarian in the financial market always outweigh the social punishment for contrarianism, or that you can always trade between the two currencies. But despite being frozen out of investing, Michael Burry is still a multi-millionaire. That is an interesting observation. It's related to why I think Robin Hanson is excited about prediction markets -- they present a remarkable degree of robustness to social status games and manipulation.

___

Also, I'm very curious about the outcome of Taleb's investments (some people say they're going awfully, which is why he's selling books...), so please share any links.

comment by Hazard · 2019-07-19T17:22:55.290Z · LW(p) · GW(p)

Also found this chain interesting. Thanks!

comment by Ben Pace (Benito) · 2019-07-19T14:15:46.046Z · LW(p) · GW(p)

Thanks, this was really interesting.

comment by quanticle · 2019-07-19T13:09:27.811Z · LW(p) · GW(p)

Many companies have their culture decline as they hire more, and have to spend an incredible amount of resources simply to prevent this (which is far from getting better as more people join). (E.g. big tech companies can probably have >=5 candidates spend >=10 hours in interviews for a single position. And that’s not counting the probably >=50 candidates for that position spending >=1h.)

Is the super-elaborate hiring game really necessary, though? I've worked at Amazon and Microsoft. I've also worked at other firms which had much looser hiring practices. In my experience, the elaborate hiring game that these tech companies play is more about signalling to candidates, "We are a Very Serious Technology Company who use Only The Latest Modern Hiring Practices™." It seems quite possible to me that these hiring practices could be considerably streamlined without actually affecting the quality of the candidates that got through. But, if they did that, then the hiring process would lose some of its signalling value, and the company wouldn't be seen as a Super Prestigious Institution™ which accepts Only The Best™.

tl;dr: In my view, the FAAMNG hiring process works in the same way as the Harvard application process. It's as much about advertising and signalling to candidates that the company is an elite institution as it is about actually hiring elite candidates.

Replies from: jmh
comment by jmh · 2019-07-19T13:41:42.468Z · LW(p) · GW(p)

From a slightly different slant: where I work, the executives decided, maybe 5 years ago, that they would start hiring only from the top 10 schools, and only the top candidates from those schools. When we got a new CEO, that subject came up during one of the company "townhall" meetings.

The new CEO noted a discussion in the board room related to that. The bottom line was that, for the most part, none of the top people had degrees from such schools. One might add that the company had grown to its dominant market position with a workforce that did not reflect such a profile either.

It would seem the approach was scrapped.

I think the underlying approaches are actually the same -- the desire for a "simple" (at least in the sense of a clearly defined process or heuristic) solution to a rather difficult problem. How does one recognize just how much value [one] will add to future activities that are by nature not really driven by any one individual's abilities or direct contribution?

comment by cousin_it · 2019-07-19T09:02:47.011Z · LW(p) · GW(p)

Are you sure the difference you've noticed actually exists? The financial system crashed and hurt a lot of people, but rewarded a few people greatly. The same thing can happen in companies or communities - they can fail overall, but reward a few people greatly.

Replies from: jacobjacob
comment by jacobjacob · 2019-07-19T18:52:01.302Z · LW(p) · GW(p)

My claim is definitely not about the global financial system. It's about single financial markets becoming more accurate at tracking the metrics they're measuring as more people join, by default.

If I became convinced that the proper analogy is that companies by default become better at optimising profit as more employees join, I'd change my mind about the importance of prediction/financial markets. But I'd bet strongly that that is not the case.

comment by quanticle · 2019-07-19T05:40:04.950Z · LW(p) · GW(p)

Online forums usually decline with growing user numbers (this happened to Reddit, HackerNews, as well as LessWrong 1.0)

Reddit and HackerNews, sure, but was the decline of LessWrong really due to growing user numbers? From what I've seen and read of LessWrong history, the decline was due to reductions in post volume, rather than post quality, which seems to me that it was a symptom of stagnating or shrinking active user numbers. Simply put, fewer people posting → fewer reasons to check the site → fewer comments → stagnation and death.

Replies from: clone of saturn, jacobjacob
comment by clone of saturn · 2019-07-19T18:43:23.234Z · LW(p) · GW(p)

LW 1.0 had an additional problem: no one wanted to risk writing a worse-than-average post in Main, leading to ever-increasing standards and fewer posts. But I believe user numbers were still increasing, and the quality of Discussion posts decreasing, during that process.

comment by jacobjacob · 2019-07-19T18:56:11.775Z · LW(p) · GW(p)

I'm not confident that was actually the cause of the decline, and I shouldn't have sounded so confident in my post.

Though your explanation is confusing to me, because it doesn't explain the data point that LW ended up having a lot of bad content and discussion, rather than no content and discussion.

Anyhow, I believe this discussion should be had in the meta section of the site, and that we should focus more on the object level of the question here.

Replies from: quanticle
comment by quanticle · 2019-07-19T23:53:31.132Z · LW(p) · GW(p)

I endorse clone of saturn's reply elsewhere in the thread. I didn't often go into the discussion section, so I thought that there were fewer active users, when in reality it could very well have been fewer active users posting in the Main section.

comment by Dagon · 2019-07-19T20:53:46.222Z · LW(p) · GW(p)

I'm not sure we have much evidence on whether actual prediction markets reliably benefit from an influx of new participants. I suspect it's as complicated as other endeavors: it'll depend on the selection and expectations of those new people, and how much training and/or accommodation is needed for them.

In my company, we often talk about "maximum team onboarding rate" in terms of how quickly we can bring new team members up to productivity and retain our team goals and culture. We do pretty reliably grow in scope, but not unboundedly and not without quite a bit of care in terms of selection and grooming of new members.

Replies from: quanticle
comment by quanticle · 2019-07-20T00:01:32.540Z · LW(p) · GW(p)

I don't know if this counts as evidence, per se, but DeLong, Shleifer, Summers and Waldmann had a fairly seminal paper on this in 1987: The Economic Consequences of Noise Traders. In it, they explain how the addition of "noise traders" (i.e. traders who trade randomly) can make financial markets less efficient. Conventional economic theory, at the time, held that the presence of noise traders didn't reduce the efficiency of the market, because rational investors would be able to profit off the noise traders and prices would still converge to their true value.

In the paper, DeLong et al. demonstrate that it's possible for noise traders to earn higher returns than rational investors and, in the process, significantly affect asset prices. Key to their insight is that, in the real world, investors have limited amounts of capital, and the presence of noise traders significantly raises the amount of risk that rational investors have to take on in order to invest in markets with large numbers of noise traders. This risk causes potentially wide, but not permanent, divergences between asset prices and fundamental values, which can serve to drive rational investors from the market.
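
As a rough illustration of that mechanism (this is not the DeLong et al. model itself, just a toy sketch of limited arbitrage capital facing random demand; every parameter is invented):

```python
import random

def simulate(n_noise, n_rational, fundamental=100.0, steps=250,
             noise_size=1.0, position_limit=3.0, impact=0.02, seed=0):
    """Toy market: noise traders submit random demand each step; rational
    traders push the price back toward a known fundamental value, but each
    can hold at most `position_limit` units (their limited capital).
    Returns the average absolute gap between price and fundamental."""
    rng = random.Random(seed)
    price = fundamental
    deviations = []
    for _ in range(steps):
        noise_demand = sum(rng.gauss(0, noise_size) for _ in range(n_noise))
        correction = max(-position_limit, min(position_limit, fundamental - price))
        rational_demand = n_rational * correction
        price += impact * (noise_demand + rational_demand)
        deviations.append(abs(price - fundamental))
    return sum(deviations) / len(deviations)

for n_noise in (10, 100, 1000):
    gap = simulate(n_noise, n_rational=10)
    print(f"{n_noise:5d} noise traders: mean |price - fundamental| = {gap:.2f}")
```

In this toy setup the average mispricing grows with the number of noise traders, because the capped rational demand can no longer absorb the random order flow. That is the flavor of the limited-arbitrage argument, not a reproduction of the paper's result.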

I don't see any reason to believe that prediction markets would behave differently from the stock markets that DeLong et al.'s paper targeted. My hypothesis would be that prediction markets have shown increasing accuracy with increasing participation so far, but that this relationship will break down once the relatively limited pool of people who are willing to think before they trade is exhausted and further increases in prediction market participation draw from a pool of noise traders.

comment by Slider · 2019-07-19T21:43:26.910Z · LW(p) · GW(p)

Things can have minimum sizes and maximum sizes. You would, for example, think that adding hydrogen to a celestial body should make it glow brighter, and this remains true for a long time. However, after some point you have a black hole on your hands, which is very dim.

Prediction markets might be remarkably scalable, but I think they also might have maximum limits, even if those are not currently relevant or obvious. After some point, instead of getting the prediction right, it's more economical to try to bait everyone else into guessing wrong. For example, if you had a mass media channel that broadcast to 3 star systems full of colonized planets, saying anything that would push people in the wrong direction might be a way to make money. In a tamer analogy, sending "Nigerian prince" letters is remarkably economical with email, even if individual recipients are unlikely to react to them in any way.

In order for a prediction market to work, you can not have "epistemological market power": whatever you communicate to other participants can't drive a large fraction of how they spend their money. In the limit of trying to grasp at the tiniest scraps of information, there might emerge a significant population of "sheep" who are economically suggestible, even if they themselves think they are sharks. Even in a big market you are still only one brain with no better processing power, but your words have a larger surface area with which to influence others, shifting the ratio of figuring things out versus fabricating social reality.