Better name for "Heavy-tailedness of the world?"
post by Daniel Kokotajlo (daniel-kokotajlo) · 2020-04-17T20:50:06.407Z · LW · GW · 6 comments
This is a question post.
There is an important variable (or cluster of similar correlated variables) that I need a better name for. I would also appreciate feedback on whether or not this variable is even a thing, and if so, how I should characterize it. I have two possible names and two attempts at explaining it so far.
Name 1: "Heavy-tailedness of the world."
Name 2: "Great Man Theory vs. Psychohistory"
Attempt 1: Sometimes history hinges on the deliberate actions of small groups, or even individuals. Other times the course of history cannot be altered by anything any small group might do. Relatedly, sometimes the potential impact of an individual or group follows a heavy-tailed distribution, and other times it doesn't.
Some examples of things which could make the world heavier-tailed in this sense:
- Currently there are some domains in which humans are similar in effectiveness (e.g. manual labor, voting) and others in which the distribution is heavy-tailed, such that most of the total progress/influence comes from a small fraction of individuals (e.g. theoretical math, donating to political parties); the sketch after this list makes the contrast concrete. Perhaps in the future history will hinge more on what happens in the second sort of domain.
- Transformative technologies, such that when, where, and how they appear matters a lot.
- Such technologies being unknown to most people, governments, and corporations, such that competition over them is limited to the few who foresee their importance.
- Wealth inequality and political inequality concentrating influence in fewer people.
- Technologies such as brain-machine interfaces, genetic engineering, and wireheading increasing inequality in effectiveness and influence.
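To make the contrast concrete, here is a minimal numerical sketch of what "heavy-tailed" means for the share of total output coming from the top few individuals. The particular distributions and parameters below are arbitrary illustrative assumptions, not claims about any real domain:

```python
import numpy as np

# A minimal sketch with arbitrary, illustrative parameters: compare how much of
# the total "output" the top 1% of individuals account for under a thin-tailed
# distribution versus two heavy-tailed ones.
rng = np.random.default_rng(0)
n = 100_000

samples = {
    "normal (thin-tailed)":     np.abs(rng.normal(loc=10, scale=2, size=n)),
    "log-normal (heavy-tailed)": rng.lognormal(mean=0.0, sigma=2.0, size=n),
    "Pareto a=1.2 (heavy-tailed)": rng.pareto(a=1.2, size=n) + 1,
}

for name, x in samples.items():
    top_1pct = np.sort(x)[-n // 100:]   # the largest 1% of draws
    share = top_1pct.sum() / x.sum()    # their share of the total
    print(f"{name:30s} top 1% holds {share:6.1%} of the total")
```

On a typical run the thin-tailed case gives the top 1% only slightly more than 1% of the total, while in the heavy-tailed cases the top 1% accounts for a large fraction of it.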
Attempt 2: Consider these three fictional worlds; I claim they form a spectrum, and it's important for us to figure out where on this spectrum our world is:
World One: How well the future goes depends on how effectively world governments regulate advanced AI. The best plan is to contribute to the respected academic literature on what sorts of regulations would be helpful while also doing activism and lobbying to convince world governments to pay attention to the literature.
World Two: How well the future goes depends on whether the first corporation to build AGI obeys proper safety protocols or not in the first week or so after they build it. Which safety protocols work is a hard problem that requires unusually smart people working for years to solve. Figuring out which corporation will build AGI and when is a complex forecasting task that can be done, but only by the right sort of person, and likewise for the task of convincing them to follow safety protocols. The best plan is to try to assemble the right community of people so that all of these tasks get done.
World Three: How well the future goes depends on whether AGI of architecture A or B is built first. By default, AGI-A will be built first. The best plan involves assembling a team of unusually rational geniuses, founding a startup that makes billions of dollars, fleeing to New Zealand before World War Three erupts, and using the money and geniuses to invent and build AGI-B in secret.
Help?
For some other discussion of (facets of) this variable and its implications, see this talk [? · GW] and this post [EA · GW].
Answers
6 comments
comment by romeostevensit · 2020-04-17T21:33:11.107Z · LW(p) · GW(p)
Doesn't Taleb call this 'normal world' vs 'power law world'?
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-04-17T22:41:11.642Z · LW(p) · GW(p)
I haven't read Taleb; I guess I should. I think 'power law world' has the same problems that 'heavy-tailedness' has, as a name -- seems too specific, and the connection between statistical distributions and the properties I'm gesturing at seems too hand-wavy. But maybe it only seems hand-wavy to me because I haven't read his explanation.
comment by Seth_Goldin · 2020-04-21T00:56:51.107Z · LW(p) · GW(p)
You really should read Taleb; you can probably start with The Black Swan. His terms for these are "Mediocristan," domains that are described by Gaussian distributions, and "Extremistan," domains that are described by power laws.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-04-21T14:08:42.567Z · LW(p) · GW(p)
OK. But isn't power law too specific? There are other distributions with heavy tails, e.g. log-normal.
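A quick numerical illustration of this point (the distributions and parameters below are arbitrary): the log-normal is not a power law, yet it puts far more probability far above its median than a Gaussian does.

```python
import numpy as np

# Rough illustration with arbitrary parameters: how often does a draw exceed
# 10x the distribution's median? The log-normal is not a power law, but its
# tail is still far heavier than the (half-)normal's.
rng = np.random.default_rng(0)
n = 1_000_000

dists = {
    "normal":     np.abs(rng.normal(size=n)),
    "log-normal": rng.lognormal(mean=0.0, sigma=1.5, size=n),
    "Pareto a=2": rng.pareto(a=2.0, size=n) + 1,
}

for name, x in dists.items():
    threshold = 10 * np.median(x)
    print(f"{name:10s} P(X > 10 * median) ~ {np.mean(x > threshold):.5f}")
```

On a run this size the normal case essentially never exceeds ten times its median, while the log-normal and Pareto cases both do so noticeably often.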
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-04-18T15:49:32.470Z · LW(p) · GW(p)
I feel like this is related to the distinction between domains with efficient markets and domains with lots of thousand-dollar bills lying around to be picked up.