Posts

My guide to lifelogging 2020-08-28T21:34:40.397Z · score: 22 (7 votes)
Preface to the sequence on economic growth 2020-08-27T20:29:24.517Z · score: 45 (18 votes)
What specific dangers arise when asking GPT-N to write an Alignment Forum post? 2020-07-28T02:56:12.711Z · score: 41 (18 votes)
Are veterans more self-disciplined than non-veterans? 2020-03-23T05:16:18.029Z · score: 19 (7 votes)
What are the long-term outcomes of a catastrophic pandemic? 2020-03-01T19:39:17.457Z · score: 26 (9 votes)
Gary Marcus: Four Steps Towards Robust Artificial Intelligence 2020-02-22T03:28:28.376Z · score: 13 (8 votes)
Distinguishing definitions of takeoff 2020-02-14T00:16:34.329Z · score: 46 (18 votes)
The case for lifelogging as life extension 2020-02-01T21:56:38.535Z · score: 30 (11 votes)
Inner alignment requires making assumptions about human values 2020-01-20T18:38:27.128Z · score: 26 (12 votes)
Malign generalization without internal search 2020-01-12T18:03:43.042Z · score: 42 (13 votes)
Might humans not be the most intelligent animals? 2019-12-23T21:50:05.422Z · score: 55 (27 votes)
Is the term mesa optimizer too narrow? 2019-12-14T23:20:43.203Z · score: 30 (13 votes)
Explaining why false ideas spread is more fun than why true ones do 2019-11-24T20:21:50.906Z · score: 30 (12 votes)
Will transparency help catch deception? Perhaps not 2019-11-04T20:52:52.681Z · score: 46 (13 votes)
Two explanations for variation in human abilities 2019-10-25T22:06:26.329Z · score: 76 (32 votes)
Misconceptions about continuous takeoff 2019-10-08T21:31:37.876Z · score: 75 (32 votes)
A simple environment for showing mesa misalignment 2019-09-26T04:44:59.220Z · score: 69 (28 votes)
One Way to Think About ML Transparency 2019-09-02T23:27:44.088Z · score: 26 (9 votes)
Has Moore's Law actually slowed down? 2019-08-20T19:18:41.488Z · score: 13 (9 votes)
How can you use music to boost learning? 2019-08-17T06:59:32.582Z · score: 10 (5 votes)
A Primer on Matrix Calculus, Part 3: The Chain Rule 2019-08-17T01:50:29.439Z · score: 10 (4 votes)
A Primer on Matrix Calculus, Part 2: Jacobians and other fun 2019-08-15T01:13:16.070Z · score: 22 (10 votes)
A Primer on Matrix Calculus, Part 1: Basic review 2019-08-12T23:44:37.068Z · score: 23 (10 votes)
Matthew Barnett's Shortform 2019-08-09T05:17:47.768Z · score: 7 (5 votes)
Why Gradients Vanish and Explode 2019-08-09T02:54:44.199Z · score: 27 (14 votes)
Four Ways An Impact Measure Could Help Alignment 2019-08-08T00:10:14.304Z · score: 21 (25 votes)
Understanding Recent Impact Measures 2019-08-07T04:57:04.352Z · score: 17 (6 votes)
What are the best resources for examining the evidence for anthropogenic climate change? 2019-08-06T02:53:06.133Z · score: 11 (8 votes)
A Survey of Early Impact Measures 2019-08-06T01:22:27.421Z · score: 24 (8 votes)
Rethinking Batch Normalization 2019-08-02T20:21:16.124Z · score: 21 (7 votes)
Understanding Batch Normalization 2019-08-01T17:56:12.660Z · score: 20 (8 votes)
Walkthrough: The Transformer Architecture [Part 2/2] 2019-07-31T13:54:44.805Z · score: 9 (9 votes)
Walkthrough: The Transformer Architecture [Part 1/2] 2019-07-30T13:54:14.406Z · score: 35 (16 votes)

Comments

Comment by matthew-barnett on Forecasting Thread: AI Timelines · 2020-09-04T00:53:07.514Z · score: 4 (3 votes) · LW · GW
  • Your percentiles:
    • 5th: 2040-10-01
    • 25th: above 2100-01-01
    • 50th: above 2100-01-01
    • 75th: above 2100-01-01
    • 95th: above 2100-01-01

XD

Comment by matthew-barnett on Forecasting Thread: AI Timelines · 2020-09-03T23:59:37.594Z · score: 10 (6 votes) · LW · GW

If AGI is taken to mean the first year in which there is radical economic, technological, or scientific progress, then these are my AGI timelines.

My percentiles

  • 5th: 2029-09-09
  • 25th: 2049-01-17
  • 50th: 2079-01-24
  • 75th: above 2100-01-01
  • 95th: above 2100-01-01

I assign a somewhat lower probability to near-term AGI than many people here do. I model my biggest disagreement as being about how much work is required to move from high-cost impressive demos to real economic performance. I also have an intuition that it is genuinely hard to automate everything, and that progress will be bottlenecked by tasks that are essential but very hard to automate.

Comment by matthew-barnett on Reflections on AI Timelines Forecasting Thread · 2020-09-03T10:21:16.733Z · score: 4 (3 votes) · LW · GW

Here, Metaculus predicts when transformative economic growth will occur. Current status:

25% chance before 2058.

50% chance before 2093.

75% chance before 2165.

Comment by matthew-barnett on My guide to lifelogging · 2020-08-28T22:28:54.651Z · score: 3 (2 votes) · LW · GW
Other pros of some body cams: goes underwater without a casing blocking the mic (I think)

I haven't tried it, but I don't think it can go underwater. It is built to be water resistant, but I'm not confident it can be completely submerged. So if you are a frequent snorkeler, I recommend getting an action camera instead.

Comment by matthew-barnett on Forecasting Thread: AI Timelines · 2020-08-27T03:57:18.595Z · score: 9 (4 votes) · LW · GW

It's unclear to me what "human-level AGI" is, and it's also unclear to me why the prediction is about the moment an AGI is turned on somewhere. From my perspective, the important thing about artificial intelligence is that it will accelerate technological, economic, and scientific progress. So, the more important thing to predict is something like, "When will real economic growth rates reach at least 30% worldwide?"

It's worth comparing the vagueness in this question with the specificity in this one on Metaculus. From the Twelve Virtues of Rationality:

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade.
Comment by matthew-barnett on What specific dangers arise when asking GPT-N to write an Alignment Forum post? · 2020-07-28T06:01:37.307Z · score: 3 (2 votes) · LW · GW
To me the most obvious risk (which I don't ATM think of as very likely for the next few iterations, or possibly ever, since the training is myopic/SL) would be that GPT-N in fact is computing (e.g. among other things) a superintelligent mesa-optimization process that understands the situation it is in and is agent-y.

Do you have any idea of what the mesa objective might be? I agree that this is a worrisome risk, but I was more interested in the type of answer that specifies, "Here's a plausible mesa objective given the incentives." Mesa optimization is a more general risk that isn't specific to the narrow training scheme used by GPT-N.

Comment by matthew-barnett on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T03:41:55.669Z · score: 21 (16 votes) · LW · GW
It’s embarrassing that I was confidently wrong about my understanding of so many things in the same domain. I’ve updated towards thinking that microeconomics is trickier than most other similarly simple-seeming subjects like physics, math, or computer science. I think that the above misconceptions are more serious than any misconceptions about other technical fields which I’ve discovered over the last few years.

For some of these, I'm confused about your conviction that you were "confidently wrong" before. It seems that the general pattern here is that you used the Econ 101 model to interpret a situation, and then later discovered that there was a more complex model that provided different implications. But isn't it kind of obvious that for something in the social sciences, there's always going to be some sort of more complex model that gives slightly different predictions?

When I say that a basic model is wrong, I mean that it gives fundamentally incorrect predictions, and that a model of similar complexity would provide better ones. However (at least minimally in the cases of (3) and (4)) I'm not sure I'd really describe your previous models as "wrong" in this sense. And I think there's a meaningful distinction between saying you were wrong and saying you gained a more nuanced understanding of something.

Comment by matthew-barnett on Modelling Continuous Progress · 2020-06-23T19:20:35.063Z · score: 10 (6 votes) · LW · GW
Second, the major disagreement is between those who think progress will be discontinuous and sudden (such as Eliezer Yudkowsky, MIRI) and those who think progress will be very fast by normal historical standards but continuous (Paul Christiano, Robin Hanson).

I'm not actually convinced this is a fair summary of the disagreement. As I explained in my post about different AI takeoffs, I had the impression that the primary disagreement between the two groups was over locality rather than the amount of time takeoff lasts. Though of course, I may be misinterpreting people.

Comment by matthew-barnett on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-06-09T17:29:29.880Z · score: 8 (4 votes) · LW · GW

I tend to think that the pandemic shares more properties with fast takeoff than it does with slow takeoff. Under fast takeoff, a very powerful system will spring into existence after a long period of AI being otherwise irrelevant, in a similar way to how the virus was dormant until early this year. The defining feature of slow takeoff, by contrast, is a gradual increase in abilities from AI systems all across the world.

In particular, I object to this portion of your post,

The "moving goalposts" effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the "no fire alarm" hypothesis to hold in the slow takeoff scenario - there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as "overblown" until it is too late to address them.

I'm not convinced that these parallels to COVID-19 are very informative. Compared to this pandemic, I expect the direct effects of AI to be very obvious to observers, in a similar way that the direct effects of cars are obvious to people who go outside. Under a slow takeoff, AI will already be performing a lot of important economic labor before the world "goes crazy" in the important senses. Compare to the pandemic, in which

  • It is not empirically obvious that it's worse than a seasonal flu (we only know that it is thanks to careful data analysis after months of collection).
  • It is not clearly affecting everyone around you in the way that cars, computers, software, and other forms of engineering are.
  • It is considered natural, and primarily affects old people, who are conventionally considered less worthy of concern (though people pay lip service to denying this).
Comment by matthew-barnett on Is AI safety research less parallelizable than AI research? · 2020-05-11T21:20:58.800Z · score: 12 (3 votes) · LW · GW

For an alternative view, you may find this response from an 80,000 Hours podcast interesting. Here, Paul Christiano appears to reject the claim that AI safety research is less parallelizable.

Robert Wiblin: I guess there’s this open question of whether we should be happy if AI progress across the board just goes faster. What if yes, we can just speed up the whole thing by 20%. Both all of the safety and capabilities. As far as I understand there’s kind of no consensus on this. People vary quite a bit on how pleased they’d be to see everything speed up in proportion.
Paul Christiano: Yes. I think that’s right. I think my take which is a reasonably common take, is it doesn’t matter that much from an alignment perspective. Mostly, it will just accelerate the time at which everything happens and there’s some second-order terms that are really hard to reason about like, “How good is it to have more computing hardware available?” Or ”How good is it for there to be more or less kinds of other political change happening in the world prior to the development of powerful AI systems?”
There’s these higher order questions where people are very uncertain of whether that’s good or bad but I guess my take would be the net effect there is kind of small and the main thing is I think accelerating AI matters much more on the like next 100 years perspective. If you care about welfare of people and animals over the next 100 years, then acceleration of AI looks reasonably good.
I think that’s like the main upside. The main upside of faster AI progress is that people are going to be happy over the short term. I think if we care about the long term, it is roughly awash and people could debate whether it’s slightly positive or slightly negative and mostly it’s just accelerating where we’re going.
Comment by matthew-barnett on What would you do differently if you were less concerned with looking weird? · 2020-05-08T19:24:14.730Z · score: 5 (3 votes) · LW · GW

I'd probably look into brain-reading tech, and try to figure out whether I can record my brain state at all times during the day (including when I'm with other people). I'd also write more posts about very speculative cause areas that don't have much evidence, though my concern here might not be weirdness so much as people judging my thinking quality to be poor.

Comment by matthew-barnett on OpportunisticBot's Shortform · 2020-04-24T03:11:15.174Z · score: 3 (2 votes) · LW · GW

In order for a voluntary eugenics scheme to work, the trait must be genetically heritable. What evidence is there that Alzheimer's is strongly genetically heritable? If there were a gene for Alzheimer's, we could perform genetic testing and then implement the scheme you described. Personally, I'm pretty skeptical that this is a good use of money, for a few reasons.

As you said, the majority of sufferers of Alzheimer's are over 65. That means it will take a minimum of 65 years for this scheme to start having any big effects. Over such timelines, I think it's plausible that more powerful technologies will be on the horizon. For instance, rather than focus on old-fashioned eugenics, why not push for genetic engineering as a solution to Alzheimer's?

Comment by matthew-barnett on Are there any active play-money prediction markets online? · 2020-04-12T09:40:32.818Z · score: 6 (5 votes) · LW · GW

I would describe Metaculus as a "play-money" prediction market. Why don't you think it's a prediction market? Players/users are rewarded with points (e.g. 'play-money') for making good/better predictions. What's missing?

What's missing is the ability to sell my shares of "yes" or "no" to other users. A market requires having the ability to exchange commodities. I think it's probably better to describe Metaculus as a prediction aggregator with an incentive system.

Comment by matthew-barnett on Why I'm Not Vegan · 2020-04-10T09:14:55.196Z · score: 5 (4 votes) · LW · GW

I view the value of veganism for effective altruists as a way to purchase moral consistency. If you take animal suffering very seriously but aren't vegan, it can be emotionally uncomfortable unless you have a strong ability to dissociate from your choices. There are costs to purchasing moral consistency, of course, but no greater than those of many other luxuries.

This way of approaching veganism mirrors a countersignaling framework:

  • Normal people who don't care at all about animals see veganism as a personal choice.
  • People who care quite a bit about animal suffering see veganism as a moral imperative, not a personal choice.
  • Consequentialists who just want to reduce suffering, and are impartial to the methods they use, tend to think that veganism is only worth it if the personal costs aren't high.

Tobias Leenaert has used the word post-vegan to describe the third stage, and I quite like the label myself.

Comment by matthew-barnett on Why I'm Not Vegan · 2020-04-10T09:06:25.279Z · score: 9 (5 votes) · LW · GW
if everyone was vegan, the problem would be solved

To generalize, while the problem of animal abuse would go away, the problem of animal suffering wouldn't. Likewise, unplugging your microwave would solve the problem of microwaves using too much energy, but wouldn't solve the problem of efficient energy capture, or climate change.

Comment by matthew-barnett on An alarm bell for the next pandemic · 2020-04-06T06:42:28.644Z · score: 7 (4 votes) · LW · GW

The thing I regret most was not explicitly debunking myths that were common in the early days. For example, a lot of people told me this was just going to disappear like SARS. But I had actually gotten my hands on a dataset, and SARS looked very different. There was little reason to believe it was like SARS.

I also didn't give people financial advice because I'm very averse to doing stuff like that, and I thought people would generally just say that I'm "trying to beat the market" against a common consensus that beating the market was impossible. Though once stuff started happening, I started to try my hand at it.

Comment by matthew-barnett on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-04T07:19:40.224Z · score: 13 (7 votes) · LW · GW
January 29th on the EA Forum

For vanity reasons, I'm going to point out that it was actually the 26th.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T23:24:42.955Z · score: 2 (1 votes) · LW · GW

The idea is that the context surrounding this pandemic is unique.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T21:03:33.076Z · score: 2 (1 votes) · LW · GW

Many LW people have now taken the view that the market is not efficient when it comes to black swan events.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T19:44:44.934Z · score: 5 (2 votes) · LW · GW

I think what ignoranceprior was originally asking was, given all the information you know, what is your best estimate of the infection fatality rate? Best estimate in this case implies adjusting for ways that some research can be wrong, and taking into account the rebuttals you've read here.

Comment by matthew-barnett on MichaelA's Shortform · 2020-03-28T08:26:02.997Z · score: 3 (2 votes) · LW · GW

I think you should add Clarifying some key hypotheses in AI alignment.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T08:14:15.398Z · score: 6 (4 votes) · LW · GW

If you can predict the result of the data ahead of time, that seems very important for making decisions (e.g. predicting stock market moves).

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T07:33:24.517Z · score: 7 (4 votes) · LW · GW
The accuracy of similar tests for influenza is generally 50–70%.

I don't think these tests are similar. See here,

All of the coronavirus tests being used by public health agencies and private labs around the world start with a technique called polymerase chain reaction, or PCR, which can detect tiny amounts of a virus’s genetic material. SARS-CoV-2, the virus that causes COVID-19, has RNA as its genetic material. That RNA must first be copied into DNA. “That’s a lengthy part of the process, too,” says Satterfield, adding 15 to 30 minutes to the test.
[...]
Many doctors’ offices can do a rapid influenza test. But those flu tests don’t use PCR, Satterfield says. Instead, they detect proteins on the surface of the influenza virus. While the test is quick and cheap, it’s also not nearly as sensitive as PCR in picking up infections, especially early on before the virus has a chance to replicate, he says. By the CDC’s estimates, rapid influenza tests may miss 50 percent to 70 percent of cases that PCR can detect. The low sensitivity can lead to many false negative test results.
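To make the practical difference concrete, here is a quick Bayesian sketch. All numbers are hypothetical and are not from the quoted article; the assumed rapid-test sensitivity of 40% simply falls in the CDC's quoted 50-70% miss range:

```python
# Hypothetical numbers for illustration only.
rapid_sensitivity = 0.40   # assumed: the rapid test misses 60% of PCR-detectable cases
specificity = 0.99         # assumed: 1% false-positive rate
prior = 0.10               # assumed: 10% chance of infection before testing

# Bayes' rule for P(infected | negative rapid test)
p_neg_given_infected = 1 - rapid_sensitivity
p_neg_given_healthy = specificity
posterior = (prior * p_neg_given_infected) / (
    prior * p_neg_given_infected + (1 - prior) * p_neg_given_healthy
)
# posterior is about 0.063: a negative result only drops the infection
# probability from 10% to roughly 6%, which is why low-sensitivity tests
# generate so many misleading negatives.
```

Under these assumed numbers, a negative rapid test barely moves the needle; with a PCR-level sensitivity of 95% instead, the same calculation would push the posterior below 1%.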
Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T01:56:52.832Z · score: 18 (6 votes) · LW · GW
This paper was written by an international team of highly cited disease modellers who know about the Diamond Princess and have put their reputations on the line to make the case that the hypothesis of high infection rates and low infection fatality might be true.

Yes, but when you actually read the paper (I read some parts of it), it says that their model is based on an assumption of low IFR; it does not itself argue for a low IFR (feel free to prove me wrong here).

Comment by matthew-barnett on The case for lifelogging as life extension · 2020-03-26T20:59:50.113Z · score: 3 (2 votes) · LW · GW

This concept apparently goes back at least as far as Robert Ettinger, the originator of cryonics. From his seminal book introducing cryonics,

We normally think of information about the body as being preserved in the body - but this is not the only possibility. It is conceivable that ordinary written records, photographs, tapes, etc. may give future technicians enough clues to fill in missing or damaged areas in the brain of the frozen.
The time will certainly come when the brain's method of coding memories is thoroughly understood, and messages can be "read" directly from nervous tissue, and also "read" into it. It is not likely that the relation will be a simple one, nor will it necessarily even be exactly the same for every brain; nevertheless, by knowing that the frozen had a certain item of information, it may be possible to infer helpful conclusions about the character of certain regions in his brain and its cells and molecules.
Similarly, a mass of detailed information about what he did may allow advanced physiological psychologists to deduce important conclusions about what he was, once more providing opportunity to fill in gaps in brain structure.
It follows that we should all make reasonable efforts to obtain and preserve a substantial body of data concerning what we have seen, heard, felt, thought, said, written, and done in the course of our lives. These should probably include a battery of psychological tests. Encephalograms might also be useful.
Comment by matthew-barnett on 3 Interview-like Algorithm Questions for Programmers · 2020-03-26T16:32:42.606Z · score: 2 (1 votes) · LW · GW

Nevermind.

Comment by matthew-barnett on Adding Up To Normality · 2020-03-26T01:17:04.290Z · score: 2 (1 votes) · LW · GW
What are those implications?

Without heliocentrism (and its extension to other stars), it seems that the entire idea of going to space and colonizing the stars would not be on the table, because we wouldn't fundamentally even understand what stuff was out there. Since colonizing space is arguably the number one long-term priority for utilitarians, heliocentrism is therefore a groundbreaking theory of immense ethical importance. Without it, we would not have any desire to expand beyond the Earth.

I tend to prefer dealing with applications, not implications

Colonizing the universe is indeed an application.

Comment by matthew-barnett on Adding Up To Normality · 2020-03-25T21:07:29.400Z · score: 8 (4 votes) · LW · GW
"But many worlds implies..." No, it doesn't.

It seems implausible that a physical theory of the universe, especially one so fundamental to our understanding of matter, would have literally no practical implications. The geocentric and heliocentric models of the solar system give you the same predictions about where the stars will be in the sky, but the heliocentric model has some important implications for the ethics of space travel. Other scientific revolutions have similarly had enormous effects on our interpretation of the world.

Can you point to why this physical dispute is different?

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-24T23:34:49.757Z · score: 2 (1 votes) · LW · GW

Both could be relevant. It could be that a subgroup that makes up the majority of the military gets benefits, so the median is higher productivity. But due to a small subgroup, the mean is lower. Any result seems interesting here.

[ETA: Don't you think something like, "People in the Army have lower productivity but people in the Air Force have higher" would be interesting? I just am looking for something that's relevant to the central question of the post: can training have long-term benefits on self-discipline?]

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-24T21:49:35.344Z · score: 2 (1 votes) · LW · GW
I wouldn't consider mid-30s to be old, and my guess is that those laws are protecting people at least 40 years old

To be clear, that was exactly my point. The laws themselves just specify that you can't discriminate based on age. It is possible that many veterans receive a benefit to self-discipline during their service, but the laws still exist because other veterans do not have that benefit -- similar to how some older people are actually more hirable even if there's another group who isn't.

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-24T03:41:18.276Z · score: 3 (2 votes) · LW · GW

I'm not sure that follows. For many jobs, we know that people in their mid-30s are generally more productive than people early in their careers, for example. But there are still anti-discrimination laws against not hiring older people. The point is that while some of X might be good, too much of X could be bad. This could tie into Ryan's point above that while there could be some average productivity benefits, for exceptional cases,

I expect that the veterans who fail to re-adapt to civilian life suffer an almost complete collapse of productivity.

[ETA: Also, wouldn't you expect there to be charities for some interest group even if they were better off on average? Especially if they held a revered role within society.]

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-23T20:54:38.868Z · score: 4 (2 votes) · LW · GW
is wrong as a consequence, because you can never train yourself like you are in the Army. That fundamentally needs a group, entirely separate from the question of social incentives and environment.

How is a group separate from the question of social incentives and environment? Having a group of people to motivate you seems like intrinsically a question of social incentives and environments.

I took my friend's suggestion to be less that we can actually gather the resources to train ourselves like we are in the military, and more that if we were to do so, it would improve our discipline in the long-run. Hence the popular wisdom (or misconception) that military "straightens people out."

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-22T21:13:36.437Z · score: 2 (1 votes) · LW · GW
weaker claim?

Oops, yes. That's the weaker claim, which I agree with. The stronger claim is that because we can't understand something "all at once", mechanistic transparency is too hard and so we shouldn't take Daniel's approach. But we also understand laptops in a mechanistic sense. No one argues that because laptops are too hard to understand all at once, we shouldn't try to understand them mechanistically.

This seems to be assuming that we have to be able to take any complex trained AGI-as-a-neural-net and determine whether or not it is dangerous. Under that assumption, I agree that the problem is itself very hard, and mechanistic transparency is not uniquely bad relative to other possibilities.

I didn't assume that. I objected to the specific example of a laptop as an instance of mechanistic transparency being too hard. Laptops are normally understood well because that understanding can be broken into components and built up from abstractions. But our understanding of each component and abstraction is pretty mechanistic -- and this understanding is useful.

Furthermore, because laptops did not fall out of the sky one day, but were instead built up slowly over successive years of research and development, they seem like a great example of how Daniel's mechanistic transparency approach does not rely on us having to understand arbitrary systems. Just as we built up an understanding of laptops, presumably we could do the same with neural networks. This was my interpretation of why he uses Zoom In as an example.

All of the other stories for preventing catastrophe that I mentioned in the grandparent are tackling a hopefully easier problem than "detect whether an arbitrary neural net is dangerous".

Indeed, but I don't think this was the crux of my objection.

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-22T08:08:49.991Z · score: 2 (1 votes) · LW · GW
I'd be shocked if there was anyone to whom it was mechanistically transparent how a laptop loads a website, down to the gates in the laptop.

Could you clarify why this is an important counterpoint? It seems obviously useful to understand the mechanistic details of a laptop in order to debug it. You seem to be arguing the [ETA: weaker] claim that nobody understands an entire laptop "all at once", as in, holding all the details in their head simultaneously. But such an understanding is almost never possible for any complex system, and yet we still try to approach it. So this objection could show that mechanistic transparency is hard in the limit, but it doesn't show that mechanistic transparency is uniquely bad in any sense. Perhaps you disagree?

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-21T20:20:59.511Z · score: 2 (1 votes) · LW · GW

I liked it.

Comment by matthew-barnett on March Coronavirus Open Thread · 2020-03-13T21:59:35.595Z · score: 31 (11 votes) · LW · GW

Are the economic forecasts still too sunny?

(Warning: Long comment)

Two weeks ago Wei Dai released his financial statement on his bet that the coronavirus would negatively impact the stock market. Since then (at the time of writing) the S&P has dropped another 9%. This move has been considered by many to be definitive evidence against the efficient market hypothesis, given that the epistemic situation with respect to the coronavirus has apparently not changed much in weeks (at least to a first approximation).

One hypothesis for why the stock market reacted as it did seems to be that people are failing to take exponential growth of the virus into account, and thus make overly optimistic predictions. This parallels Ray Kurzweil's observations of how people view technological progress,

When people think of a future period, they intuitively assume that the current rate of progress will continue for future periods. However, careful consideration of the pace of technology shows that the rate of progress is not constant, but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. [...] From the mathematician’s perspective, a primary reason for this is that an exponential curve approximates a straight line when viewed for a brief duration.
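Kurzweil's observation is easy to verify numerically. The following sketch uses made-up numbers (100 initial cases, 30% daily growth) purely for illustration:

```python
# Hypothetical sketch: exponential growth is nearly linear over a short window,
# then wildly diverges over a longer one.
start = 100      # assumed initial case count
growth = 1.3     # assumed 30% daily growth rate

exponential = [start * growth ** t for t in range(31)]
linear = [start + start * (growth - 1) * t for t in range(31)]

short_ratio = exponential[3] / linear[3]    # after 3 days
long_ratio = exponential[30] / linear[30]   # after 30 days
# short_ratio is about 1.16 (the two projections look almost identical),
# while long_ratio exceeds 250 (the exponential has left the straight line
# far behind) -- exactly the "brief duration" illusion Kurzweil describes.
```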

The idea that smart investors don't understand exponential curves is absurd on its face, so another hypothesis is that people were afraid to "ring the alarm bell" about the virus, since no one else was ringing it at the time.

Determining which of the above hypotheses is true is important for determining whether you expect the market to continue declining. To see why, consider that if the "alarm bell" hypothesis was true, you might expect that now that the alarm bell has been set off, you now have no epistemic advantage over the market. The efficient market is thus reset. Nonetheless, the alarm bell might be a gradient, and therefore it could be that more people have yet to ring it. And of course both hypotheses might have some grain of truth.

Now that the market has dropped another 9%, the question on every investor's mind is: will it drop further? Yet if the efficient market hypothesis has really been debunked, then answering this question should be possible -- and I make a modest attempt to do so here.

The approach I take in this post is to analyze the working assumptions of the most recent economic forecasts I could find, i.e., to try to determine what conditions they expect that lead to their predictions. If I find these working assumptions to underestimate the virus' impact relative to my best estimates, then I conclude, very tentatively, that the forecast is still too sunny. Otherwise, I conclude that the alarm bell has been rung. Overall, there are no fast and easy conclusions here.

The main issue is that this crisis has unfolded far too quickly for many up-to-date forecasts to come out. Still, I find a few that may help in my inquiry.

Disclaimer: I am in no position to offer specific financial advice to anyone, and I write this post for informational purposes only. I have no expertise in finance, and I am not creating this post to be authoritative. Please do not cite this post as proof that everyone should do some action X.

My Parameters

I offer the following predictions about particular parameters of the virus. I admit that many of my parameters are probably wrong. But at the same time, I make the stronger claim that no one else really has a much better idea of what they are talking about. Of course, I gladly welcome people to critique my estimates here.

  • I expect that the coronavirus will infect at least a few hundred million people by 1/1/2022. However, I think that as the virus progresses, people will take it very seriously, which implies that the reproduction number probably won't stay high enough for 70-80% of the population to be infected. I doubt that countries like the United States will be able to replicate the success at containment found in China, though I'm open to changing my mind here.
  • I expect the infection fatality rate (a nonstandard term meaning the number of deaths caused by the virus divided by the estimated number of people infected) to be around 0.7 to 1 percent, with significant uncertainty in both directions. (That said, a paper released in The Lancet yesterday says the true figure is probably closer to 5.6% and could be as high as 20%. The sheer insanity of such a prediction should give you an idea of how uncertain this whole thing still is.)
  • I expect the virus to temporarily peak in late April or May, but probably return in the winter and do a lot more relative damage given the cold weather.
  • I expect hospitals in every major country to be overwhelmed at some point. This will cause the number of deaths to rise, making the 1 percent an underestimate of the true risk. My current (wildly speculative) guess is the true number is 2 percent in untreated populations.
  • I expect that a vaccine will not be widespread by 1/1/2021, though I do expect one by 10/1/2021.
  • I expect that some sort of antiviral will be available by this winter, somewhat dampening the impact of the virus when it hits full force, though it remains to be seen whether antivirals will be effective.
  • I expect pretty much every country to implement measures like Italy's current ones at some point, with the exception of countries whose infrastructure cannot support such a quarantine.
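To make the parameters above concrete, here is a rough back-of-the-envelope calculation of the death toll they imply. The point values are my own illustrative picks within the stated ranges, not additional predictions.

```python
# Back-of-the-envelope death toll implied by the parameters above.
# Point values are illustrative picks within the stated ranges.
infected = 300e6       # "at least a few hundred million" by 1/1/2022
ifr_treated = 0.01     # top of the 0.7-1% infection fatality range
ifr_untreated = 0.02   # speculative rate when hospitals are overwhelmed

low = infected * ifr_treated
high = infected * ifr_untreated
print(f"implied global deaths: {low/1e6:.0f}M to {high/1e6:.0f}M")
```

In other words, taking these parameters at face value implies several million deaths worldwide, which is the benchmark I have in mind when judging the forecasts below as optimistic or pessimistic.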

I welcome people to view the estimates from Metaculus, which are more optimistic on some of these parameters than I am. So obviously, take the following analyses with a grain of salt.

Note: throughout this article I use the terms infection fatality rate, case fatality rate, and mortality rate somewhat interchangeably, and at times I do not know whether an author means something different by them. Some people make careful distinctions between these terms, but it appears most people don't, which makes it genuinely difficult to understand what these analyses are actually saying at times.
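For readers who do want the distinction, the difference between the case fatality rate and the infection fatality rate is entirely in the denominator. A toy illustration with made-up numbers:

```python
# Toy numbers showing how the choice of denominator changes the rate.
# All figures are hypothetical, chosen only to illustrate the effect.
deaths = 100
confirmed_cases = 5_000   # tested, symptomatic cases
total_infected = 20_000   # including mild and asymptomatic infections

cfr = deaths / confirmed_cases   # case fatality rate
ifr = deaths / total_infected    # infection fatality rate
print(f"CFR = {cfr:.1%}, IFR = {ifr:.1%}")
```

With the same number of deaths, the CFR here is four times the IFR, which is why analyses that mix the terms can disagree wildly while describing the same outbreak.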

JP Morgan

In the last 24 hours, JP Morgan announced that

The US economy could shrink by 2% in the first quarter and 3% in the second, JPMorgan projected, while the eurozone economy could contract by 1.8% and 3.3% in the same periods.

Their prediction is based on their research concerning the coronavirus, compiled here. In many ways, their estimates are quite similar to mine, and they share my sense that this virus will be long-lasting and painful. But in other ways they seem too optimistic. Here are some points,

  • At one point they criticize the UK Government's apparent estimate of 100,000 predicted deaths, by saying "To arrive at such an outcome, we had to assume that 38% of the entire UK population is infected (i.e., similar to the 1918 Spanish flu), and that 40% of infected people get sick and then experience 1% mortality; or we had to assume that only 10% of infected people get sick but then experience 4.4% mortality that’s equal to the epicenter of the virus outbreak in Wuhan. Even after accounting for Chinese infection/death underreporting and the difficulty Western countries might have replicating what China has done (the largest lockdown/quarantine in the history of the world, accomplished via AI, big data and different privacy rules), both of our modeled UK outcomes would be magnitudes worse than what’s occurring in China and South Korea."
  • They concur with my vaccine and anti-viral timelines, "While the fastest timeline for vaccines to reach patients is generally 12-18 months, (i.e., Massachusetts-based Moderna’s mRNA vaccine), COVID-19 treatments could possibly become available later this year"
  • They cite the fall of H1N1's mortality rate estimate (seemingly) as reason to think that this coronavirus will follow the same pattern, "Early estimates in the fall of 2009 from the WHO pegged the H1N1 mortality rate at 1.0%-1.3%, since they were dividing (d) by (c). Four years later, a study from the WHO and the Imperial College of London estimated H1N1 mortality as a function of total infections, including both the asymptomatic and the sick. Their revised H1N1 mortality rate using (b) as a denominator: just 0.02%."
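JP Morgan's two routes to the UK's ~100,000 figure can be checked directly. The percentages are from their quote above; the UK population figure is my own input, and I am assuming the 38% infected share applies to both scenarios.

```python
# Reproducing JP Morgan's two routes to ~100,000 UK deaths.
# UK population figure is my own input; percentages are theirs.
uk_population = 66.65e6

# Route A: 38% infected, 40% of the infected get sick, 1% mortality
deaths_a = uk_population * 0.38 * 0.40 * 0.01

# Route B: 38% infected, 10% get sick, 4.4% mortality
deaths_b = uk_population * 0.38 * 0.10 * 0.044

print(f"route A: {deaths_a:,.0f} deaths; route B: {deaths_b:,.0f} deaths")
```

Both routes land in the neighborhood of 100,000, so their arithmetic checks out; the disagreement is over whether those assumptions are plausible, not over the multiplication.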

My opinion

Whoever wrote this report has done a ton of research and makes some very intelligent points. I think it would be unfair to say that intelligent investors from JP Morgan "don't understand exponential growth."

That said, I differ significantly in my assessment of whether the UK Government's estimate is valid, and of whether the mortality rate will fall just as H1N1's did. The author seemed to be saying that the mortality rate can safely be calculated only as a fraction of the total infected population, including the asymptomatic, rather than of those who got sick with severe symptoms. This makes me think that they are underestimating the infection fatality rate.

Moody's Analytics

On March 4th Moody's Analytics released a forecast of economic growth conditioned on the coronavirus becoming a pandemic, which at the time they considered to have only a 35% chance of occurring. Even though this report is somewhat old now, I still include it because this was their 'worst case' report. Their conclusion was that,

Under the pandemic scenario, the global economy suffers a recession during the first three quarters of 2020. Real GDP decreases by almost 2 percentage points peak to trough and declines for 2020 [...] The U.S. economy contracts in all four quarters of 2020 in the pandemic scenario, with real GDP falling by approximately 1.5 percentage points peak to trough and the unemployment rate rising by 175 basis points. The struggling manufacturing, transportation, agriculture and energy industries are hit hard, but so too are the travel and tourism industries and the construction trades. However, there are significant layoffs across nearly all industries, with healthcare and government being the notable exceptions.

The modeling assumption was that "millions" would be infected, and that it would peak by March or April.

Under our alternative Global Pandemic scenario, we expect that there are ultimately millions of infections across the globe, including in Europe and the U.S. COVID-19’s mortality rate is assumed to be 2%-3%, consistent with the experience so far, and a similar percentage of those infected become so sick they need some form of hospitalization. The peak of the pandemic is assumed to occur in March and April, winding down quickly by this summer, with a vaccine in place before next winter.

My opinion

While I find their estimate of the mortality rate to be rather high, this consideration is swamped by the fact that they only think it will infect "millions" of people (which I take to mean perhaps 5-10 million) worldwide, and the fact that they think we will have a vaccine by next winter. I think Moody's Analytics is seriously low-balling this virus.
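The gap between Moody's assumptions and my parameters is easy to quantify. Reading "millions" as 5-10 million is my interpretation, not theirs, and the comparison point is my own earlier estimate of a few hundred million infected at roughly 1% fatality.

```python
# Implied death tolls under Moody's assumptions versus my parameters.
# Reading "millions" as 5-10 million is my interpretation, not theirs.
moodys_low = 5e6 * 0.02     # 5M infected at 2% mortality
moodys_high = 10e6 * 0.03   # 10M infected at 3% mortality
mine = 300e6 * 0.01         # a few hundred million infected at ~1% IFR

print(f"Moody's implied deaths: {moodys_low/1e3:.0f}k-{moodys_high/1e3:.0f}k")
print(f"My implied deaths: {mine/1e6:.0f}M")
```

Even taking the top of Moody's range, their higher mortality rate is dwarfed by the order-of-magnitude difference in assumed infections.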

This report is probably the best evidence that investors still aren't taking the virus seriously. However, given that the report is about nine days old, I think conclusions drawn from it should be interpreted with caution.

Capital Economics

A report from Capital Economics came out in the last few days; however, I've been unable to find the exact report. Instead, I can quote media articles such as this one, and this one. They report,

Capital Economics also cut its estimate for gross domestic product in 2020, saying the economy would expand just 0.6% instead of 1.8% as previously forecast. [...] Many economists have downgraded their growth forecasts for the second quarter and beyond, but the Capital Economics call is the most pessimistic one yet.

So apparently they expect positive growth for the year, and yet this is one of the most pessimistic predictions from economists? That is striking on its own.

Capital Economics predicts a rebound in 2021 on the assumption that strict social distancing works to contain the coronavirus epidemic.
“If such measures helped to stem the spread of the virus ... they may reduce the risk of a worse-case scenario, in which one-third of the population become infected resulting in a prolonged recession,” Hunter said.
[...]
"We think this is going to have a very significant impact on activity over the next few months," said Andrew Hunter senior U.S. economist at Capital Economics.
[..]
However, Hunter expressed hope that if the number of coronavirus cases in the U.S. peaked in the tens of thousands, then the U.S. economy could "start to recover reasonably quickly."

It's not clear whether their "tens of thousands" in the US is a best case or median case scenario.

My opinion

It's hard to get a real sense of what Capital Economics expects, but the article itself gives the impression that we can still contain the effects, and that things will wrap up in a few months. But given that they also mention that billions of people could be infected, it's hard to tell whether they are over- or underestimating. I don't have a strong opinion here.

United Kingdom report

On March 11, the United Kingdom released a (long) report on their economic forecast, taking into account the expected impact from the coronavirus. Unfortunately their report did not include the latest figures on the coronavirus, and therefore it's hard to tell whether they are underestimating things.

As set out below, we agreed to close the pre-measures forecasts for the economy and public finances on 18 and 25 February respectively, to provide a stable base against which to assess the impact of the large Budget package. This was before the spread of the coronavirus was expected to have a significant effect on economic activity outside China. As discussed in the document, the outlook is therefore likely to be significantly less favourable than this central forecast suggests – especially in the short term – but to a degree that remains highly uncertain even now.

RaboResearch

A firm called RaboResearch released a forecast on March 12th. They are relatively optimistic,

The coronavirus outbreak has led us to reduce our growth projection for the global economy to 1.6% y/y in 2020

However, their assumptions appear to diverge substantially from mine,

Whether we will see a similar spread in other Eurozone member states as we have seen in Italy still remains in doubt – and for now we are not yet assuming that as a base scenario.

In their "ugly" scenario, which they consider unlikely,

would see the virus continue to rage in China, spread to ASEAN, Australia and New Zealand, and the cluster of cases in the US and Europe snowball at an exponential growth rate from their currently low base. In other words, developed economies would also be hit.

Unfortunately, they don't include any actual numbers, so it's hard to tell how bad their ugly scenario actually is. Their absolute worst case scenario, which they call "the unthinkable," also contains no facts or figures,

This scenario is very short. The virus spreads globally and also mutates, with its transmissibility increasing and its lethality increasing too. The numbers infected would skyrocket, as would casualties. We could be looking at a global pandemic, and at scenarios more akin to dystopian Hollywood films than the realms of economic analysis. Let’s all pray it does not come to pass and just remains a very fat tail risk.

Note that I did not bold global pandemic. That was their emphasis.

My opinion

Given that their "unthinkable" scenario describes a global pandemic, which the WHO has already declared, I find it hard to believe that this firm has a clear idea of the economic effects of the coronavirus. Their vagueness makes me think that they are not using solid models of the virus, but instead unsubstantiated intuition, and that they are probably underestimating the impact.

Media reports

According to this Investopedia article, the top three stock market news websites are MarketWatch, Bloomberg, and Reuters. Due to the paywall on Bloomberg, I only accessed MarketWatch and Reuters. For each site, I read the first article I could find that seemed to include both an economic forecast and some type of prediction about a parameter of the coronavirus. To be honest, I wasn't able to find anything really specific. Nonetheless, here are some quotes I found,

The vast majority of economists predict the U.S. will start to rebound later in the year, though they are split over how soon and how fast. Some like Donabedian see a rapid recovery starting in the summer. Others predict a short recession that extends through the fall.
The more optimistic view is based on the assumption that the U.S. approach to containing the coronavirus more closely mirrors that of South Korea or Hong Kong than Italy or Iran.
[...]
“We think we will see a nice bounce back in the third quarter,” Guatieri said.
Still, even relative optimists such as Guatieri say there’s still too much uncertainty to feel confident. He and Wells Fargo’s Bullard say their firms have been changing their forecasts almost daily in the past week as the situation deteriorated. What’s made matters worse is simply not knowing the scope of the problem
“We’re not getting the insight into where we are or where we are going,” Bullard said. “So we’re all just speculating.”

My opinion

Like many of the forecasts above, these articles are very vague about what they expect, and it's hard to see what values are being plugged into the economic models, or whether the predictions rest on intuition alone.

Conclusion

I have not seen strong evidence that economic forecasters are now predicting doom. However, I have seen some weak evidence that suggests that many are misinformed about the scope of the virus, and its potential future impacts. Some forecasters, like JP Morgan, have clearly done a lot of research. Other firms are barely even using mathematical models of the virus. My own interpretation is that the places I surveyed are probably fairly overoptimistic, though it's really hard to tell without more evidence and concrete numbers.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-13T07:11:06.191Z · score: 6 (4 votes) · LW · GW

For my part, I think you summarized my position fairly well. However, after thinking about this argument for another few days, I have more points to add.

  • Disease seems especially likely to cause coordination failures since it's an internal threat rather than an external threat (external threats, unlike internal ones, tend to unite empires). We can compare the effects of the smallpox epidemic in the Aztec and Inca empires with other historical diseases during wartime, such as the Plague of Athens, which arguably caused Athens to lose the Peloponnesian War.
  • Along these same lines, the Aztec/Inca didn't have any germ theory of disease, and therefore didn't understand what was going on. They may have thought that the gods were punishing them for some reason, and therefore they probably spent a lot of time blaming random groups for the catastrophe. We can contrast these circumstances with, e.g., the Paraguayan War, which killed up to 90% of the male population; but there, people probably had a much better idea of what was going on and who was to blame, so I expect that the surviving population had an easier time coordinating.
  • A large chunk of the remaining population likely had some sort of disability. Think of what would happen if you got measles and smallpox in the same two-year window: even if you survived, it probably wouldn't look good. This means that the pure death rate is an underestimate of the impact of a disease. The Aztecs, of whom "only" 40 percent died of disease, were still greatly affected,
It killed many of its victims outright, particularly infants and young children. Many other adults were incapacitated by the disease – because they were either sick themselves, caring for sick relatives and neighbors, or simply lost the will to resist the Spaniards as they saw disease ravage those around them. Finally, people could no longer tend to their crops, leading to widespread famine, further weakening the immune systems of survivors of the epidemic. [...] a third of those afflicted with the disease typically develop blindness.
Comment by matthew-barnett on Open & Welcome Thread - February 2020 · 2020-03-12T16:48:10.228Z · score: 3 (2 votes) · LW · GW

After today's crash, what are you at now?

Comment by matthew-barnett on Why don't singularitarians bet on the creation of AGI by buying stocks? · 2020-03-12T03:37:49.136Z · score: 4 (2 votes) · LW · GW
Either we'll have a positive singularity, and material abundance ensues, or we'll have a negative singularity, and paperclips ensue. That's why my retirement portfolio is geared towards business-as-usual scenarios.

My objection to this argument is that, more generally, before the singularity there should be some period in which we have powerful AI but the economy still looks somewhat familiar. The operationalization of this is Paul's slow takeoff, in which economic growth rates start to pick up a little before picking up by a lot.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T09:49:12.638Z · score: 2 (1 votes) · LW · GW
Later, other Europeans would come along with other advantages, and they would conquer India, Persia, Vietnam, etc., evidence that while disease was a contributing factor (I certainly am not denying it helped!) it wasn't so important a factor as to render my conclusion invalid (my conclusion, again, is that a moderate technological and strategic advantage can enable a small group to take over a large region.)

Europeans conquered places such as India, but that was centuries later, after they had a large technological advantage, and they didn't come with just a few warships either: they came with vast armadas. I don't see why that supports the point that a small group can take over a large region.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T09:38:25.931Z · score: 2 (1 votes) · LW · GW
I really don't think the disease thing is important enough to undermine my conclusion. For the two reasons I gave: One, Afonso didn't benefit from disease

This makes sense, but I think the case of Afonso is sufficiently different from the others that it's a bit of a stretch to use it to imply much about AI takeovers. I think if you want to make a more general point about how AI can be militarily successful, then a better point of evidence is a broad survey of historical military campaigns. Of course, it's still a historically interesting case to consider!

two, the 90% argument: Suppose there was no disease but instead the Aztecs and Incas were 90% smaller in population and also in the middle of civil war. Same result would have happened, and it still would have proved my point.

Yeah, but why are we assuming that they are still in the civil war? Call me out if I'm wrong here, but your thesis now seems to be: if some civilization is in complete disarray, then a well-coordinated group of slightly more advanced people/AIs can take control of it.

This would be a reasonable thesis, but it doesn't shed too much light on AI takeovers. The important part lies in the "if some civilization is in complete disarray" conditional, and I think it's far from obvious that AI will emerge in such a world, unless some other more important causal factor already occurred that gave rise to the massive disarray in the first place. But even in that case, don't you think we should focus on that thing that caused the disarray instead?

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T03:59:51.233Z · score: 4 (2 votes) · LW · GW
I agree that it would be good to think about how AI might create devastating pandemics. I suspect it wouldn't be that hard to do, for an AI that is generally smarter than us. However, I think my original point still stands.

It's worth clarifying exactly which "original point" still stands, because I'm currently unsure.

I don't get why you think a small technologically primitive tribe could take over the world if they were immune to disease. Seems very implausible to me.

Sorry, I meant to say, "Were immune to diseases that were currently killing everyone else." If everyone is dying around you, then your level of technology doesn't really matter that much. You just wait for your enemy to die and then settle the land after they are gone. This is arguably what Europeans did in America. My point is that by focusing on technology, you are missing the main reason for the successful conquest.

But I don't want to do this yet, because it seems to me that even with disease factored in, "most" of the "credit" for Cortes and Pizarro's success goes to the factors I mentioned.
After all, suppose the disease reduced the on-paper strength of the Americans by 90%. They were still several orders of magnitude stronger than Cortes and Pizarro. So it's still surprising that Cortes/Pizarro won... until we factor in the technological and strategic advantages I mentioned.

I feel like you don't actually have a civilization if 90% of your people died. I think it's more fair to say that when 90% of your people die, your civilization basically stops existing rather than just being weakened. For example, I can totally imagine an Incan voyage to Spain conquering Madrid if 90% of the Spanish died. Their chain of command would be in complete shambles. It wouldn't just be like some clean 90% reduction in GDP with everything else held constant.

But the civilizations wouldn't have been destroyed without the Spaniards. (I might be wrong about this, but... hadn't the disease mostly swept through Inca territory by the time Pizarro arrived? So clearly their civilization had survived.)
I think I am somewhat close to being convinced by your criticism, at least when phrased in the way you just did: "your thesis is trivial!" But I'm not yet convinced, because of my argument about the 90% reduction. (I keep making the same argument basically in response to all your points; it is the crux for me I think.)

Look, if 90% of a country dies of a disease, and then the surviving 10% become engulfed in a civil war, and then some military group who is immune to the disease comes in and takes the capital city during this all, don't you think it's very misleading to conclude "A small group of people with a slight military advantage can take over a large civilization" without heavily emphasizing the whole 90% of people dying of a disease part? This is the heart of my critique.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T03:16:37.412Z · score: 4 (2 votes) · LW · GW

Here's what I'll be putting in the Alignment Newsletter about this piece. Let me know if you spot inaccuracies or lingering disagreement regarding the opinion section.

Summary:

This post lists three historical examples of how small human groups conquered large parts of the world, and shows how they are arguably precedents for AI takeover scenarios. The first two historical examples are the conquests of American civilizations by Hernán Cortés and Francisco Pizarro in the early 16th century. The third example is the Portuguese capture of key Indian Ocean trading ports, which happened at roughly the same time as the other conquests. Daniel argues that technological and strategic advantages were the likely causes of these European victories. However, since a European technological advantage was small in this period, we might expect that an AI coalition could similarly take over a large portion of the world, even without a large technological advantage.

Opinion:

In a comment, I dispute the claimed reasons for why Europeans conquered American civilizations. I think that a large body of historical literature supports the conclusion that American civilizations fell primarily because of their exposure to diseases which they lacked immunity to, rather than because of European military power. I also think that this helps explain why Portugal was "only" able to capture Indian Ocean trading ports during this time period, rather than whole civilizations. I think the primary insight here should instead be that pandemics can kill large groups of humans, and therefore it would be worth exploring the possibility that AI systems use pandemics as a mechanism to kill large numbers of biological humans.
Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T03:10:40.101Z · score: 2 (1 votes) · LW · GW

I agree it's evidence. Though, I would estimate that the Spanish conquest of the Inca civilization was something like 80% due to disease, 20% due to other factors.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T02:20:40.645Z · score: 9 (6 votes) · LW · GW

[ETA: Another way of framing my disagreement is that if you are trying to argue that small groups can take over the world, it seems almost completely irrelevant to focus on relative strategic or technological advantages in light of these historical examples. For instance, it could have theoretically been that some small technologically primitive tribe took over the world if they had some sort of immunity to disease. This would seem to imply that the relative strategic advantage of Europeans vs. Americans was not that important. Instead we should focus on the ways AIs could create, e.g., artificial pandemics, and we could use the smallpox epidemic in America as an example of how devastating pandemics can be.]

First response: Disease wasn't a part of Afonso's success. It helped the Europeans take over the Americas but did not help them take over Africa or Asia or the middle east; this suggests to me that it may have been a contributing factor but was not the primary explanation.

That makes sense. I'm much less familiar with Afonso de Albuquerque, though my understanding is that he didn't really conquer civilizations, mostly just trading ports. I think it's safe to say that successful military campaigns are common in history, and therefore I don't find his success especially unique or indicative of a future AI takeover.

Second response: Even if we decide that Cortes and Pizarro wouldn't have been able to succeed without the disease, my overall conclusion still stands.

Well, it depends. If your conclusion is that "small groups with relatively little military or strategic advantages can still take over large areas of the world" then I completely agree. If your conclusion is that, "small military or strategic advantages are by themselves often sufficient for small groups to take over large areas of the world" then I disagree. I worry your post gave the impression that the second conclusion was true.

Then the modified conclusion in light of your claim about disease would be "In times of chaos and disruption, a force with a small tech and cunning/experience advantage can take over a region 1,000 times its size." This modified conclusion is, as far as I'm concerned, still almost as powerful and interesting as the original conclusion.

A big part of my critique here is that you need to focus way more on identifying the true causal factors that led to these historical successes, because otherwise you can't use them to argue that AI takeover will be anything like them.

Since disease is, in my opinion, the primary causal factor at play here, I think we should instead explore the potential for AI to engineer pandemics that kill everyone -- but that seems way different than what you were arguing.

I don't think making the thesis "in times of chaos and destruction, groups can conquer other groups" really makes the argument say much. The thing that destroyed the Incas and Aztecs was disease, not European military power, so maybe that's the lesson we should learn? Saying that merely "times of chaos" destroyed the Incas and Aztecs is tautological and not interesting.

For example, it's true that the disease may have sparked the Incan civil war -- but civil wars happen pretty often anyway, historically. And when civil wars aren't happening, ordinary wars often are.

Yes, but this Incan civil war was particularly extreme and unusual, and from the source I listed, it seems that between 60% and 90% of Incans had died. So again, determining the underlying causal factors is key to this sort of analysis.

Nitpick: The war was Cortez + allies vs. Tenochtitlan + allies. The vast majority of people on both sides were Americans. So the smallpox wreaked havoc on all sides. (Maybe I should have said "both sides" instead of "all sides")

Yeah, that makes sense, but it's important to note that neither the Aztecs nor the Cortez-allied Americans survived in great numbers. It was only the Spanish who prospered afterwards, and that's really important!

Nitpick: If it turns out that getting sick from various diseases was what kept the Europeans out of Africa for so long, that actually supports my overall argument. (Because, imagine instead that Europeans had no problem with disease in Africa, but simply were unable to conquer much of it due to ordinary military/political reasons. Then their tech+cunning/experience advantage would have failed to be enough in that case, which makes their successes in America seem more like a fluke than a pattern explained by tech+cunning/experience. In other words, if disease wasn't a factor in Africa, that would be evidence against my claims.)

I'm not sure I understand this point well, but I think I agree. However, the quinine treatment for malaria was a technological advantage brought by the industrial revolution, not some innate advantage that the Europeans had all along.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-07T22:59:06.447Z · score: 2 (1 votes) · LW · GW

Do you have any thoughts on the critique I just posted?

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-07T22:51:41.404Z · score: 35 (14 votes) · LW · GW

Very interesting post! However, I have a big disagreement with your interpretation of why the European conquerors succeeded in America, and I think that it undermines much of your conclusion.

In your section titled "What explains these devastating takeovers?" you cite technology and strategic ability, but Old World diseases (most notably smallpox, but also measles, influenza, typhus, and bubonic plague) devastated the communities in America before or alongside the European invaders. My reading of historians (from Charles Mann's 1493, to Alfred W. Crosby's The Columbian Exchange, to Jared Diamond's Guns, Germs, and Steel) leads me to conclude that the historical consensus is that all of these takeovers were due primarily to Old World diseases, and had relatively little to do with technology or strategy per se.

In Chapter 11 of Guns, Germs, and Steel, Jared Diamond analyzes the European takeovers in America you cite from the perspective of Old World diseases (here's a video on the same topic from the YouTuber CGP Grey). The basic thesis is that Europeans had acquired immunity to these diseases, whereas people in America hadn't. From Wikipedia,

After first contacts with Europeans and Africans, some believe that the death of 90–95% of the native population of the New World was caused by Old World diseases.[43] It is suspected that smallpox was the chief culprit and responsible for killing nearly all of the native inhabitants of the Americas.

These diseases were already epidemic by the time Cortés and Pizarro arrived on the continent, and therefore it seems very unlikely that their victory was achieved primarily through military and technological might. From Wikipedia again,

The Spanish Franciscan Motolinia left this description: "As the Indians did not know the remedy of the disease…they died in heaps, like bedbugs. In many places it happened that everyone in a house died and, as it was impossible to bury the great number of dead, they pulled down the houses over them so that their homes became their tombs."[46] On Cortés's return, he found the Aztec army's chain of command in ruins. The soldiers who still lived were weak from the disease. Cortés then easily defeated the Aztecs and entered Tenochtitlán.[47] The Spaniards said that they could not walk through the streets without stepping on the bodies of smallpox victims.
The effects of smallpox on Tahuantinsuyu (or the Inca empire) were even more devastating. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Within months, the disease had killed the Incan Emperor Huayna Capac, his successor, and most of the other leaders. Two of his surviving sons warred for power and, after a bloody and costly war, Atahualpa became the new emperor. As Atahualpa was returning to the capital Cuzco, Francisco Pizarro arrived and through a series of deceits captured the young leader and his best general. Within a few years smallpox claimed between 60% and 90% of the Inca population,[49] with other waves of European disease weakening them further.

The theory that disease was more important than technology is further supported empirically by the fact that Europeans were unable to conquer African tribes and civilizations until the late 19th century, long after the conquest of the New World, even though many African civilizations had technological capabilities similar to or even lower than those of the Incas and Aztecs. The reason is that Africans, unlike Americans, had immunity to Old World diseases. And even for the 19th century conquests, historians often cite the development of the drug quinine, and thus protection against malaria, as one of the primary reasons European civilizations were able to conquer African nations.

By contrast, I was only able to find one mention of smallpox in your entire post, and where you do mention it, you say

Smallpox sweeps through the land, killing many on all sides and causing general chaos.

If I'm reading "all sides" correctly, this is just flat-out incorrect. It killed mainly Americans.

At one point you state that during Pizarro's conquest,

The Inca empire is in the middle of a civil war and a devastating plague.

This "plague" was smallpox carried by earlier European travelers. Jared Diamond writes,

The reason for the civil war was that an epidemic of smallpox, spreading overland among South American Indians after its arrival with Spanish settlers in Panama and Colombia, had killed the Inca emperor Huayna Capac and most of his court around 1526, and then immediately killed his designated heir, Ninan Cuyuchi.

You may ask why there was an asymmetry: after all, didn't the New World have diseases that Europeans weren't immune to? Yes, but basically only syphilis. Europeans had been exposed to many infectious diseases because those diseases had been acquired from livestock, and livestock was not an important component of American civilizations in the pre-Columbian period.

One reason disease might not be salient in descriptions of the American conquest is that, until modern times, historians emphasized explanations of events in terms of human factors, such as the personalities of rulers and the tendencies of groups of people. According to this source, it wasn't until the 1960s that historians started to take seriously the idea that disease was the primary culprit in the destruction of American civilizations.

There could still be an analogous situation in which AIs develop diseases that kill humans but not AIs, but I think it's worth exploring that type of existential risk in its own category, and emphasizing that the thesis does not depend on a historical precedent of conquerors having strategic or technological advantages.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-07T22:07:58.480Z · score: 4 (2 votes) · LW · GW

I don't think that specific fact really disputes that they "had access to a deep historical archive." From Jared Diamond's Guns, Germs, and Steel,

On a mundane level, the miscalculations by Atahuallpa, Chalcuchima, Montezuma, and countless other Native American leaders deceived by Europeans were due to the fact that no living inhabitants of the New World had been to the Old World, so of course they could have had no specific information about the Spaniards. Even so, we find it hard to avoid the conclusion that Atahuallpa "should" have been more suspicious, if only his society had experienced a broader range of human behavior. Pizarro too arrived at Cajamarca with no information about the Incas other than what he had learned by interrogating the Inca subjects he encountered in 1527 and 1531.
However, while Pizarro himself happened to be illiterate, he belonged to a literate tradition. From books, the Spaniards knew of many contemporary civilizations remote from Europe, and about several thousand years of European history. Pizarro explicitly modeled his ambush of Atahuallpa on the successful strategy of Cortes. In short, literacy made the Spaniards heirs to a huge body of knowledge about human behavior and history. By contrast, not only did Atahuallpa have no conception of the Spaniards themselves, and no personal experience of any other invaders from overseas, but he also had not even heard (or read) of similar threats to anyone else, anywhere else, anytime previously in history. That gulf of experience encouraged Pizarro to set his trap and Atahuallpa to walk into it.
Comment by matthew-barnett on Coherence arguments do not imply goal-directed behavior · 2020-03-07T21:59:21.678Z · score: 6 (3 votes) · LW · GW

See also Alex Turner's work on formalizing instrumentally convergent goals, and his walkthrough of the MIRI paper.

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-02T21:13:01.792Z · score: 2 (1 votes) · LW · GW
That's not what I said.

That's fair. I didn't actually quite understand what your position was and was trying to clarify.