Posts

My guide to lifelogging 2020-08-28T21:34:40.397Z
Preface to the sequence on economic growth 2020-08-27T20:29:24.517Z
What specific dangers arise when asking GPT-N to write an Alignment Forum post? 2020-07-28T02:56:12.711Z
Are veterans more self-disciplined than non-veterans? 2020-03-23T05:16:18.029Z
What are the long-term outcomes of a catastrophic pandemic? 2020-03-01T19:39:17.457Z
Gary Marcus: Four Steps Towards Robust Artificial Intelligence 2020-02-22T03:28:28.376Z
Distinguishing definitions of takeoff 2020-02-14T00:16:34.329Z
The case for lifelogging as life extension 2020-02-01T21:56:38.535Z
Inner alignment requires making assumptions about human values 2020-01-20T18:38:27.128Z
Malign generalization without internal search 2020-01-12T18:03:43.042Z
Might humans not be the most intelligent animals? 2019-12-23T21:50:05.422Z
Is the term mesa optimizer too narrow? 2019-12-14T23:20:43.203Z
Explaining why false ideas spread is more fun than why true ones do 2019-11-24T20:21:50.906Z
Will transparency help catch deception? Perhaps not 2019-11-04T20:52:52.681Z
Two explanations for variation in human abilities 2019-10-25T22:06:26.329Z
Misconceptions about continuous takeoff 2019-10-08T21:31:37.876Z
A simple environment for showing mesa misalignment 2019-09-26T04:44:59.220Z
One Way to Think About ML Transparency 2019-09-02T23:27:44.088Z
Has Moore's Law actually slowed down? 2019-08-20T19:18:41.488Z
How can you use music to boost learning? 2019-08-17T06:59:32.582Z
A Primer on Matrix Calculus, Part 3: The Chain Rule 2019-08-17T01:50:29.439Z
A Primer on Matrix Calculus, Part 2: Jacobians and other fun 2019-08-15T01:13:16.070Z
A Primer on Matrix Calculus, Part 1: Basic review 2019-08-12T23:44:37.068Z
Matthew Barnett's Shortform 2019-08-09T05:17:47.768Z
Why Gradients Vanish and Explode 2019-08-09T02:54:44.199Z
Four Ways An Impact Measure Could Help Alignment 2019-08-08T00:10:14.304Z
Understanding Recent Impact Measures 2019-08-07T04:57:04.352Z
What are the best resources for examining the evidence for anthropogenic climate change? 2019-08-06T02:53:06.133Z
A Survey of Early Impact Measures 2019-08-06T01:22:27.421Z
Rethinking Batch Normalization 2019-08-02T20:21:16.124Z
Understanding Batch Normalization 2019-08-01T17:56:12.660Z
Walkthrough: The Transformer Architecture [Part 2/2] 2019-07-31T13:54:44.805Z
Walkthrough: The Transformer Architecture [Part 1/2] 2019-07-30T13:54:14.406Z

Comments

Comment by matthew-barnett on Anti-Aging: State of the Art · 2021-01-11T01:20:39.627Z · LW · GW

You're right about (1). I seem to have misread the chart, presumably because I was focused on worms.

Concerning (2), I don't see how your argument implies that the marginal returns to new resources are high. Can you clarify?

Comment by matthew-barnett on Two explanations for variation in human abilities · 2021-01-10T17:29:38.711Z · LW · GW

The formulations are basically just lifted from the post verbatim, so the response might be some evidence that it would be good to rework the post a bit before people vote on it.

But I think I already addressed the fundamental reply at the beginning of section 2. The theses themselves are lifted from the post verbatim; however, I state that they are incomplete.

Maybe you'd class that under "background knowledge"? Or maybe the claim is that, modulo broken parts, motivation, and background knowledge, different people can meta-learn the same effective learning strategies? 

I would really rather avoid making strict claims about learning rates being "roughly equal" and would prefer to talk about how, given the same learning environment (say, a lecture) and backgrounds, human learning rates are closer to equal than human performance in learned tasks.

Comment by matthew-barnett on Two explanations for variation in human abilities · 2021-01-10T00:11:45.794Z · LW · GW

I think it's important to understand that the two explanations I gave in the post can work together. After more than a year, I would state my current beliefs as something closer to the following thesis:

Given equal background and motivation, there is a lot less inequality in the rates at which humans learn new tasks, compared to the inequality in how humans perform learned tasks. By "less inequality" I don't mean "roughly equal" as your prediction-specifications would indicate; the reason is that human learning rates are still highly unequal, despite the fact that nearly all humans have similar neural architectures. As I explained in section two of the post, a similar architecture does not imply similar performance. A machine with a broken part is nearly structurally identical to a machine with no broken parts, yet it does not work.

Comment by matthew-barnett on Anti-Aging: State of the Art · 2021-01-02T00:21:17.322Z · LW · GW

The personal strategies for slowing aging are interesting, but I was under the impression that your post's primary thesis was that we should give money to, work for, and volunteer for anti-aging organizations. It's difficult to see how doing any of that would personally make me live longer, unless we're assuming unrealistic marginal returns to more effort.

In other words, it's unclear why you're comparing anti-aging and cryonics in the way you described. In the case of cryonics, people are looking for a selfish return. In the case of funding anti-aging, people are looking for an altruistic return. A more apt comparison would be about prioritizing cryonics vs. personal anti-aging strategies, but your main post didn't discuss personal anti-aging strategies.

Comment by matthew-barnett on Anti-Aging: State of the Art · 2021-01-01T22:43:06.117Z · LW · GW

I appreciate the detailed and thoughtful reply. :)

I and others think that anti-aging and donating to SENS is probably a more important cause area than most EA cause areas (especially short-term ones) besides X-risk for the reasons below.

I agree that anti-aging is neglected in EA compared to other short-term, human-focused cause areas. The reason is likely that the people who would be most receptive to anti-aging move to other fields. As Pablo Stafforini said,

Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.

I wrote a post about how anti-aging might be competitive with longtermist charities here.

Data from human trials suggest many of these approaches have already been shown to reduce the rate of cognitive impairment, cancer, and many other features of aging in humans. Given these changes are highly correlated with biological aging, the evidence strongly suggests the capacity for the approaches mentioned to slow biological aging in humans.

Again, this is nice, and I think it's good evidence that we could achieve modest success in the coming decades. But in the post you painted a different picture. Specifically, you said,

The 'white mirror' of aging is a world in which biological age is halted at 20-30 years, and people maintain optimal health for a much longer or indefinite period of time. Although people will still age chronologically (exist over time) they will not undergo physical and cognitive decline associated with biological aging. At chronological ages of 70s, 80s, even 200s, they would maintain the physical appearance and much lower disease risk of a 20-30-year-old.

If humans make continuous progress, then eventually we'll get here. I have no issue with that prediction. But my objection concerned the pace and tractability of research. And it seems like there's going to be a ton of work going from modest treatments for aging to full cures.

One possible response is that the pace of research will soon speed up dramatically. Aubrey de Grey has argued along these lines on several occasions. In his opinion, there will be a point at which humanity wakes up from its pro-aging trance. From this perspective, the primary value of research in the present is to advance the timeline when humanity wakes up and gets started on anti-aging for real.

Unfortunately, I see no strong evidence for this theory. People's minds tend to change gradually in response to gradual technological change. The researchers who this year said "I'll wait until you have robust mouse rejuvenation" will just say "I'll wait until you have results in humans" when you have results in mice. Humans aren't going to just suddenly realize that their whole ethical system is flawed; that rarely ever happens.

More likely, we will see gradual progress over several decades. I'm unsure whether the overall project (ie. longevity escape velocity) will succeed within my own lifetime, but I'm very skeptical that it will happen within eg. 20 years.

In addition, in the past 2 years, human biological aging has already been reversed using calorie restriction, and with thymic rejuvenation, as measured by epigenetic (DNAm) aging.

I don't think either of these results are strong evidence of recent progress. Calorie restriction has been known about for at least 85 years. The thymic rejuvenation result was a tiny trial with ten participants, and the basic results have been known since at least 1992.

The recent progress in epigenetic clocks is promising, and I do think that's been one of the biggest developments in the field. But it's important to see the bigger picture. When I open up old Alcor Magazine archives, or old longevity books from the 1980s and 1990s, I find pretty much the same arguments that I hear today for why a longevity revolution is near. People tend to focus on a few small laboratory successes without considering whether the rate of laboratory successes has gone up, or whether it's common to quickly go from laboratory success to clinical success.

Given that 86 percent of clinical trials eventually fail, and the marginal returns to new drug R&D have gone down exponentially over time, I want to know what specifically should make us optimistic about anti-aging that's different from previous failed predictions.

I understand that the number of longevity biotech companies may (wrongly) suggest that the field is well-funded. But this number is not an accurate proxy for the relative funding received by basic geroscience to develop cures for aging, from which these companies are spun out.

If the number of companies working on rejuvenation biotechnology did not accurately represent the amount of total effort in the field, then what was the point of bringing it up in the introduction?

I think many EA's assume academia is an efficient market that will self-correct to prioritise research with the greatest potential impact

Interestingly, I get the opposite impression. But maybe we talk to different EAs.

Aubrey de Grey who has significant insight into the landscape of funding for anti-aging believes that $250-500 million over 10 years is required to kickstart the field sufficiently so that larger sources of funding will flow in.

I don't doubt Aubrey de Grey's expertise or his intentions. But I've heard him say this line too, and I've never heard him give any strong arguments for it. Why isn't the number $10 billion or $1 trillion? If you think about comparably large technological projects in the past, $500 million is a paltry sum; yet, I don't see a good reason to believe that this field is different from all the others. Moreover, there is a well-known bias that people within a field are more optimistic about their work than people outside of it.

For example, a drug or cocktail of therapies that extend life of all humans on Earth by 10 years essentially allows 10-years' worth of people who would otherwise have died of aging (~400 million people) to potentially reach the point at which AI solves aging and hence, longevity escape velocity.

This is only true so long as the drug can be distributed widely almost instantaneously. By comparison, it usually takes vaccines several decades to be widely distributed. I also find it very unlikely that any currently researched treatment will add 10 years of healthy life discontinuously. Again, progress tends to happen gradually.

Comment by matthew-barnett on Anti-Aging: State of the Art · 2021-01-01T21:55:10.280Z · LW · GW

Oops, that was a typo. I meant curing cancer. And I overlooked the typo twice! Oops.

Comment by matthew-barnett on Anti-Aging: State of the Art · 2021-01-01T19:19:08.282Z · LW · GW

This seems untrue on its face. What we mean by "curing aging" is negligible senescence.

And presumably what the cancer researcher meant by curing cancer was something like, "Can reliably remove tumors without them growing back"? Do you have evidence that we have not done this in mice?

Comment by matthew-barnett on Against GDP as a metric for timelines and takeoff speeds · 2021-01-01T08:00:21.811Z · LW · GW

In addition to the reasons you mentioned, there's also empirical evidence that technological revolutions generally precede the productivity growth that they eventually cause. In fact, economic growth may even slow down as people pay costs to adopt new technologies. Philippe Aghion and Peter Howitt summarize the state of the research in chapter 9 of The Economics of Growth,

Although each [General Purpose Technology (GPT)] raises output and productivity in the long run, it can also cause cyclical fluctuations while the economy adjusts to it. As David (1990) and Lipsey and Bekar (1995) have argued, GPTs like the steam engine, the electric dynamo, the laser, and the computer require costly restructuring and adjustment to take place, and there is no reason to expect this process to proceed smoothly over time. Thus, contrary to the predictions of real-business-cycle theory, the initial effect of a “positive technology shock” may not be to raise output, productivity, and employment but to reduce them.

Comment by matthew-barnett on Anti-Aging: State of the Art · 2021-01-01T06:42:41.100Z · LW · GW

As an effective altruist, I like to analyze how altruistic cause areas fare on three different axes: importance, tractability and neglectedness. The arguments you gave for the importance of aging are compelling to me (at least from a short-term, human-focused perspective). I'm less convinced that anti-aging efforts are worth it according to the other axes, and I'll explain some of my reasons here.

The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans.
[...]
In the lab, we have demonstrated that various anti-aging approaches can extend healthy lifespan in many model organisms including yeast, worms, fish, flies, mice and rats. Life extension of model organisms using anti-aging approaches ranges from 30% to 1000%

When looking at the graph you present, a clear trend emerges: the more complex and larger the organism, the less progress we have made on slowing aging for that organism. Given that humans are much more complex and larger than the model organisms you presented, I'd caution against extrapolating lab results to them.

I once heard from a cancer researcher that we had, for all practical purposes, cured cancer in mice, but the results have not yet translated into humans. Whether or not this claim is true, it's clear that progress has been slower than the starry-eyed optimists had expected back in 1971.

That's not to say that there hasn't been progress in cancer research, or biological research more broadly. It's just that progress tends to happen gradually. I don't doubt that we can achieve modest success; I think it's plausible (>30% credence) that we will have FDA approved anti-aging treatments by 2030. But I'm very skeptical that these modest results will trigger an anti-aging revolution that substantially affects lifespan and quality of life in the way that you have described.

Most generally, scientific fields tend to have diminishing marginal returns, since all the low-hanging fruit tends to get plucked early on. In the field of anti-aging, even the lowest hanging fruit (ie. the treatments you described) don't seem very promising. At best, they might deliver an impact roughly equivalent to adding a decade or two of healthy life. At that level, human life would be meaningfully affected, but the millennia-old cycle of birth-to-death would remain almost unchanged.

Today, there are over 130 longevity biotechnology companies

From the perspective of altruistic neglectedness, this fact counts against anti-aging as a promising field to go into. The fact that there are 130 companies working on the problem with only minor laboratory success in the last decade indicates that the marginal returns to new inputs are low. One more researcher, or one more research grant, will add little to the rate of progress.

In my opinion, if robust anti-aging technologies do exist in, say, 50 years, the most likely reason would be that overall technological progress sped up dramatically (for example, due to transformative AI), and progress in anti-aging was merely a side effect of this wave of progress.

It's also possible that anti-aging science is a different kind of science than most fields, and we have reason to expect a discontinuity in progress some time soon (for one potential argument, see the last several paragraphs of my post here). The problem is that this argument is vulnerable to the standard reply usually given against arguments for technological discontinuities: they're rare.

(However, I do recommend reading some material investigating the frequency of technological discontinuities here. Maybe you can find some similarities with past technological discontinuities? :) )

Comment by matthew-barnett on Forecasting Thread: AI Timelines · 2020-09-04T00:53:07.514Z · LW · GW
  • Your percentiles:
    • 5th: 2040-10-01
    • 25th: above 2100-01-01
    • 50th: above 2100-01-01
    • 75th: above 2100-01-01
    • 95th: above 2100-01-01

XD

Comment by matthew-barnett on Forecasting Thread: AI Timelines · 2020-09-03T23:59:37.594Z · LW · GW

If AGI is taken to mean the first year that there is radical economic, technological, or scientific progress, then these are my AGI timelines.

My percentiles

  • 5th: 2029-09-09
  • 25th: 2049-01-17
  • 50th: 2079-01-24
  • 75th: above 2100-01-01
  • 95th: above 2100-01-01

I have a bit lower probability for near-term AGI than many people here. I model my biggest disagreement as being about how much work is required to move from high-cost impressive demos to real economic performance. I also have an intuition that it is really hard to automate everything, and that progress will be bottlenecked by the tasks that are essential but very hard to automate.

Comment by matthew-barnett on Reflections on AI Timelines Forecasting Thread · 2020-09-03T10:21:16.733Z · LW · GW

Here, Metaculus predicts when transformative economic growth will occur. Current status:

25% chance before 2058.

50% chance before 2093.

75% chance before 2165.

Comment by matthew-barnett on My guide to lifelogging · 2020-08-28T22:28:54.651Z · LW · GW
Other pros of some body cams: goes underwater without a casing blocking the mic (I think)

I haven't tried it, but I don't think it can go underwater. It is built to be water resistant but I'm not confident it can be completely submerged. Therefore, if you are a frequent snorkeler, I recommend getting an action camera.

Comment by matthew-barnett on Forecasting Thread: AI Timelines · 2020-08-27T03:57:18.595Z · LW · GW

It's unclear to me what "human-level AGI" is, and it's also unclear to me why the prediction is about the moment an AGI is turned on somewhere. From my perspective, the important thing about artificial intelligence is that it will accelerate technological, economic, and scientific progress. So, the more important thing to predict is something like, "When will real economic growth rates reach at least 30% worldwide?"

It's worth comparing the vagueness in this question with the specificity in this one on Metaculus. From the Twelve Virtues of Rationality,

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade.
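
To make the quoted point concrete, here is a minimal sketch (my own illustration, not part of the quote) using a logarithmic scoring rule, a standard way of rewarding forecasts for being both correct and precise. The numbers mirror the quote's example: a vague forecaster spreads probability over [1, 100], a precise one over [40, 50], and the true quantity is 42.

```python
import math

def log_score_uniform(low: int, high: int, outcome: int) -> float:
    """Log score of a forecast uniform over the integers [low, high]."""
    width = high - low + 1
    if low <= outcome <= high:
        # Narrower intervals put more probability on each value inside them,
        # so they earn a higher (less negative) score when they contain the truth.
        return math.log(1.0 / width)
    # ...but a narrow forecast that misses the truth is penalized without limit.
    return float("-inf")

truth = 42
print(log_score_uniform(1, 100, truth))   # ~ -4.61: correct, but weakly rewarded
print(log_score_uniform(40, 50, truth))   # ~ -2.40: correct and precise, rewarded more
```

Both forecasters are "correct," but the narrower one scores higher exactly because it exposed itself to a stricter test.
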
Comment by matthew-barnett on What specific dangers arise when asking GPT-N to write an Alignment Forum post? · 2020-07-28T06:01:37.307Z · LW · GW
To me the most obvious risk (which I don't ATM think of as very likely for the next few iterations, or possibly ever, since the training is myopic/SL) would be that GPT-N in fact is computing (e.g. among other things) a superintelligent mesa-optimization process that understands the situation it is in and is agent-y.

Do you have any idea of what the mesa objective might be? I agree that this is a worrisome risk, but I was more interested in the type of answer that specified, "Here's a plausible mesa objective given the incentives." Mesa optimization is a more general risk that isn't specific to the narrow training scheme used by GPT-N.

Comment by matthew-barnett on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T03:41:55.669Z · LW · GW
It’s embarrassing that I was confidently wrong about my understanding of so many things in the same domain. I’ve updated towards thinking that microeconomics is trickier than most other similarly simple-seeming subjects like physics, math, or computer science. I think that the above misconceptions are more serious than any misconceptions about other technical fields which I’ve discovered over the last few years

For some of these, I'm confused about your conviction that you were "confidently wrong" before. It seems that the general pattern here is that you used the Econ 101 model to interpret a situation, and then later discovered that there was a more complex model that provided different implications. But isn't it kind of obvious that for something in the social sciences, there's always going to be some sort of more complex model that gives slightly different predictions?

When I say that a basic model is wrong, I mean that it gives fundamentally incorrect predictions, and that a model of similar complexity would provide better ones. However, at least in the cases of (3) and (4), I'm not sure I'd really describe your previous models as "wrong" in this sense. And I think there's a meaningful distinction between saying you were wrong and saying you gained a more nuanced understanding of something.

Comment by matthew-barnett on Modelling Continuous Progress · 2020-06-23T19:20:35.063Z · LW · GW
Second, the major disagreement is between those who think progress will be discontinuous and sudden (such as Eliezer Yudkowsky, MIRI) and those who think progress will be very fast by normal historical standards but continuous (Paul Christiano, Robin Hanson).

I'm not actually convinced this is a fair summary of the disagreement. As I explained in my post about different AI takeoffs, I had the impression that the primary disagreement between the two groups was over locality rather than the amount of time takeoff lasts. Though of course, I may be misinterpreting people.

Comment by matthew-barnett on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-06-09T17:29:29.880Z · LW · GW

I tend to think that the pandemic shares more properties with fast takeoff than it does with slow takeoff. Under fast takeoff, a very powerful system will spring into existence after a long period of AI being otherwise irrelevant, in a similar way to how the virus was dormant until early this year. The defining feature of slow takeoff, by contrast, is a gradual increase in abilities from AI systems all across the world.

In particular, I object to this portion of your post,

The "moving goalposts" effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the "no fire alarm" hypothesis to hold in the slow takeoff scenario - there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as "overblown" until it is too late to address them.

I'm not convinced that these parallels to COVID-19 are very informative. Compared to this pandemic, I expect the direct effects of AI to be very obvious to observers, in a similar way that the direct effects of cars are obvious to people who go outside. Under a slow takeoff, AI will already be performing a lot of important economic labor before the world "goes crazy" in the important senses. Compare this to the pandemic, in which:

  • It is not empirically obvious that it's worse than a seasonal flu (we only know that it is thanks to careful data analysis after months of collection).
  • It's not clearly affecting everyone around you in the way that cars, computers, software, and other forms of engineering are.
  • It is considered natural, and primarily affects old people who are conventionally considered to be less worthy of concern (though people pay lip service to denying this).

Comment by matthew-barnett on Is AI safety research less parallelizable than AI research? · 2020-05-11T21:20:58.800Z · LW · GW

For an alternative view, you may find this response from an 80,000 Hours podcast interesting. Here, Paul Christiano appears to reject the claim that AI safety research is less parallelizable.

Robert Wiblin: I guess there’s this open question of whether we should be happy if AI progress across the board just goes faster. What if yes, we can just speed up the whole thing by 20%. Both all of the safety and capabilities. As far as I understand there’s kind of no consensus on this. People vary quite a bit on how pleased they’d be to see everything speed up in proportion.
Paul Christiano: Yes. I think that’s right. I think my take which is a reasonably common take, is it doesn’t matter that much from an alignment perspective. Mostly, it will just accelerate the time at which everything happens and there’s some second-order terms that are really hard to reason about like, “How good is it to have more computing hardware available?” Or ”How good is it for there to be more or less kinds of other political change happening in the world prior to the development of powerful AI systems?”
There’s these higher order questions where people are very uncertain of whether that’s good or bad but I guess my take would be the net effect there is kind of small and the main thing is I think accelerating AI matters much more on the like next 100 years perspective. If you care about welfare of people and animals over the next 100 years, then acceleration of AI looks reasonably good.
I think that’s like the main upside. The main upside of faster AI progress is that people are going to be happy over the short term. I think if we care about the long term, it is roughly awash and people could debate whether it’s slightly positive or slightly negative and mostly it’s just accelerating where we’re going.
Comment by matthew-barnett on What would you do differently if you were less concerned with looking weird? · 2020-05-08T19:24:14.730Z · LW · GW

I'd probably look into brain-reading tech, and try to figure out whether I can record my brain state at all times during the day (including when I'm with other people). I'd also write more posts about very speculative cause areas that don't have much evidence, though my concern here might not be weirdness necessarily, but rather that people will judge my thinking quality to be poor or something.

Comment by matthew-barnett on OpportunisticBot's Shortform · 2020-04-24T03:11:15.174Z · LW · GW

In order for a voluntary eugenics scheme to work, the trait must be genetically heritable. What evidence is there that Alzheimer's is strongly heritable? If there were a gene for Alzheimer's, we could perform genetic testing and then implement the scheme you described. Personally, I'm pretty skeptical that this is a good use of money, for a few reasons.

As you said, the majority of sufferers of Alzheimer's are over 65. That means that it will take a minimum of 65 years for this scheme to start having any big effects. Over such timelines, I think it's plausible that there are more powerful technologies on the horizon. For instance, rather than focus on old-fashioned eugenics, why not push for genetic engineering as a solution to Alzheimer's?

Comment by matthew-barnett on Are there any active play-money prediction markets online? · 2020-04-12T09:40:32.818Z · LW · GW

I would describe Metaculus as a "play-money" prediction market. Why don't you think it's a prediction market? Players/users are rewarded with points (e.g. 'play-money') for making good/better predictions. What's missing?

What's missing is the ability to sell my shares of "yes" or "no" to other users. A market requires having the ability to exchange commodities. I think it's probably better to describe Metaculus as a prediction aggregator with an incentive system.

Comment by matthew-barnett on Why I'm Not Vegan · 2020-04-10T09:14:55.196Z · LW · GW

I view the value of veganism for effective altruists as a way to purchase moral consistency. If you take animal suffering very seriously but aren't vegan, you can feel a bit emotionally uneasy unless you have a strong ability to dissociate yourself from your choices. There are costs to purchasing moral consistency, of course, but no more than many other luxuries.

This way of approaching veganism mirrors a countersignaling framework:

  • Normal people who don't care at all about animals see veganism as a personal choice
  • People who care quite a bit about animal suffering see veganism as a moral imperative, not a personal choice.
  • Consequentialists who just want to reduce suffering, and are impartial to the methods they use to reduce suffering, tend to think that veganism is only worth it if the personal costs aren't high.

Tobias Leenaert has used the word post-vegan to describe the third stage, and I quite like the label myself.

Comment by matthew-barnett on Why I'm Not Vegan · 2020-04-10T09:06:25.279Z · LW · GW
if everyone was vegan, the problem would be solved

To generalize, while the problem of animal abuse would go away, the problem of animal suffering wouldn't. Likewise, unplugging your microwave would solve the problem of microwaves using too much energy, but wouldn't solve the problem of efficient energy capture, or climate change.

Comment by matthew-barnett on An alarm bell for the next pandemic · 2020-04-06T06:42:28.644Z · LW · GW

The thing I regret the most was not explicitly debunking myths that were common back in the early days. For example, a lot of people told me that this was just going to disappear like SARS. But I had actually gotten my hands on a dataset, and SARS looked way different. There was little reason to believe it was like SARS.

I also didn't give people financial advice because I'm very averse to doing stuff like that, and I thought people would generally just say that I'm "trying to beat the market" against a common consensus that beating the market was impossible. Though once stuff started happening, I started to try my hand at it.

Comment by matthew-barnett on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-04T07:19:40.224Z · LW · GW
January 29th on the EA Forum

For vanity reasons, I'm going to point out that it was actually the 26th.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T23:24:42.955Z · LW · GW

The idea is that the context surrounding this pandemic is unique.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T21:03:33.076Z · LW · GW

Many LW people have now taken the view that the market is not efficient when it comes to black swan events.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T19:44:44.934Z · LW · GW

I think what ignoranceprior was originally asking was, given all the information you know, what is your best estimate of the infection fatality rate? Best estimate in this case implies adjusting for ways that some research can be wrong, and taking into account the rebuttals you've read here.

Comment by matthew-barnett on MichaelA's Shortform · 2020-03-28T08:26:02.997Z · LW · GW

I think you should add Clarifying some key hypotheses in AI alignment.

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T08:14:15.398Z · LW · GW

If you can predict the result of the data ahead of time, that seems very important for making decisions (eg. predicting stock market moves).

Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T07:33:24.517Z · LW · GW
The accuracy of similar tests for influenza is generally 50–70%.

I don't think these tests are similar. See here,

All of the coronavirus tests being used by public health agencies and private labs around the world start with a technique called polymerase chain reaction, or PCR, which can detect tiny amounts of a virus’s genetic material. SARS-CoV-2, the virus that causes COVID-19, has RNA as its genetic material. That RNA must first be copied into DNA. “That’s a lengthy part of the process, too,” says Satterfield, adding 15 to 30 minutes to the test.
[...]
Many doctors’ offices can do a rapid influenza test. But those flu tests don’t use PCR, Satterfield says. Instead, they detect proteins on the surface of the influenza virus. While the test is quick and cheap, it’s also not nearly as sensitive as PCR in picking up infections, especially early on before the virus has a chance to replicate, he says. By the CDC’s estimates, rapid influenza tests may miss 50 percent to 70 percent of cases that PCR can detect. The low sensitivity can lead to many false negative test results.
Comment by matthew-barnett on The case for C19 being widespread · 2020-03-28T01:56:52.832Z · LW · GW
This paper was written by an international team of highly cited disease modellers who know about the Diamond Princess and have put their reputation on the line to make the case that the hypothesis of a high infection rate and low infection fatality might be true.

Yes, but when you actually read the paper (I read some parts), it says that their model is based on an assumption of a low IFR, and does not itself argue for a low IFR (feel free to prove me wrong here).

Comment by matthew-barnett on The case for lifelogging as life extension · 2020-03-26T20:59:50.113Z · LW · GW

This concept apparently goes back at least as far as Robert Ettinger, the originator of cryonics. From his seminal book introducing cryonics,

We normally think of information about the body as being preserved in the body - but this is not the only possibility. It is conceivable that ordinary written records, photographs, tapes, etc. may give future technicians enough clues to fill in missing or damaged areas in the brain of the frozen.
The time will certainly come when the brain's method of coding memories is thoroughly understood, and messages can be "read" directly from nervous tissue, and also "read" into it. It is not likely that the relation will be a simple one, nor will it necessarily even be exactly the same for every brain; nevertheless, by knowing that the frozen had a certain item of information, it may be possible to infer helpful conclusions about the character of certain regions in his brain and its cells and molecules.
Similarly, a mass of detailed information about what he did may allow advanced physiological psychologists to deduce important conclusions about what he was, once more providing opportunity to fill in gaps in brain structure.
It follows that we should all make reasonable efforts to obtain and preserve a substantial body of data concerning what we have seen, heard, felt, thought, said, written, and done in the course of our lives. These should probably include a battery of psychological tests. Encephalograms might also be useful.
Comment by matthew-barnett on 3 Interview-like Algorithm Questions for Programmers · 2020-03-26T16:32:42.606Z · LW · GW

Nevermind.

Comment by matthew-barnett on Adding Up To Normality · 2020-03-26T01:17:04.290Z · LW · GW
What are those implications?

Without heliocentrism (and its extension to other stars), it seems that the entire idea of going to space and colonizing the stars would not be on the table, because we wouldn't fundamentally even understand what stuff was out there. Since colonizing space is arguably the number one long-term priority for utilitarians, heliocentrism is a groundbreaking theory of immense ethical importance. Without it, we would not have any desire to expand beyond the Earth.

I tend to prefer dealing with applications, not implications

Colonizing the universe is indeed an application.

Comment by matthew-barnett on Adding Up To Normality · 2020-03-25T21:07:29.400Z · LW · GW
"But many worlds implies..." No, it doesn't.

It seems implausible that a physical theory of the universe, especially one so fundamental to our understanding of matter, would have literally no practical implications. The geocentric and heliocentric models of the solar system give you the same predictions about where the stars will be in the sky, but the heliocentric model has important implications for the ethics of space travel. Other scientific revolutions have similarly had enormous effects on our interpretation of the world.

Can you point to why this physical dispute is different?

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-24T23:34:49.757Z · LW · GW

Both could be relevant. It could be that a subgroup that makes up the majority of the military gets benefits, so the median veteran has higher productivity, but due to a small subgroup, the mean is lower. Any result seems interesting here.

[ETA: Don't you think something like, "People in the Army have lower productivity but people in the Air Force have higher" would be interesting? I just am looking for something that's relevant to the central question of the post: can training have long-term benefits on self-discipline?]

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-24T21:49:35.344Z · LW · GW
I wouldn't consider mid-30s to be old, and my guess is that those laws are protecting people at least 40 years old

To be clear, that was exactly my point. The laws themselves just specify that you can't discriminate based on age. It is possible that many veterans receive a benefit to self-discipline during their service, but the laws still exist because other veterans do not have that benefit -- similar to how some older people are actually more hirable even if there's another group who isn't.

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-24T03:41:18.276Z · LW · GW

I'm not sure that follows. For many jobs, we know that people in their mid-30s are generally more productive than people who are in early career, for example. But there are still anti-discrimination laws against refusing to hire older people. Point being that while some of X might be good, too much of X could be bad. This could tie into Ryan's point above that while there could be some average productivity benefits, for exceptional cases,

I expect that the veterans who fail to re-adapt to civilian life suffer an almost complete collapse of productivity.

[ETA: Also, wouldn't you expect there to be charities for some interest group even if they were better off on average? Especially if they held a revered role within society.]

Comment by matthew-barnett on Are veterans more self-disciplined than non-veterans? · 2020-03-23T20:54:38.868Z · LW · GW
is wrong as a consequence, because you can never train yourself like you are in the Army. That fundamentally needs a group, entirely separate from the question of social incentives and environment.

How is a group separate from the question of social incentives and environment? Having a group of people to motivate you seems intrinsically like a question of social incentives and environments.

I took my friend's suggestion to be less that we can actually gather the resources to train ourselves like we are in the military, and more that if we were to do so, it would improve our discipline in the long run. Hence the popular wisdom (or misconception) that the military "straightens people out."

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-22T21:13:36.437Z · LW · GW
weaker claim?

Oops, yes. That's the weaker claim, which I agree with. The stronger claim is that because we can't understand something "all at once", mechanistic transparency is too hard and so we shouldn't take Daniel's approach. But the way we understand laptops is also in a mechanistic sense. No one argues that because laptops are too hard to understand all at once, we shouldn't try to understand them mechanistically.

This seems to be assuming that we have to be able to take any complex trained AGI-as-a-neural-net and determine whether or not it is dangerous. Under that assumption, I agree that the problem is itself very hard, and mechanistic transparency is not uniquely bad relative to other possibilities.

I didn't assume that. I objected to the specific example of a laptop as an instance of mechanistic transparency being too hard. Laptops are normally understood well because understanding can be broken into components and built up from abstractions. But our understanding of each component and abstraction is pretty mechanistic -- and this understanding is useful.

Furthermore, because laptops did not fall out of the sky one day, but instead were built up slowly over successive years of research and development, they seem like a great example of how Daniel's mechanistic transparency approach does not rely on us having to understand arbitrary systems. Just as we built up an understanding of laptops, presumably we could do the same with neural networks. This was my interpretation of why he is using Zoom In as an example.

All of the other stories for preventing catastrophe that I mentioned in the grandparent are tackling a hopefully easier problem than "detect whether an arbitrary neural net is dangerous".

Indeed, but I don't think this was the crux of my objection.

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-22T08:08:49.991Z · LW · GW
I'd be shocked if there was anyone to whom it was mechanistically transparent how a laptop loads a website, down to the gates in the laptop.

Could you clarify why this is an important counterpoint? It seems obviously useful to understand mechanistic details of a laptop in order to debug it. You seem to be arguing the [ETA: weaker] claim that nobody understands an entire laptop "all at once", as in, they can understand all the details in their head simultaneously. But such an understanding is almost never possible for any complex system, and yet we still try to approach it. So this objection could show that mechanistic transparency is hard in the limit, but it doesn't show that mechanistic transparency is uniquely bad in any sense. Perhaps you disagree?

Comment by matthew-barnett on An Analytic Perspective on AI Alignment · 2020-03-21T20:20:59.511Z · LW · GW

I liked it.

Comment by matthew-barnett on March Coronavirus Open Thread · 2020-03-13T21:59:35.595Z · LW · GW

Are the economic forecasts still too sunny?

(Warning: Long comment)

Two weeks ago Wei Dai released his financial statement on his bet that the coronavirus would negatively impact the stock market. Since then (at the time of writing) the S&P has dropped another 9%. This move has been considered by many to be definitive evidence against the efficient market hypothesis, given that the epistemic situation with respect to the coronavirus has apparently not changed much in weeks (at least to a first approximation).

One hypothesis for why the stock market reacted as it did seems to be that people are failing to take exponential growth of the virus into account, and thus make overly optimistic predictions. This parallels Ray Kurzweil's observations of how people view technological progress,

When people think of a future period, they intuitively assume that the current rate of progress will continue for future periods. However, careful consideration of the pace of technology shows that the rate of progress is not constant, but it is human nature to adapt to the changing pace, so the intuitive view is that the pace will continue at the current rate. [...] From the mathematician’s perspective, a primary reason for this is that an exponential curve approximates a straight line when viewed for a brief duration.
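
As a minimal sketch of that last sentence (my own illustration, with hypothetical numbers): fit a straight line to the first few days of an epidemic curve that doubles every three days, extrapolate a month out, and the linear view undershoots by roughly two orders of magnitude.

```python
cases0 = 1_000        # hypothetical starting case count
doubling_days = 3     # hypothetical doubling time

def exponential(day: float) -> float:
    """Case count on a given day under constant doubling."""
    return cases0 * 2 ** (day / doubling_days)

# A line fit through days 0 and 3 is nearly indistinguishable from the
# exponential over that brief window -- the intuitive "current rate".
slope = (exponential(3) - exponential(0)) / 3
linear_day30 = cases0 + slope * 30

print(f"linear extrapolation, day 30: {linear_day30:>12,.0f}")    # ~11,000
print(f"exponential growth, day 30:   {exponential(30):>12,.0f}")  # ~1,024,000
```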

The idea that smart investors don't understand exponential curves is absurd on its face, so another hypothesis is that people were afraid to "ring the alarm bell" about the virus, since no one else was ringing it at the time.

Determining which of the above hypotheses is true is important for determining whether you expect the market to continue declining. To see why, consider that if the "alarm bell" hypothesis were true, you might expect that now that the alarm bell has been set off, you have no epistemic advantage over the market. The efficient market is thus reset. Nonetheless, the alarm bell might be a gradient, and therefore it could be that more people have yet to ring it. And of course both hypotheses might have some grain of truth.

Now that the market has dropped another 9%, the question on every investor's mind is, will it drop further? Yet, if the efficient market has really been debunked, then answering this question should be doable -- and I make a minimal attempt to do so here.

The approach I take in this post is to analyze the working assumptions of the most recent economic forecasts I could find, ie. try to determine what conditions they expect, which lead to their predictions. If I find these working assumptions to underestimate the virus' impact based on my best estimates, then I conclude, very tentatively, that the forecast is still too sunny. Otherwise, I conclude that the alarm bell has been rung. Overall, there are no fast and easy conclusions here.

The main issue is that this crisis has unfolded far too quickly for many up-to-date forecasts to come out. Still, I find a few that may help in my inquiry.

Disclaimer: I am in no position to offer specific financial advice to anyone, and I write this post for informational purposes only. I have no expertise in finance, and I am not creating this post to be authoritative. Please do not cite this post as proof that everyone should do some action X.

My Parameters

I offer the following predictions about particular parameters of the virus. I admit that many of my parameters are probably wrong. But at the same time, I make a stronger claim that no one else really has a much better idea of what they are talking about. Of course, I gladly welcome people to critique my estimates here.

  • I expect that the coronavirus will infect at least a few hundred million people by 1/1/2022. However, I think that as the virus progresses, people will take it very seriously, which implies that the reproduction number probably won't be high enough for 70 - 80% of the population to be infected. I doubt that countries like the United States will be able to replicate the success at containment found in China, though I'm open to changing my mind here.
  • I expect the infection fatality rate (a nonstandard term that means dividing the number of deaths caused by the virus by the estimated number of people infected) to be around 0.7 to 1 percent, with significant uncertainty in both directions; a short sketch of the implied arithmetic follows this list. (That said, a paper that was released in the Lancet yesterday says the true figure is probably closer to 5.6% and could be as high as 20%. The sheer insanity of such a prediction should give you an idea of how uncertain this whole thing still is.)
  • I expect the virus to temporarily peak in late April or May, but probably return in the winter and do a lot more relative damage given the cold weather.
  • I expect hospitals in every major country to be overwhelmed at some point. This will cause the number of deaths to rise, making the 1 percent an underestimate of the true risk. My current (wildly speculative) guess is the true number is 2 percent in untreated populations.
  • I expect that a vaccine will not be widespread by 1/1/2021, though I do expect one by 10/1/2021.
  • I expect that some sort of anti-viral will be available by this winter, somewhat dampening the impact of the virus when it hits full force. Though it has yet to be seen whether anti-virals will be effective.
  • I expect pretty much every country to eventually implement measures like the ones Italy has in place right now, with the exception of countries with poor infrastructure that cannot manage such a quarantine.
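
As a quick sanity check on how these parameters combine (a minimal sketch with hypothetical round numbers, not a forecast): multiplying an infection count by an infection fatality rate gives the implied death toll.

```python
infections = 300_000_000  # "at least a few hundred million" infected by 1/1/2022
ifr = 0.0085              # midpoint of my 0.7 to 1 percent IFR estimate

# IFR is deaths divided by infections, so implied deaths = infections * IFR.
deaths = infections * ifr
print(f"implied deaths: {deaths:,.0f}")  # ~2,550,000
```

If hospitals are overwhelmed and the untreated rate is closer to my (wildly speculative) 2 percent guess, the same arithmetic implies roughly 6 million deaths.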

I welcome people to view the estimates from Metaculus, which are more optimistic on some of these parameters than I am. So obviously, take the following analyses with a grain of salt.

Note: throughout this article I use the terms infection fatality rate, case fatality rate, and mortality rate somewhat interchangeably, and at times I do not know whether the author means something different by them. Some people make careful distinctions between these terms, but it appears most don't. Therefore, it's really difficult to understand what these analyses are actually saying at times.

JP Morgan

In the last 24 hours, JP Morgan announced that

The US economy could shrink by 2% in the first quarter and 3% in the second, JPMorgan projected, while the eurozone economy could contract by 1.8% and 3.3% in the same periods.

Their prediction is based on their research concerning the coronavirus, compiled here. In many ways, their estimates are quite similar to mine, and they share my sense that this virus will be long-lasting and painful. But in other ways they seem too optimistic. Here are some points,

  • At one point they criticize the UK Government's apparent estimate of 100,000 predicted deaths, by saying "To arrive at such an outcome, we had to assume that 38% of the entire UK population is infected (i.e., similar to the 1918 Spanish flu), and that 40% of infected people get sick and then experience 1% mortality; or we had to assume that only 10% of infected people get sick but then experience 4.4% mortality that’s equal to the epicenter of the virus outbreak in Wuhan. Even after accounting for Chinese infection/death underreporting and the difficulty Western countries might have replicating what China has done (the largest lockdown/quarantine in the history of the world, accomplished via AI, big data and different privacy rules), both of our modeled UK outcomes would be magnitudes worse than what’s occurring in China and South Korea."
  • They concur with my vaccine and anti-viral timelines, "While the fastest timeline for vaccines to reach patients is generally 12-18 months (i.e., Massachusetts-based Moderna’s mRNA vaccine), COVID-19 treatments could possibly become available later this year"
  • They cite the fall of H1N1's mortality rate estimate (seemingly) as reason to think that this coronavirus will follow the same pattern, "Early estimates in the fall of 2009 from the WHO pegged the H1N1 mortality rate at 1.0%-1.3%, since they were dividing (d) by (c). Four years later, a study from the WHO and Imperial College London estimated H1N1 mortality as a function of total infections, including both the asymptomatic and the sick. Their revised H1N1 mortality rate using (b) as a denominator: just 0.02%."

My opinion

Whoever wrote this report has done a ton of research, and makes some very intelligent points. I think it would be unfair to say that intelligent investors from JP Morgan "don't understand exponential growth."

That said, I differ significantly on whether the UK Government's estimate is valid, and on whether the mortality rate will fall just as H1N1's did. The author seemed to be saying that the mortality rate can safely be calculated only as a fraction of those who got sick with severe symptoms, rather than of the total infected population. This makes me think that they are underestimating the infection fatality rate.

Moody's Analytics

On March 4th Moody's Analytics released a forecast of economic growth conditioned on the coronavirus becoming a pandemic, which at the time they considered to have only a 35% chance of occurring. Even though this report is somewhat old now, I still include it because this was their 'worst case' report. Their conclusion was that,

Under the pandemic scenario, the global economy suffers a recession during the first three quarters of 2020. Real GDP decreases by almost 2 percentage points peak to trough and declines for 2020 [...] The U.S. economy contracts in all four quarters of 2020 in the pandemic scenario, with real GDP falling by approximately 1.5 percentage points peak to trough and the unemployment rate rising by 175 basis points. The struggling manufacturing, transportation, agriculture and energy industries are hit hard, but so too are the travel and tourism industries and the construction trades. However, there are significant layoffs across nearly all industries, with healthcare and government being the notable exceptions.

The modeling assumption was that "millions" would be infected, and that it would peak by March or April.

Under our alternative Global Pandemic scenario, we expect that there are ultimately millions of infections across the globe, including in Europe and the U.S. COVID-19’s mortality rate is assumed to be 2%-3%, consistent with the experience so far, and a similar percentage of those infected become so sick they need some form of hospitalization. The peak of the pandemic is assumed to occur in March and April, winding down quickly by this summer, with a vaccine in place before next winter.

My opinion

While I find their estimate of the mortality rate to be rather high, this consideration is swamped by the fact that they only think it will infect "millions" of people (which I take to be perhaps 5 - 10 million) worldwide, and the fact that they think we will have a vaccine by next winter. I think Moody's Analytics are seriously low-balling this virus.

This report is probably the best evidence that investors still aren't taking the virus seriously. However, given that this report is about 9 days old, I think that conclusions from it should be interpreted with caution.

Capital Economics

A report from Capital Economics came out in the last few days; however, I've been unable to find the exact report. Instead, I can quote media articles such as this one, and this one. They report,

Capital Economics also cut its estimate for gross domestic product in 2020, saying the economy would expand just 0.6% instead of 1.8% as previously forecast. [...] Many economists have downgraded their growth forecasts for the second quarter and beyond, but the Capital Economics call is the most pessimistic one yet.

So apparently they expect positive growth for the year, and yet this is one of the most pessimistic predictions from economists? That is striking on its own.

Capital Economics predicts a rebound in 2021 on the assumption that strict social distancing works to contain the coronavirus epidemic.
“If such measures helped to stem the spread of the virus ... they may reduce the risk of a worse-case scenario, in which one-third of the population become infected resulting in a prolonged recession,” Hunter said.
[...]
"We think this is going to have a very significant impact on activity over the next few months," said Andrew Hunter senior U.S. economist at Capital Economics.
[..]
However, Hunter expressed hope that if the number of coronavirus cases in the U.S. peaked in the tens of thousands, then the U.S. economy could "start to recover reasonably quickly."

It's not clear whether their "tens of thousands" in the US is a best case or median case scenario.

My opinion

It's hard to get a real sense of what Capital Economics expects. The article itself gives the impression that we can still contain the effects and that things will wrap up in a few months, but given that they also mention that billions of people could be infected, it's hard to tell whether they are over- or underestimating. I don't have a strong opinion here.

United Kingdom report

On March 11, the United Kingdom released a (long) report on their economic forecast, taking into account the expected impact of the coronavirus. Unfortunately, the report did not incorporate the latest coronavirus figures, so it's hard to tell whether they are underestimating things.

As set out below, we agreed to close the pre-measures forecasts for the economy and public finances on 18 and 25 February respectively, to provide a stable base against which to assess the impact of the large Budget package. This was before the spread of the coronavirus was expected to have a significant effect on economic activity outside China. As discussed in the document, the outlook is therefore likely to be significantly less favourable than this central forecast suggests – especially in the short term – but to a degree that remains highly uncertain even now.

RaboResearch

A firm called RaboResearch released a forecast on March 12th. They are relatively optimistic,

The coronavirus outbreak has led us to reduce our growth projection for the global economy to 1.6% y/y in 2020

However, their assumptions appear to diverge substantially from mine,

Whether we will see a similar spread in other Eurozone member states as we have seen in Italy still remains in doubt – and for now we are not yet assuming that as a base scenario.

In their "ugly" scenario, which they consider unlikely,

would see the virus continue to rage in China, spread to ASEAN, Australia and New Zealand, and the cluster of cases in the US and Europe snowball at an exponential growth rate from their currently low base. In other words, developed economies would also be hit.

Unfortunately, they don't include any actual numbers, so it's hard to tell how bad their ugly scenario actually is. Their absolute worst-case scenario, which they call "the unthinkable," also contains no facts or figures,

This scenario is very short. The virus spreads globally and also mutates, with its transmissibility increasing and its lethality increasing too. The numbers infected would skyrocket, as would casualties. We could be looking at a **global pandemic**, and at scenarios more akin to dystopian Hollywood films than the realms of economic analysis. Let’s all pray it does not come to pass and just remains a very fat tail risk.

Note that I did not bold global pandemic. That was their emphasis.

My opinion

Given that their "unthinkable" scenario describes a global pandemic, which the WHO has already declared, I find it hard to believe that this firm has a clear idea of the economic effects of the coronavirus. Their vagueness makes me think that they are not using solid models of the virus, but instead unsubstantiated intuition, and that they are probably underestimating the impact.
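To illustrate why the vagueness matters: under sustained exponential growth, a "currently low base" stops being low very quickly. Here is a minimal sketch; both the starting case count and the five-day doubling time are hypothetical numbers of my own, not figures from their report.

```python
# Minimal sketch of exponential case growth from a low base.
# Both parameters below are illustrative assumptions.
base_cases = 1_000    # hypothetical current case count
doubling_days = 5     # assumed doubling time
for days in (30, 60, 90):
    cases = base_cases * 2 ** (days / doubling_days)
    print(f"after {days} days: ~{cases:,.0f} cases")
```

At that doubling time, a thousand cases becomes millions within two months, which is why a scenario of cases "snowball[ing] at an exponential growth rate" deserves concrete numbers rather than adjectives.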

Media reports

According to this Investopedia article, the top three stock market news websites are MarketWatch, Bloomberg, and Reuters. Due to the paywall on Bloomberg, I only accessed MarketWatch and Reuters. I opened each of these websites and read the first article I could find that included both an economic forecast and some kind of prediction about a parameter of the coronavirus. To be honest, I wasn't able to find anything very specific. Nonetheless, here are some quotes I found,

The vast majority of economists predict the U.S. will start to rebound later in the year, though they are split over how soon and how fast. Some like Donabedian see a rapid recovery starting in the summer. Others predict a short recession that extends through the fall.
The more optimistic view is based on the assumption that the U.S. approach to containing the coronavirus more closely mirrors that of South Korea or Hong Kong than Italy or Iran.
[...]
“We think we will see a nice bounce back in the third quarter,” Guatieri said.
Still, even relative optimists such as Guatieri say there’s still too much uncertainty to feel confident. He and Wells Fargo’s Bullard say their firms have been changing their forecasts almost daily in the past week as the situation deteriorated. What’s made matters worse is simply not knowing the scope of the problem.
“We’re not getting the insight into where we are or where we are going,” Bullard said. “So we’re all just speculating.”

My opinion

Like many of the forecasts above, these articles are very vague about what they expect. It's hard to see what values are being plugged into the economic models, or whether the predictions rest on intuition alone.

Conclusion

I have not seen strong evidence that economic forecasters are now predicting doom. However, I have seen weak evidence suggesting that many are misinformed about the scope of the virus and its potential future impact. Some forecasters, like JP Morgan, have clearly done a lot of research; other firms are barely using mathematical models of the virus at all. My own interpretation is that the sources I surveyed are probably fairly overoptimistic, though it's hard to tell without more evidence and concrete numbers.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-13T07:11:06.191Z · LW · GW

For my part, I think you summarized my position fairly well. However, after thinking about this argument for another few days, I have more points to add.

  • Disease seems especially likely to cause coordination failures since it's an internal threat rather than an external one (external threats, unlike internal ones, tend to unite empires). We can compare the effects of the smallpox epidemic in the Aztec and Inca empires with other historical diseases during wartime, such as the Plague of Athens, which arguably caused Athens to lose the Peloponnesian War.
  • Along these same lines, the Aztecs and Inca had no germ theory of disease, and therefore didn't understand what was going on. They may have thought the gods were punishing them, and they probably spent a lot of time blaming random groups for the catastrophe. Contrast these circumstances with, e.g., the Paraguayan War, which killed up to 90% of the male population: there, people probably had a much better idea of what was going on and who was to blame, so I expect the surviving population had an easier time coordinating.
  • A large chunk of the remaining population likely had some sort of disability. Think of what would happen if you got measles and smallpox in the same two-year window: even if you survived, you probably wouldn't come out unscathed. This means that the pure death rate is an underestimate of the impact of a disease. The Aztecs, for whom "only" 40 percent died of disease, were still greatly affected:
It killed many of its victims outright, particularly infants and young children. Many other adults were incapacitated by the disease – because they were either sick themselves, caring for sick relatives and neighbors, or simply lost the will to resist the Spaniards as they saw disease ravage those around them. Finally, people could no longer tend to their crops, leading to widespread famine, further weakening the immune systems of survivors of the epidemic. [...] a third of those afflicted with the disease typically develop blindness.
Comment by matthew-barnett on Open & Welcome Thread - February 2020 · 2020-03-12T16:48:10.228Z · LW · GW

After today's crash, what are you at now?

Comment by matthew-barnett on Why don't singularitarians bet on the creation of AGI by buying stocks? · 2020-03-12T03:37:49.136Z · LW · GW
Either we'll have a positive singularity, and material abundance ensues, or we'll have a negative singularity, and paperclips ensue. That's why my retirement portfolio is geared towards business-as-usual scenarios.

My objection to this argument is that, more generally, before the singularity there should be some period in which we have powerful AI but the economy still looks somewhat familiar. One operationalization of this is Paul's slow takeoff, in which economic growth rates start to pick up a little before they pick up by a lot.
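As a toy illustration of that claim (my own sketch with made-up parameters, not Paul's actual model), consider an economy whose growth rate itself compounds:

```python
# Toy "slow takeoff" growth path: the annual growth rate itself rises
# steadily, so growth picks up a little before it picks up a lot.
# The 3% starting rate and 25% yearly acceleration are made up.
output, growth = 1.0, 0.03
for year in range(1, 21):
    output *= 1 + growth
    print(f"year {year:2d}: output {output:6.2f}, growth rate {growth:6.1%}")
    growth *= 1.25
```

On a path like this there are many years of recognizably familiar, if unusually fast, growth before anything explosive happens, which is exactly the period during which a bet on AI stocks could pay off.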

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T09:49:12.638Z · LW · GW
Later, other Europeans would come along with other advantages, and they would conquer India, Persia, Vietnam, etc., evidence that while disease was a contributing factor (I certainly am not denying it helped!) it wasn't so important a factor as to render my conclusion invalid (my conclusion, again, is that a moderate technological and strategic advantage can enable a small group to take over a large region.)

Europeans did conquer places such as India, but that was centuries later, after they had a large technological advantage, and they didn't come with just a few warships: they came with vast armadas. I don't see how that supports the point that a small group can take over a large region.

Comment by matthew-barnett on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-08T09:38:25.931Z · LW · GW
I really don't think the disease thing is important enough to undermine my conclusion. For the two reasons I gave: One, Afonso didn't benefit from disease

This makes sense, but I think the case of Afonso is sufficiently different from the others that it's a bit of a stretch to use it to draw conclusions about AI takeovers. If you want to make a more general point about how AI could be militarily successful, a better source of evidence would be a broad survey of historical military campaigns. Of course, it's still a historically interesting case to consider!

two, the 90% argument: Suppose there was no disease but instead the Aztecs and Incas were 90% smaller in population and also in the middle of civil war. Same result would have happened, and it still would have proved my point.

Yeah, but why are we assuming that they are still in the civil war? Call me out if I'm wrong here, but your thesis now seems to be: if some civilization is in complete disarray, then a well-coordinated group of slightly more advanced people/AI can take control of the civilization.

This would be a reasonable thesis, but it doesn't shed much light on AI takeovers. The important part lies in the "if some civilization is in complete disarray" conditional, and I think it's far from obvious that AI will emerge in such a world, unless some other, more important causal factor has already given rise to the massive disarray. But even in that case, shouldn't we focus on whatever caused the disarray instead?