Posts

Delegate a Forecast 2020-07-28T17:43:58.461Z · score: 41 (12 votes)

Comments

Comment by amandango on Delegate a Forecast · 2020-08-02T00:11:28.110Z · score: 3 (2 votes) · LW · GW

Here’s my prediction for this! My median is March 1, 2029. Below are some of the data sources that informed my thinking.

Related Metaculus question: When will sales of a non-screen technology be greater than sales of a screen technology?

Comment by amandango on Delegate a Forecast · 2020-08-01T23:55:26.850Z · score: 1 (1 votes) · LW · GW

Here's my prediction, and here's a spreadsheet with more details (I predicted the expected number of people who would get COVID). Some caveats/assumptions (a rough sketch of the arithmetic follows the list):

  • There's a lot of uncertainty in each of the variables, which I didn't have time to research in depth
  • I didn't adjust for the event being outdoors; you can add a row and adjust for that if you have a good sense of how it would affect the estimate.
  • I wasn't sure how to account for the 3-hour duration. My sense is that singing loudly at people from < 1m away for 3 hours is going to produce a pretty high infection rate. I also assumed no one was wearing masks, because of the singing, though this is the assumption I'm most uncertain about.
  • You didn't mention how big the pods are. I assumed 10 people per pod; the answer would change if the pods were much smaller.
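
A minimal sketch of this kind of expected-infections arithmetic (every number below is an illustrative placeholder, not a value from the spreadsheet):

```python
# Minimal sketch of an expected-infections estimate for the event.
# Every number here is an illustrative placeholder, not a spreadsheet value.

n_attendees = 60                # hypothetical event size
pod_size = 10                   # assumed pod size (see caveat above)
p_arrives_infectious = 0.005    # assumed chance a given attendee is infectious
p_transmit_per_contact = 0.30   # assumed chance an infectious singer infects a
                                # close (< 1m, unmasked, 3-hour) contact

# Expected number of infectious people at the event.
expected_infectious = n_attendees * p_arrives_infectious

# Each infectious person has pod_size - 1 close contacts within their pod.
expected_new_infections = expected_infectious * (pod_size - 1) * p_transmit_per_contact

print(f"expected infectious attendees: {expected_infectious:.2f}")      # 0.30
print(f"expected new infections:       {expected_new_infections:.2f}")  # 0.81
```
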
Comment by amandango on Delegate a Forecast · 2020-07-30T17:05:53.347Z · score: 3 (2 votes) · LW · GW

Either the expected number of people who get COVID or the number of microCOVIDs generated by the event works as a question! My instinctive sense is that the number of people who get COVID will be easier to reason about quickly, but I'll see as I forecast it.
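
For what it's worth, the two framings differ only by a unit conversion: one microCOVID is a one-in-a-million chance of one person getting infected, so summing each attendee's microCOVIDs and dividing by 1,000,000 gives the expected number of infections. A minimal sketch with made-up figures:

```python
# One microCOVID = a 1-in-1,000,000 chance of one person getting COVID.
# Figures below are made up for illustration.
microcovids_per_attendee = 25_000   # hypothetical per-person risk from the event
n_attendees = 60                    # hypothetical event size

total_microcovids = microcovids_per_attendee * n_attendees
expected_infections = total_microcovids / 1_000_000
print(expected_infections)  # 1.5 expected infections
```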

Comment by amandango on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-23T22:43:44.131Z · score: 9 (4 votes) · LW · GW

In a similar vein to this, I found several resources that make me think it should be higher than 1%, both currently and over the next 1.5 years:

  • This 2012/13 paper by Vincent Müller and Nick Bostrom surveyed AI experts, in particular 72 people who attended AGI workshops (most of whom do technical work). Of these 72, 36% thought that, assuming high-level machine intelligence (HLMI) will at some point exist, it would be either ‘on balance bad’ or ‘extremely bad’ for humanity. Obviously this isn't an indication that they understand or agree with specific safety concerns, but it directionally suggests people are concerned and thinking about this.
  • This 2017 paper by Seth Baum identified 45 AGI R&D projects and their stances on safety (page 25). Of these, 12 were active on safety (dedicated efforts to address AGI safety issues), 3 were moderate (acknowledging safety issues but without dedicated efforts to address them), and 2 were dismissive (arguing that AGI safety concerns are incorrect). The remaining 28 did not specify a stance.
Comment by amandango on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-22T22:21:54.701Z · score: 2 (2 votes) · LW · GW

If people don't have a strong sense of who these people are or would be, you can look through this Google Scholar citation list (note that this covers top AI researchers generally, not AGI researchers specifically).

Comment by amandango on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-22T20:14:24.284Z · score: 1 (1 votes) · LW · GW

We're OK with people posting multiple snapshots if you want to update based on later comments! You can edit your comment with a new snapshot link, or add a new comment with the latest snapshot (we'll consider the latest one, or whichever one you identify as your final submission).

Comment by amandango on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2020-07-22T19:23:49.012Z · score: 1 (1 votes) · LW · GW

Would the latter distribution you described look something like this?