Posts

Forecasting Newsletter: April 2021 2021-05-01T16:07:22.689Z
Forecasting Newsletter: March 2021 2021-04-01T17:12:09.499Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:35.920Z
Forecasting Newsletter: February 2021 2021-03-01T21:51:27.758Z
Forecasting Prize Results 2021-02-19T19:07:09.420Z
Forecasting Newsletter: January 2021 2021-02-01T23:07:39.131Z
2020: Forecasting in Review. 2021-01-10T16:06:32.082Z
Forecasting Newsletter: December 2020 2021-01-01T16:07:39.015Z
Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) 2020-12-03T22:00:26.889Z
Forecasting Newsletter: November 2020 2020-12-01T17:00:58.898Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:12:39.009Z
Incentive Problems With Current Forecasting Competitions. 2020-11-09T16:20:06.394Z
Forecasting Newsletter: October 2020. 2020-11-01T13:09:50.542Z
Adjusting probabilities for the passage of time, using Squiggle 2020-10-23T18:55:30.860Z
A prior for technological discontinuities 2020-10-13T16:51:32.572Z
NunoSempere's Shortform 2020-10-13T16:40:05.972Z
AI race considerations in a report by the U.S. House Committee on Armed Services 2020-10-04T12:11:36.129Z
Forecasting Newsletter: September 2020. 2020-10-01T11:00:54.354Z
Forecasting Newsletter: August 2020. 2020-09-01T11:38:45.564Z
Forecasting Newsletter: July 2020. 2020-08-01T17:08:15.401Z
Forecasting Newsletter. June 2020. 2020-07-01T09:46:04.555Z
Forecasting Newsletter: May 2020. 2020-05-31T12:35:58.063Z
Forecasting Newsletter: April 2020 2020-04-30T16:41:35.849Z
What are the relative speeds of AI capabilities and AI safety? 2020-04-24T18:21:58.528Z
Some examples of technology timelines 2020-03-27T18:13:19.834Z
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z
What do you do when you find out you have inconsistent probabilities? 2018-12-31T18:13:51.455Z
The hunt of the Iuventa 2018-03-10T20:12:13.342Z

Comments

Comment by NunoSempere (Radamantis) on Low-stakes alignment · 2021-04-30T08:44:03.095Z · LW · GW

To the extent that SGD can’t find the optimum, it hurts the performance of both the aligned model and the unaligned model. In some sense what we really want is a regret bound compared to the “best learnable model,” where the argument for a regret bound is heuristic but seems valid.

I'm not sure this goes through. In particular, it could be that the architecture which you would otherwise deploy (presumably human brains, or some other kind of automated system) would do better than the "best learnable model" for some other (architecture + training data + etc.) combination. Perhaps in some sense what you want is a regret bound relative to the aligned system you would use if you didn't deploy your AI model, not a regret bound between your AI model and the best learnable model in its (architecture + training data + etc.) space.

That said, I'm really not familiar with regret bounds, and it could be that this is a non-concern.
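
For reference, my hedged reading of the regret notion at play (this formula is my own gloss, not from the post):

$$\text{Regret} = \mathbb{E}[L(f_{\text{deployed}})] - \min_{f \in \mathcal{F}} \mathbb{E}[L(f)]$$

where $L$ is the relevant loss and $\mathcal{F}$ is the set of models learnable under a fixed (architecture + training data + etc.) combination. The worry above is then that $\mathcal{F}$ doesn't contain the non-AI alternative, so a bound of this form could be satisfied while still doing worse than not deploying.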

Comment by NunoSempere (Radamantis) on Does an app/group for personal forecasting exist? · 2021-04-28T13:56:36.708Z · LW · GW

See also: What are the best tools for recording predictions? 

Comment by NunoSempere (Radamantis) on Scott Alexander 2021 Predictions: Buy/Sell/Hold · 2021-04-27T21:33:47.871Z · LW · GW

I've added these predictions to foretold, in case people want to forecast on them in one place: https://www.foretold.io/c/6eebf79b-4b6f-487b-a6a5-748d82524637

Comment by NunoSempere (Radamantis) on What will GPT-4 be incapable of? · 2021-04-06T22:08:00.564Z · LW · GW

On this same note, matrix multiplication or inversion.

Comment by NunoSempere (Radamantis) on Learning Russian Roulette · 2021-04-02T22:26:23.191Z · LW · GW

I also have the sense that this problem is interesting.

Comment by NunoSempere (Radamantis) on Learning Russian Roulette · 2021-04-02T22:13:57.611Z · LW · GW

I disagree; this might have real world implications. For example, the recent OpenPhil report on Semi-informative Priors for AI timelines updates on the passage of time, but if we model creating AGI as playing Russian roulette*, perhaps one shouldn't update on the passage of time. 

* I.e., AGI in the 2000s might have led to an existential catastrophe due to underdeveloped safety theory

Comment by NunoSempere (Radamantis) on Learning Russian Roulette · 2021-04-02T22:05:56.386Z · LW · GW

You would never play the first few times

This isn't really a problem if the rewards start out high and gradually diminish. 

I.e., suppose that you value your life at $L (i.e., you're willing to die if the heirs of your choice get L dollars), and you assign a probability of 10^-15 to H1 = "I am immune to losing at Russian roulette", something like 10^-4 to H2 = "I intuitively twist the gun each time to avoid the bullet", and a probability of something like 10^-3 to H3 = "they gave me an empty gun this time". Then you are offered the chance to play rounds of Russian roulette at $L per round, for as many rounds as it takes to update your probabilities to arbitrary levels.

Now, if you play enough times, H3 becomes the dominant hypothesis with, say, 90% probability, so you'd accept a payout of, say, $L/2. Similarly, if you know that H3 isn't the case, you'd still assign very high probability to something like H2 after enough rounds, so you'd still accept a bounty of $L/2.

Now, suppose that all the alternative hypotheses H2, H3, ... are false, and your only remaining alternative hypothesis is H1 (magical intervention). Now the original dilemma has been recovered. What should one do?
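
As a sketch of the update dynamics described above (priors are the illustrative numbers from this comment, with a 5/6 chance of surviving each round under the null hypothesis of an ordinary gun):

// Posterior over hypotheses after surviving n rounds of Russian roulette,
// using the comment's illustrative priors (H2 read as 10^-4).
const priors = {
  ordinary: 1 - 1e-15 - 1e-4 - 1e-3, // H0: an ordinary, fair gun
  immune: 1e-15,                     // H1: immune to losing
  twist: 1e-4,                       // H2: intuitively twisting the gun
  empty: 1e-3,                       // H3: they gave me an empty gun
}

function posterior(nRounds) {
  // Likelihood of surviving nRounds under each hypothesis
  const likelihood = { ordinary: (5 / 6) ** nRounds, immune: 1, twist: 1, empty: 1 }
  let z = 0
  const unnormalized = {}
  for (const h in priors) { unnormalized[h] = priors[h] * likelihood[h]; z += unnormalized[h] }
  const result = {}
  for (const h in unnormalized) result[h] = unnormalized[h] / z
  return result
}

for (const n of [0, 20, 50, 100]) console.log(n, posterior(n))

After 100 survived rounds this gives roughly 90% to the empty-gun hypothesis and 9% to the gun-twisting one, matching the numbers above.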

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: March 2021 · 2021-04-02T08:24:19.482Z · LW · GW

No, the first part is a typo, thanks.

I'm not sure I understand what "this" refers to in that sentence.

Comment by NunoSempere (Radamantis) on Improvement for pundit prediction comparisons · 2021-03-28T20:11:29.186Z · LW · GW

This feels solvable with a sufficiently large monetary prize.

Comment by NunoSempere (Radamantis) on What are some beautiful, rationalist artworks? · 2021-03-27T12:15:25.034Z · LW · GW

Dr. Manhattan has just been convinced by Veidt that he has been causing cancer in the people he cares about. He also finds himself caring less and less about the world (though he had previously accelerated technological progress by e.g., creating lithium batteries for electric cars), and leaves the Earth for Mars to build cool clocks and find some peace and quiet. 

Comment by NunoSempere (Radamantis) on [AN #143]: How to make embedded agents that reason probabilistically about their environments · 2021-03-24T18:01:41.960Z · LW · GW

This post seems pretty broken on Firefox, and doesn't look too great on Chrome either

Comment by NunoSempere (Radamantis) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-09T19:50:51.626Z · LW · GW

Mmh. OTOH, they lose the ability to incorporate new information. Do you have a sense of which factor dominates?

Comment by NunoSempere (Radamantis) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-09T10:04:59.856Z · LW · GW

This is changed now. As a bonus, it also resolves a previous bug

Comment by NunoSempere (Radamantis) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T16:04:44.266Z · LW · GW

Thanks! It should just include open questions, not those which have closed and are yet to resolve. But this is easy to change.

Comment by NunoSempere (Radamantis) on Survey on cortical uniformity - an expert amplification exercise · 2021-02-24T14:41:59.381Z · LW · GW

EDIT: rephrased the estimations so they match the probability one would enter in the Elicit questions 

Oof, that means I get to change my predictions. 

Comment by NunoSempere (Radamantis) on Survey on cortical uniformity - an expert amplification exercise · 2021-02-23T22:39:11.863Z · LW · GW

I made three quick predictions, of which I'm not really sure. Someone should do the Bayesian calculation with a reasonable prior to determine how likely it is that more than half of experts would answer a given way, given the answers by the 6 experts who did answer.

For some of these questions, I'd expect experts to care more about the specific details than I would. E.g., maybe for “The entire cortical network could be modeled as the repetition of a few relatively simple neural structures, arranged in a similar pattern even across different cortical areas” someone who spends a lot of time researching the minutiae of cortical regions is more likely to consider the sentence false.
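
A sketch of that Bayesian calculation, assuming a uniform Beta(1, 1) prior over the fraction of experts who would agree with a statement (the 6 is from the survey; the prior choice is mine). The posterior after k of 6 sampled experts agree is Beta(1 + k, 7 − k), and the quantity of interest is its mass above 1/2, approximated here with a Riemann sum:

// P(more than half of all experts would agree), after k of n sampled
// experts agreed, under a uniform prior on the agreement fraction.
function probMajorityAgrees(k, n) {
  const steps = 100000
  let above = 0, total = 0
  for (let i = 0; i < steps; i++) {
    const p = (i + 0.5) / steps
    const density = p ** k * (1 - p) ** (n - k) // unnormalized Beta(1+k, 1+n-k) density
    total += density
    if (p > 0.5) above += density
  }
  return above / total
}

for (let k = 0; k <= 6; k++) {
  console.log(k + "/6 agree -> P(majority agrees) ≈ " + probMajorityAgrees(k, 6).toFixed(2))
}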

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2021-02-19T20:56:59.933Z · LW · GW

Comment by NunoSempere (Radamantis) on Mathematical Models of Progress? · 2021-02-16T14:39:52.475Z · LW · GW

See Artificial Intelligence and Economic Growth, by Chad Jones et al., for a particular model, and Economic growth under transformative AI for a comprehensive review.

Comment by NunoSempere (Radamantis) on Incentive Problems With Current Forecasting Competitions. · 2021-02-16T11:37:56.595Z · LW · GW

Cheers, thanks! These are great

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: January 2021 · 2021-02-08T10:31:08.519Z · LW · GW

Thanks!

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2021-01-28T11:19:37.771Z · LW · GW

More examples in this paper: From self-prediction to self-defeat: behavioral forecasting, self-fulfilling prophecies, and the effect of competitive expectations

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2021-01-11T11:00:02.421Z · LW · GW

Two other examples:

  • YouTube's recommender system changes the habits of YouTube video producers (e.g., using keywords at the beginning of titles, and at the beginning of the video now that YouTube can parse speech)
  • Andrew Yang apparently received death threats over a prediction market on the number of tweets.

Comment by NunoSempere (Radamantis) on GraphQL tutorial for LessWrong and Effective Altruism Forum · 2021-01-06T11:14:47.555Z · LW · GW

I've come back to this occasionally, thanks. Here are two more snippets:

To get one post 

{
  post(input: {
    selector: {
      _id: "Here goes the id"
    }
  }) {
    result {
      _id
      title
      slug
      pageUrl
      postedAt
      baseScore
      voteCount
      commentCount
      meta
      question
      url
      user {
        username
        slug
        karma
        maxPostCount
        commentCount
      }
    }
  }
}

or, as a JavaScript/node function:

let graphQLendpoint = 'https://forum.effectivealtruism.org/graphql' // or https://www.lesswrong.com/graphql. Note that this is not the same as the GraphiQL visual interface talked about in the post. 

// If running under Node rather than in a browser, fetch needs to be
// imported first, e.g. with the node-fetch package:
// const fetch = require('node-fetch')
async function fetchPost(id){
  // note the async
  let response = await fetch(graphQLendpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: `
      {
        post(input: {
          selector: {
            _id: "${id}"
          }
        }) {
          result {
            _id
            title
            slug
            pageUrl
            postedAt
            baseScore
            voteCount
            commentCount
            meta
            question
            url
            user {
              username
              slug
              karma
              maxPostCount
              commentCount
            }
          }
        }
      }`
    }),
  })
  .then(res => res.json())
  .then(res => res.data.post ? res.data.post.result : undefined)
  return response
}
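
A usage sketch (the id is a placeholder; substitute a real post id):

// Usage: log the title of one post (placeholder id)
fetchPost("insert-a-real-post-id").then(post => console.log(post ? post.title : "not found"))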

 

To get a user

{
  user(input: {
    selector: {
      slug: "heregoestheslug"
    }
  }) {
    result {
      username
      pageUrl
      karma
      maxPostCount
      commentCount
    }
  }
}

Or, as a JavaScript function

let graphQLendpoint = 'https://forum.effectivealtruism.org/graphql' // or https://www.lesswrong.com/graphql. Note that this is not the same as the GraphiQL visual interface talked about in the post. 

// As above, under Node fetch needs to be imported first, e.g.:
// const fetch = require('node-fetch')
async function fetchAuthor(slug){
  // note the async
  let response = await fetch(graphQLendpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: `
      {
        user(input: {
          selector: {
            slug: "${slug}"
          }
        }) {
          result {
            username
            pageUrl
            karma
            maxPostCount
            commentCount
          }
        }
      }`
    }),
  })
  .then(res => res.json())
  .then(res => res.data.user ? res.data.user.result : undefined)
  return response
}
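
Usage is analogous (the slug below is just the placeholder from the query):

// Usage: log the karma of one user (placeholder slug)
fetchAuthor("heregoestheslug").then(user => console.log(user ? user.karma : "not found"))
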
Comment by NunoSempere (Radamantis) on Anti-Aging: State of the Art · 2021-01-03T19:30:42.117Z · LW · GW

Thoughtful answer, thanks

Comment by NunoSempere (Radamantis) on Anti-Aging: State of the Art · 2021-01-03T13:27:52.385Z · LW · GW

The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans

Are you willing to bet on this? If so, how much?

Comment by NunoSempere (Radamantis) on Interactive exploration of LessWrong and other large collections of documents · 2020-12-31T09:05:04.895Z · LW · GW

Yes, I'd be interested, many thanks!

Comment by NunoSempere (Radamantis) on Range and Forecasting Accuracy · 2020-12-26T18:06:43.522Z · LW · GW

Cool. Once you rewrite that, and if you do so before the end of the year, I'd encourage you to resubmit it to this contest

In particular, the reason I'm excited about this kind of work is that it allows us to have at least some information about how accurate long-term predictions can be. Some previous work on this has been done, e.g., rating Kurzweil's predictions from the 90s, but overall we have very little information about this kind of thing. And yet we are interested in seeing how good we can be at making predictions n years out, and potentially in making decisions based on that. 

Comment by NunoSempere (Radamantis) on Interactive exploration of LessWrong and other large collections of documents · 2020-12-26T16:12:37.136Z · LW · GW

So here is something I'm interested in: I have a list of cause area candidates proposed in the EA Forum (available here) as a Google Sheet. Could I use a set-up similar to your own to find similar posts?

Also, you should definitely post this to the EA forum as well. 

Comment by NunoSempere (Radamantis) on Probability theory implies Occam's razor · 2020-12-26T15:57:45.281Z · LW · GW

Waveman says:

I am not sure you actually justified your claim, that OR follows from the laws of probability with no empirical input. 

I wanted to say the same thing. 

The OP uses the example of age, but I like the example of shade of eye color better. If h is height and s is shade of eye color, then 

weight = alpha * h + beta * s

Then if beta is anything other than 0, your estimate will, in expectation, be worse. This feels correct, and it seems like this should be demonstrable, but I haven't really tried. 
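
Here is a minimal simulation sketch of that claim (all constants are invented for illustration): fit weight from height alone versus from height plus the irrelevant eye-shade feature, using ordinary least squares on centered training data, and compare average out-of-sample error.

// Does adding an irrelevant feature (eye shade) hurt out-of-sample
// predictions of weight, in expectation? OLS via the normal equations.
function randNormal() { // standard normal via Box-Muller
  return Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random())
}
const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length
const dot = (xs, ys) => xs.reduce((a, x, i) => a + x * ys[i], 0)

function trial(nTrain, nTest) {
  const n = nTrain + nTest
  const h = Array.from({ length: n }, () => 170 + 10 * randNormal()) // height
  const s = Array.from({ length: n }, () => randNormal()) // eye shade: pure noise
  const w = h.map(hi => 0.9 * hi - 90 + 5 * randNormal()) // weight depends on height only

  // Center the training data so the intercept drops out of the normal equations
  const hBar = mean(h.slice(0, nTrain)), sBar = mean(s.slice(0, nTrain)), wBar = mean(w.slice(0, nTrain))
  const hc = h.slice(0, nTrain).map(x => x - hBar)
  const sc = s.slice(0, nTrain).map(x => x - sBar)
  const wc = w.slice(0, nTrain).map(x => x - wBar)

  // Height-only fit: alpha = cov(h, w) / var(h)
  const alpha1 = dot(hc, wc) / dot(hc, hc)

  // Height + eye shade fit: Cramer's rule on the 2x2 normal equations
  const Shh = dot(hc, hc), Sss = dot(sc, sc), Shs = dot(hc, sc)
  const Shw = dot(hc, wc), Ssw = dot(sc, wc)
  const det = Shh * Sss - Shs * Shs
  const alpha2 = (Shw * Sss - Shs * Ssw) / det
  const beta2 = (Shh * Ssw - Shs * Shw) / det

  // Mean squared error on the held-out points
  let mse1 = 0, mse2 = 0
  for (let i = nTrain; i < n; i++) {
    const p1 = wBar + alpha1 * (h[i] - hBar)
    const p2 = wBar + alpha2 * (h[i] - hBar) + beta2 * (s[i] - sBar)
    mse1 += (w[i] - p1) ** 2
    mse2 += (w[i] - p2) ** 2
  }
  return [mse1 / nTest, mse2 / nTest]
}

let tot1 = 0, tot2 = 0
const trials = 2000
for (let t = 0; t < trials; t++) {
  const [m1, m2] = trial(20, 200)
  tot1 += m1
  tot2 += m2
}
console.log("avg test MSE, height only:        " + (tot1 / trials).toFixed(3))
console.log("avg test MSE, height + eye shade: " + (tot2 / trials).toFixed(3))

With a small training set, the height-plus-eye-shade model comes out slightly worse on average, as expected: the extra coefficient has to be estimated from noise.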

Comment by NunoSempere (Radamantis) on Recommendation for a good international event betting site like predictit.org · 2020-12-26T15:47:36.356Z · LW · GW

I'd point you towards Polymarket (polymarket.com). It trades in USDC (a cryptocurrency pegged to the US dollar), which you can acquire at various exchanges, like Coinbase or the crypto.com app. 

Comment by NunoSempere (Radamantis) on Luna Lovegood and the Chamber of Secrets - Part 8 · 2020-12-22T22:27:30.792Z · LW · GW

How did Luna come to represent Ravenclaw at the dueling tournament? Did she sleepwalk even during Lockhart's class, and somehow win the spot by casting spells while sleepwalking?

She was sleepwalking, and thus was able to shrug off a Somnium from her opponent, and win. Possibly repeatedly.

Comment by NunoSempere (Radamantis) on What are the best precedents for industries failing to invest in valuable AI research? · 2020-12-15T21:28:16.848Z · LW · GW

I have some data on this off the top of my head from having read the history of 50 mostly random technologies (database.csv in the post):

  • People not believing that heavier than air flight was a thing, and Zeppelins eventually becoming obsolete
  • Various camera film producing firms, notably Kodak, failing to realize that digital was going to be a thing
  • (Nazi Germany not realizing that the nuclear bomb was going to be a thing)
  • London not investing in better sanitation until the Great Stink; this applies to almost every major city.
  • People not investing in condoms for various reasons
  • People not coming up with the bicycle as an idea
  • Navies repeatedly not taking the idea of submarines seriously
  • Philip LeBon failing to raise interest in his "thermolamp"

So that's 8/50 off the top of my head (9/50 including Blockbuster, mentioned by another commenter).

I also have some examples of technology timelines here and some technology anecdotes from my sample of 50 technologies here, which might serve as inspiration. 

Comment by NunoSempere (Radamantis) on The Parable of Predict-O-Matic · 2020-12-13T17:34:51.004Z · LW · GW

The short story presents some intuitions which would be harder to get from a more theoretical standpoint. And these intuitions then catalyzed further discussion, like The Dualist Predict-O-Matic ($100 prize), or my own Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems).

To me, a downside of the post is that "Predict-O-Matic problems" isn't that great a category; I prefer "inner and outer alignment problems for predictive systems," which is neater. On the other hand, if I mention the Parable of the Predict-O-Matic, people can quickly understand what I'm talking about. 

But the post provides a useful starting point. In particular, to me it suggests looking to prediction systems as a toy model for the alignment problem, which is something I've personally had fun looking into, and which strikes me as promising.  

Lastly, I feel that the title is missing a "the."

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:45:55.388Z · LW · GW

Thanks. I keep missing this one, because Good Judgment Open, the platform used to select forecasters, rewards both Brier score and relative Brier score.

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:44:47.170Z · LW · GW

You can see this effect with election predictions: there are plenty of smallish predictors which predicted the result of the current election closely, though it's easy to speculate that they're just a selection effect. 

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:43:18.398Z · LW · GW

Thanks to both; this is a great example; I might add it to the main text

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-12T18:42:42.578Z · LW · GW

Another example, from @albrgr

"This is kind of crazy: https://nber.org/digest-202012/corporate-reporting-era-artificial-intelligence Companies have learned to use (or exclude) certain words to make their corporate filings be interpreted more positively by financial ML algorithms."

Then quoting from the article:

The researchers find that companies expecting higher levels of machine readership prepare their disclosures in ways that are more readable by this audience. "Machine readability" is measured in terms of how easily the information can be processed and parsed, with a one standard deviation increase in expected machine downloads corresponding to a 0.24 standard deviation increase in machine readability. For example, a table in a disclosure document might receive a low readability score because its formatting makes it difficult for a machine to recognize it as a table. A table in a disclosure document would receive a high readability score if it made effective use of tagging so that a machine could easily identify and analyze the content.

Companies also go beyond machine readability and manage the sentiment and tone of their disclosures to induce algorithmic readers to draw favorable conclusions about the content. For example, companies avoid words that are listed as negative in the directions given to algorithms. The researchers show this by contrasting the occurrence of positive and negative words from the Harvard Psychosocial Dictionary — which has long been used by human readers — with those from an alternative, finance-specific dictionary that was published in 2011 and is now used extensively to train machine readers. After 2011, companies expecting high machine readership significantly reduced their use of words labelled as negatives in the finance-specific dictionary, relative to words that might be close synonyms in the Harvard dictionary but were not included in the finance publication. A one standard deviation increase in the share of machine downloads for a company is associated with a 0.1 percentage point drop in negative-sentiment words based on the finance-specific dictionary, as a percentage of total word count.

Comment by NunoSempere (Radamantis) on Parable of the Dammed · 2020-12-11T09:47:26.511Z · LW · GW

So, for some realism which the original story didn't call for (it's a "parable trying to make a point", not a "detailed historical account of territorial feuds in 15th century Albania"), we can look at how this works out in practice. To do this, we can look to The Kanun of Lekë Dukagjini, which describes the sort of laws used to deal with this kind of thing in 15th century Albania. My details might be iffy here, but I did read the book and remember some parts.

In practice, there are several points of intervention, if I'm remembering correctly:

  • After the first murder, the extended family of the murdered goes after the murderer, to the extent that he can't safely go out of his home. If he is killed, the feud ends on the part of the murdered's family.
  • At any point, one of the families can ask a more powerful figure to mediate; in some regions this can be a cleric. The resolution might involve substantial amounts of money to be paid, which, crucially, is set beforehand by law, in excruciating detail depending on the conditions.
  • The lands wouldn't in fact be the most valuable resource here; it would be the working power of adult men, who can't get out because they would be killed in revenge. This cripples both families economically, so they do have an incentive to cooperate.

So, in practice

a clever couple from one of the families hatched an idea

I get the impression that this ends with the clever couple getting killed in the middle of the night by one of the more violent and impulsive cousins of the second family, and maybe with the second family paying some reparations if they're caught. Probably less than, you know, if they'd killed a normal couple. That, or the dam gets destroyed. Or actually, the husband from the clever couple would have to ask the family's Patriarch for permission, and the Patriarch would veto the idea because he wants to make the truce work, and is hesitant to lose more of his sons to a new feud. Also, with or without the discount factor rural people in Albania have, doing this kind of thing wouldn't be worth it. Or actually, the clever couple would have learnt in childhood that this kind of thing wasn't worth it, and gotten some lashes in the process. 

Violence escalates, and the feud breaks out anew - but peace is even harder to come by, now, since the river has been permanently destroyed as a Schelling point.

The Schelling point wasn't the river; the Schelling point was someone more powerful than you telling you not to start trouble. This is harder to game. Also, you don't have "the government", you have "the more powerful village cacique," or the priest, which works because you don't want to go to hell when you die. 

You do see a thing in rural Spain with territory boundaries being marked by stones, and those stones being moved, which kind of works if one side doesn't spend time in the land.

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: November 2020 · 2020-12-10T15:31:33.634Z · LW · GW

Makes sense

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-08T16:47:34.762Z · LW · GW

Looks pretty fun!

Comment by NunoSempere (Radamantis) on Open & Welcome Thread - December 2020 · 2020-12-06T16:22:27.946Z · LW · GW

I'd like to point people to this contest, which offers some prizes for forecasting research. It's closing on January 1st, and hasn't gotten any submissions yet (though some people have committed to submitting).

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-04T10:53:06.853Z · LW · GW

Which minimal conditions are necessary for a Predict-O-Matic scenario to appear?

One answer to that might be "either inner or outer alignment failures" in the forecasting system. See here for that division made explicit

Comment by NunoSempere (Radamantis) on Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems) · 2020-12-04T08:12:54.633Z · LW · GW

Thanks, changed

Comment by NunoSempere (Radamantis) on Incentive Problems With Current Forecasting Competitions. · 2020-12-03T21:12:30.186Z · LW · GW

Thanks!

Comment by NunoSempere (Radamantis) on The LessWrong 2018 Book is Available for Pre-order · 2020-12-02T22:28:59.687Z · LW · GW

How ironic.

Comment by NunoSempere (Radamantis) on Luna Lovegood and the Chamber of Secrets - Part 3 · 2020-12-02T15:25:32.072Z · LW · GW

Maybe she reminds them of Harry.

Comment by NunoSempere (Radamantis) on Forecasting Newsletter: November 2020 · 2020-12-01T21:14:36.876Z · LW · GW

Thanks

Comment by NunoSempere (Radamantis) on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2020-11-26T09:28:20.915Z · LW · GW

Yes, I can imagine cases where this setup wouldn't be enough.

Though note that you could still buy the shares in the last year. Also, if the market corrects by 10% each year (i.e., the value of a share of "yes" increases from 10% to 20% to 30% to 40%, etc., each year), it might still be worth it (note that each yearly market would resolve to the value of a share, not to 0 or 100).

Also note that the current way in which prediction markets are structured is, as you point out, dumb: you bet 5 depreciating dollars which then go into escrow, rather than $5 worth of, say, S&P 500 shares, which increase in value. But this could change.
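
A toy illustration of that 10%-per-year scenario (prices invented for the example), where each yearly market resolves to the next year's price rather than to 0 or 100:

// Annual returns if a "yes" share rises 10 percentage points per year
const prices = [0.10, 0.20, 0.30, 0.40, 0.50]
for (let y = 0; y + 1 < prices.length; y++) {
  const ret = (prices[y + 1] - prices[y]) / prices[y]
  console.log("year " + (y + 1) + ": buy at " + prices[y].toFixed(2) +
    ", resolves at " + prices[y + 1].toFixed(2) + " -> +" + Math.round(100 * ret) + "%")
}

The returns shrink each year (100%, 50%, 33%, 25%), so early capital is compensated most, which is the sense in which it might still be worth it.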

Comment by NunoSempere (Radamantis) on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2020-11-23T09:48:37.466Z · LW · GW

the failures of "quick resolution" (years)

Note that you can solve this by chaining markets together, i.e., having a market every year asking what the next market will predict, where the last market is 1y before AGI. This hasn't been tried much in reality, though.

Comment by NunoSempere (Radamantis) on AGI Predictions · 2020-11-21T10:27:52.806Z · LW · GW

That was fun. This time, I tried not to update too much on other people's predictions. In particular, I'm at 1% for "Will we experience an existential catastrophe before we build AGI?" and at 70% for "Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?", but would probably defer to a better aggregate on the second one.