The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

post by cveres · 2022-10-06T18:05:57.615Z · LW · GW · 8 comments


comment by Jotto999 · 2022-10-13T01:03:50.757Z · LW(p) · GW(p)

I haven't read this post, I just wanted to speculate about the downvoting, in case it helps.

Assigning "zero" probability implies an infinite amount of error: under the log scoring rule, you couldn't even compute a finite error if the event occurred.  More colloquially, you're infinitely confident about something, which in practice and in expectation amounts to being infinitely wrong.  Being mistaken at some point is inevitable.  If someone gives 100% or 0%, that's associated with them being very bad at forecasting.
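To make the "infinitely wrong" claim concrete, here is a minimal sketch of the log scoring rule the comment alludes to (the function name and numbers are illustrative, not from the original):

```python
import math

def log_score(p_forecast: float, outcome: bool) -> float:
    # Log score: the log of the probability the forecaster assigned
    # to what actually happened. Closer to 0 is better; never positive.
    p_outcome = p_forecast if outcome else 1.0 - p_forecast
    if p_outcome == 0.0:
        return float("-inf")  # the "infinitely wrong" case
    return math.log(p_outcome)

# A confident-but-hedged forecast takes only a bounded loss when wrong:
assert round(log_score(0.01, outcome=True), 2) == -4.61
# A forecast of exactly 0% takes an unbounded loss if the event occurs:
assert log_score(0.0, outcome=True) == float("-inf")
```

The point is that a 1% forecast that turns out wrong costs a finite score, while a 0% forecast that turns out wrong cannot be recovered from under this rule, no matter how well the forecaster does elsewhere.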

I expect a lot of the downvotes are from people noticing that you gave it 0%, which is strong evidence that you're very uncalibrated as a forecaster.  For what it's worth, I'm on the high-score list on Metaculus, and I'd interpret that signal the same way.

Skimming for a couple more seconds, I suspect the essay's writing style doesn't really explain how the material changes our probability estimate.  This makes the essay hard to distinguish from confused or irrelevant arguments about the forecast.  For example, skimming the Conclusion section, I can't even tell whether the essay's topics really change the probability that human jobs can be done by some computer for $25/hr or less (that's the criterion from the original prize post [EA · GW]).

I have no reason to think you weren't being genuine, and you are obviously knowledgeable.  A productive next step might be to consult someone with a forecasting track record, or to read Philip Tetlock's work.  The community is probably reacting to red flags about calibration, and (possibly) to a writing style that doesn't make clear how any of this updates the forecast.

Replies from: cveres, cveres
comment by cveres · 2022-10-13T01:39:45.744Z · LW(p) · GW(p)

Thanks! I guess I didn't know the audience very well, and I wanted an eye-catching title. It was not meant to be literal. I should have gone with "approximately zero," but I thought that was silly. Maybe I can try to change it.

Replies from: Jotto999, sharmake-farah
comment by Jotto999 · 2022-10-13T02:11:34.269Z · LW(p) · GW(p)

That's a really good idea, changing the title.  You could also add a short paragraph in italics, as a brief note clarifying for readers which probability you're actually giving.

comment by Noosphere89 (sharmake-farah) · 2022-10-13T12:50:15.009Z · LW(p) · GW(p)

Thank you for changing it to be less clickbaity. Downvotes removed.

comment by cveres · 2022-10-13T01:48:25.011Z · LW(p) · GW(p)

Also, I was more focused on the sentence following the one your quote comes from:

"This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs."

and "AGI will be developed by January 1, 2100"

I try to argue that the probability of both of these is approximately zero.

comment by SD Marlow (sd-marlow) · 2022-10-08T01:27:56.705Z · LW(p) · GW(p)

Heavy downvotes in less than a day, most likely from the title alone. I can't bring myself to vote up or down because I'm not clear on your argument. Most of what you say supports AGI arriving sooner rather than later (that DL just needs that little bit extra, which should have won this crowd over). I don't see any arguments supporting the main premise.

*I stated that the AGI clock doesn't even start as long as ML/DL remains the de facto method, but that isn't what they want to hear either.

Replies from: sharmake-farah, cveres
comment by Noosphere89 (sharmake-farah) · 2022-10-13T00:12:37.365Z · LW(p) · GW(p)

Probably because a probability of zero is a red flag, as nothing should be given a probability of zero outside of mathematics.

comment by cveres · 2022-10-09T21:48:07.813Z · LW(p) · GW(p)

Thanks for your comment. I was baffled by the downvotes, because as far as I could tell most people hadn't read the paper. Your suggestion that it might have been the title is profoundly disappointing to me. I don't know this community well, but from your comment it sounds like they are not really interested in hearing arguments that contradict their point of view.
As for my argument, it was not supporting AGI at all. Basically, I was pointing out that every serious researcher now agrees that we need DL+symbols; the disagreement is over what sort of symbols. I then argue that none of the current proposals for symbols is any good for AGI. So that kills AGI.