post by [deleted]

Comments sorted by top scores.

comment by JBlack · 2022-08-30T01:36:45.282Z

According to a "no fire alarm" model for AI risk, your prediction (every survey shows a later date for doomsday) is exactly what should be expected, right up until doomsday happens and there are no more surveys.

In practice, I think there are some leading indicators and numerous members of the community have shortened their timelines over the past year due to more rapid progress than expected in some respects. I don't know of anyone who has lengthened their timelines over the same period.

comment by Mitchell_Porter · 2022-08-30T19:03:57.836Z

Sorry, but I feel like you must be in a kind of trance, blind to what is happening in AI. Maybe it won't be "doomsday", but how can the world we know not be turned upside down, when the equivalent of a new intelligent species, able to match or surpass human beings in all areas, arrives on Earth? Do you think that's not happening, right before our eyes? 

comment by ChristianKl · 2022-08-30T09:33:07.607Z

I don't see what advantage this approach has over the existing Metaculus predictions. 

Do you believe that framing the issue in terms of doomsday, rather than making predictions about more well-defined scenarios, is an improvement?

Do you believe that your "within X years" questions have an advantage over Metaculus's broader distributions?

Do you believe that recruiting respondents via an opinionated post is likely to give a better sense of the distribution of views than Metaculus, which is a more neutral venue?

comment by dmav · 2022-08-30T05:17:01.449Z

Note that your prediction isn't interesting. Each year, conditional on doomsday not having happened, it would be pretty weird for the predicted date(s) not to have moved forward.
Do you instead mean that you expect the date to move forward each year by more than a year, or something like that?

Replies from: adrian-arellano-davin
comment by mukashi (adrian-arellano-davin) · 2022-08-30T06:17:00.100Z

The date can move forward because people might update toward shorter timelines after seeing the improvements in AI.

comment by wunan · 2022-08-29T22:01:39.713Z

By "forward" do you mean sooner (shorter timelines) or later (longer, slower timelines)?

Replies from: shminux
comment by shminux · 2022-08-30T02:57:40.673Z

"forward in time" means "later".

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2023-02-07T09:55:19.245Z

Yet "bringing an event forward" means "sooner". Better to speak of earlier and later than backward and forward.

comment by JBlack · 2022-08-31T02:24:54.402Z

Most of the predicted dates are "soon" on a civilization timescale, not a personal one.

For example, medians for the timescale to the first AGI fall vaguely in the 2030-2070 range, with outliers on both sides, and peak existential risk comes something like 1-20 years after that. Very few people are predicting a high probability of apocalypse in less than a decade.

If that devalues the predictions for you, then I'm not sure what you were expecting. As already stated, the general cloud of predictions seems to have recently moved shorter, rather than continuously longer as you were expecting. It may move longer and/or shorter again in the future as new evidence arises.

comment by ChristianKl · 2022-08-30T10:47:15.284Z

Do you believe that the LW view substantially differs from the view you find on Metaculus?

comment by deepthoughtlife · 2022-08-30T13:56:51.492Z

A lot of this depends on your definition of doomsday/apocalypse. I took it to mean the end of humanity, and a state of the world we consider worse than our continued existence. If we valued the actual end state of the world more than continuing to exist, it would be easy to argue it was a good thing, and not a doom at all. (I don't think the second condition is likely to come up for a very long time as a reason for something to not be doomsday.) For instance, if each person created a sapient race of progeny that weren't human, but they valued as their own children, and who had good lives/civilizations, then the fact humanity ceased to exist due to a simple lack of biological children would not be that bad. This could in some cases be caused by AGI, but wouldn't be a problem. (It would also be in the far future.)

AI doomsday: never (though it is far from impossible). Not that doomsday will never happen; it's just unlikely to be caused by AGI. I believe both that we aren't that close and that 'takeoff' would be best described as glacial, so we'll have plenty of time to get it right. I am unsure of the risk level of unaligned moderately superhuman AI, but I believe (very confidently) that the tech level for minimal AGI is much lower than the tech level for doomsday AGI. If I were wrong about that, I would obviously change my mind about the likelihood of AGI doomsday. (I think I put something like 1 in 10 million for the next fifty years, expressed as a percentage: about 0.00001%. Everything else was 0, though in the case of 25 years, I just didn't know how many zeros to give it.)

'Tragic AGI disasters' are fairly likely, though. For example, an AGI that alters traffic-light timing to cause crashes, or that intentionally sabotages things it is supposed to repair. Or even an AGI that is well aligned to the wrong people or moral framework, doing things like refusing to allow necessary medical procedures due to expense even when people are willing to pay with their own money (perhaps because it judges the person to be worth less than the cost of the procedure, and thus the procedure to have negative utility). Alternatively, it could predict that the people wanting the procedure were being incoherent and would actually value their kids getting the money more, but feel they have to try. Whether this is correct or not, it would still be AGI killing people.

I would actually rate the risk of Tool AI as higher, because humans will be using such tools to try to defeat other humans, and they could very well be strong enough to notably enhance the things humans are bad at. (And most of the things a moderately superhuman AGI could do would be doable sooner with Tool AI and an unaligned human.) An AI could help humans design a better virus, something like 'Simian Hemorrhagic Fever' but one that affects humans and spares people with certain genetic markers (markers denoting the ethnicity or other traits of the people making it). Humans would then test, manufacture, distribute, and use it to destroy their enemies. Then, oops, it mutates and hits everyone. This is still a very unlikely doom, though.