Posts

Largest open collection quotes about AI 2019-07-12T17:18:20.401Z

Comments

Comment by teradimich on Nick Bostrom’s new book, “Deep Utopia”, is out today · 2024-03-30T20:11:14.503Z · LW · GW

It seems that in 2014 he believed that p(doom) was less than 20%

Comment by teradimich on Most people should probably feel safe most of the time · 2023-05-12T18:55:56.255Z · LW · GW

I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. parts of current-day Ukraine, or if they live together with someone abusive - where they are actually in constant danger.

I live ~14 kilometers from the front line, in Donetsk. Yeah, it's pretty... stressful. 
But I think I'm much more likely to be killed by an unaligned superintelligence than by an artillery barrage. 
Most people survive urban battles, so I have a good chance. 
And in fact, many people worry even less than I do! People get tired of feeling in danger all the time.

Comment by teradimich on Geoff Hinton Quits Google · 2023-05-02T06:12:33.145Z · LW · GW

'“Then why are you doing the research?” Bostrom asked.

“I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”'

'I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.” He looked as if he might elaborate. Then a scientist called out, “Let’s all get drinks!”'

https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

Hinton seems to be more responsible now!

Comment by teradimich on Four mindset disagreements behind existential risk disagreements in ML · 2023-04-11T13:08:49.341Z · LW · GW

The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with "half of us think there's a 10+% chance of our work resulting in an existential catastrophe".

In fairness, this is not quite half of the researchers. It is half of those who agreed to take the survey.

'We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. [...] We received 738 responses, some partial, for a 17% response rate'.

I expect that worried researchers are more likely to agree to participate in the survey.
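
As a toy illustration of this selection effect (all numbers beyond the quoted 738/4271 are assumptions: suppose half of respondents report a 10+% chance of catastrophe, and suppose "worried" researchers are some factor k more likely to respond than "unworried" ones), the implied fraction among everyone contacted would be noticeably lower:

```python
# Toy model of survey selection bias (illustrative assumptions only).
# Among respondents, a fraction f report a 10+% chance of existential catastrophe.
# If "worried" researchers respond k times as often as "unworried" ones, the
# fraction p of worried researchers in the full contacted population satisfies:
#   f = p*k / (p*k + (1 - p))   =>   p = f / (f + k*(1 - f))

def implied_population_fraction(f_respondents: float, response_ratio_k: float) -> float:
    """Back out the population fraction from the respondent fraction."""
    return f_respondents / (f_respondents + response_ratio_k * (1.0 - f_respondents))

if __name__ == "__main__":
    f = 0.5  # "half of us" among the 738 respondents (17% of ~4271 contacted)
    for k in (1.0, 1.5, 2.0, 3.0):  # hypothetical response-rate ratios
        print(f"k={k}: implied population fraction = {implied_population_fraction(f, k):.2f}")
```

With a 2x higher response rate among worried researchers, "half of respondents" would correspond to only about a third of all contacted researchers.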

Comment by teradimich on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-06T16:00:21.893Z · LW · GW

Thanks for your answer, this is important to me.

Comment by teradimich on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-06T10:38:37.492Z · LW · GW

I am not an American (so excuse my bad English!), so my opinion about the admissibility of an attack on US data centers is not so important. It is not my country.

But reading about the bombing of Russian data centers as an example was unpleasant. It sounds like Western bias to me. And not only to me.

'What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question?'.

If the text is aimed at readers beyond the First World, then perhaps the authors should add a clarification like the one you made! Then it would not look like political hypocrisy. Or they could avoid writing about air strikes at all, since people get distracted discussing them.

Comment by teradimich on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-06T09:04:38.897Z · LW · GW

I'm not an American, so my consent doesn't mean much :)

Comment by teradimich on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-06T03:55:59.507Z · LW · GW

Suppose China and Russia accepted Yudkowsky's initiative, but the USA did not. Would you support bombing an American data center?

Comment by teradimich on Who are some prominent reasonable people who are confident that AI won't kill everyone? · 2022-12-28T09:25:44.296Z · LW · GW

I can provide several links, and you can choose the ones that fit, if any do. The problem is that I kept not the most complete justifications, but the most... definite and brief ones. I will try not to repeat those that are already in the answers here.

Ben Goertzel

Jürgen Schmidhuber

Peter J. Bentley

Richard Loosemore

Jaron Lanier and Neil Gershenfeld


Magnus Vinding and his list

Tobias Baumann

Brian Tomasik
 

Maybe Abram Demski? But he changed his mind, probably.
Well, Stuart Russell. But that is from a book. I can quote:

I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”

There are also a large number of reasonable people who have directly called themselves optimists or given a relatively small probability of extinction from AI. But usually they did not justify this in ~500 words…

I also recommend this book.

Comment by teradimich on Who are some prominent reasonable people who are confident that AI won't kill everyone? · 2022-12-27T19:28:10.443Z · LW · GW

My fault. I should have just copied individual quotes and links here.

Comment by teradimich on Who are some prominent reasonable people who are confident that AI won't kill everyone? · 2022-12-07T09:33:08.742Z · LW · GW

I have collected many quotes, with links, about the prospects of AGI. Most of the people quoted were optimistic.

Comment by teradimich on Theoretical Neuroscience For Alignment Theory · 2021-12-14T22:34:46.790Z · LW · GW

Glad you understood me. Sorry for my English!
Of course, the following examples by themselves do not prove that the entire problem of AGI alignment can be solved! But it seems to me that this direction is interesting and strongly underrated. At the least, someone smarter than me can look at this idea and say that it is bullshit.

Partly this is a source of intuition for me that the creation of an aligned superintelligence is possible. And maybe not even as hard as it seems.
We have many examples of creatures that follow the goals of someone more stupid than themselves. And the mechanism responsible for this should not be very complex.

Such a stupid process as natural selection was able to create the capabilities mentioned. It should be achievable for us too.

Comment by teradimich on Theoretical Neuroscience For Alignment Theory · 2021-12-08T14:38:20.840Z · LW · GW

It seems to me that the brains of many animals can be aligned with the goals of someone much more stupid than themselves.
People and pets. Parasites and animals. Even ants and fungus.
Perhaps the relationship that we would like to have with a superintelligence is already observed on a much smaller scale.

Comment by teradimich on Ngo and Yudkowsky on AI capability gains · 2021-11-19T18:26:58.738Z · LW · GW

I apologize for the stupid question. But…

Do we have a better chance of surviving in a world that is closer to Orwell's '1984'?
It seems to me that we are moving towards more global surveillance and control. China's regime in 2021 may seem extremely liberal to an observer in 2040.

Comment by teradimich on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-16T13:57:40.135Z · LW · GW

I guess I misused the term gray goo. I apologize for this and for my bad English.
Can it be replaced with 'using nanotechnology to attain a decisive strategic advantage'?
I mean the discussion of the prospects of nanotechnology on SL4 20+ years ago. Especially this:

My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.

I understand that EY's views have changed in many ways since then. But I am interested in the views of experts on the possibility of using nanotechnology for the scenarios that he implies now. I have found very little.

Comment by teradimich on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-15T14:21:26.609Z · LW · GW

Nanosystems are definitely possible, if you doubt that read Drexler’s Nanosystems and perhaps Engines of Creation and think about physics. 

Is there anything like a survey of experts on the feasibility of Drexlerian nanotechnology? Is there any consensus among specialists about the possibility of a gray goo scenario?

Drexler and Yudkowsky both greatly overestimated the impact of molecular nanotechnology in the past.

Comment by teradimich on What is Compute? - Transformative AI and Compute [1/4] · 2021-09-26T17:34:21.737Z · LW · GW

I do not know the opinions of experts on this issue. And I lack the competence to draw such conclusions myself, sorry.

Comment by teradimich on What is Compute? - Transformative AI and Compute [1/4] · 2021-09-24T17:47:34.894Z · LW · GW

AlexNet was the first publication that leveraged graphical processing units (GPUs) for the training run

Do you mean the first of the data points on the chart? GPUs were used for DL long before AlexNet. References: [1], [2], [3], [4], [5].

Comment by teradimich on What 2026 looks like · 2021-08-23T05:35:07.276Z · LW · GW

The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015. The most optimistic estimate for project Elisson would be 2006; the earliest nanowar, 2003.

But this was in 1999, yes.

Comment by teradimich on "AI and Compute" trend isn't predictive of what is happening · 2021-04-03T09:01:14.669Z · LW · GW

Probably this:

When we didn’t have enough information to directly count FLOPs, we looked GPU training time and total number of GPUs used and assumed a utilization efficiency (usually 0.33)
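
For concreteness, here is a minimal sketch of that estimation rule. The 0.33 utilization figure comes from the quote above; the device count, training time, and peak throughput in the example are placeholder values you would replace with the actual hardware specs.

```python
# Rough training-compute estimate in the style described above:
#   total FLOPs ~= (number of accelerators) * (training time in seconds)
#                  * (peak FLOP/s per accelerator) * (utilization efficiency)
# All hardware numbers below are placeholders; 0.33 utilization is the
# assumption mentioned in the quoted methodology.

def estimate_training_flops(num_devices: int,
                            training_days: float,
                            peak_flops_per_device: float,
                            utilization: float = 0.33) -> float:
    seconds = training_days * 24 * 3600
    return num_devices * seconds * peak_flops_per_device * utilization

if __name__ == "__main__":
    # Example with made-up hardware: 8 GPUs at ~1e13 peak FLOP/s for 5 days.
    print(f"{estimate_training_flops(8, 5, 1e13):.2e} FLOPs")
```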

Comment by teradimich on "AI and Compute" trend isn't predictive of what is happening · 2021-04-02T11:11:50.097Z · LW · GW

This can be useful:

We trained the league using three main agents (one for each StarCraft race), three main exploiter agents (one for each race), and six league exploiter agents (two for each race). Each agent was trained using 32 third-generation tensor processing units (TPUs) over 44 days
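
Combining this with the estimation rule quoted in the comment above gives a back-of-the-envelope figure per agent. The peak throughput per TPU and the 0.33 utilization are rough assumptions, not numbers from the paper:

```python
# Back-of-the-envelope compute for one AlphaStar agent, using the quoted
# 32 third-generation TPUs over 44 days. The peak throughput (~1e14 FLOP/s
# per device, order of magnitude) and 0.33 utilization are assumptions.

num_tpus = 32
days = 44
peak_flops = 1e14     # assumed peak FLOP/s per device
utilization = 0.33    # assumption borrowed from the methodology quoted above

total_flops = num_tpus * days * 24 * 3600 * peak_flops * utilization
print(f"~{total_flops:.1e} FLOPs per agent")  # roughly 4e21 under these assumptions
```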

Comment by teradimich on Draft report on AI timelines · 2020-09-19T09:56:59.515Z · LW · GW

Perhaps my large collection of quotes about the impact of AI on the future of humanity here will be helpful.

Comment by teradimich on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-06-03T18:57:00.953Z · LW · GW

Then should the majority of experts from the FHI be considered extreme optimists, the same 20%? I really tried to find all the publicly available forecasts by experts, and very few of them were confident that AI would lead to the extinction of humanity. But I have no reason not to believe you, or Luke Muehlhauser, who described AI safety experts as even more confident pessimists: 'Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction'. The reason may be disagreement about whose opinion is worth considering.

Comment by teradimich on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-06-02T18:13:34.979Z · LW · GW

What about this and this? Here, some researchers at the FHI give different probabilities.

Comment by teradimich on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T22:51:46.841Z · LW · GW

I meant the results of polls like this: https://www.thatsmags.com/china/post/15129/happy-planet-index-china-is-72nd-happiest-country-in-the-world. Well, it doesn't matter.
I think I could sleep better if everyone acknowledged that existential risks would be reduced in a less free world.

Comment by teradimich on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T22:08:30.497Z · LW · GW

I'm not sure that I can trust news sources that have an interest in portraying China a certain way.
In any case, this does not seem to stop the Chinese people from feeling happier than people in the US.
I cited that date just to contrast with your forecast. My intuition points more towards AI in the 2050s or 2060s.
And yes, I expect that by 2050 it will be possible to monitor the behavior of every person in such countries 24/7. I can't say that this makes me happy, but I think the vast majority will put up with it. I don't believe in a liberal democratic utopia, but the end of the world seems unlikely to me.

Comment by teradimich on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-08T21:29:42.032Z · LW · GW

Just wondering. Why are some people so often convinced that a Chinese victory in the AGI race will lead to the end of humanity? The Chinese strategy seems to me much more focused on the long term.
The most prominent experts give a 50% chance of AI by 2099 (https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/book-review-architects-of-intelligence). And I expect that the world in 80 years will be significantly different from the present. Well, you can call it a totalitarian hell, but I think the probability of an existential disaster in such a world will be lower.

Comment by teradimich on Discontinuous progress in history: an update · 2020-04-15T04:36:04.789Z · LW · GW

How about paying attention to discontinuous progress in tasks that are related to DL? It is very easy to track with https://paperswithcode.com/sota . And https://sotabench.com/ is showing diminishing returns.

Comment by teradimich on Rohin Shah on reasons for AI optimism · 2020-04-08T21:30:03.656Z · LW · GW

(I apologize in advance for my English.) Well, only the fifth column shows an expert's assessment of the impact of AI on humanity, so any other percentages can be quickly skipped. It took me a few seconds to examine 1/10 of the table with Ctrl+F, so it would not take long to go through the whole table this way. Unfortunately, I can't think of anything better.

Comment by teradimich on Rohin Shah on reasons for AI optimism · 2020-04-08T21:17:57.070Z · LW · GW

It may be useful.

’Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it’s only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.’ — Luke Muehlhauser, https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/

’In terms of falsifiability, if you have an AGI that passes the real no-holds-barred Turing Test over all human capabilities that can be tested in a one-hour conversation, and life as we know it is still continuing 2 years later, I’m pretty shocked. In fact, I’m pretty shocked if you get up to that point at all before the end of the world.’ — Eliezer Yudkowsky, https://www.econlib.org/archives/2016/03/so_far_my_respo.html

Comment by teradimich on My current framework for thinking about AGI timelines · 2020-03-30T19:07:40.620Z · LW · GW

I have collected a huge number of quotes from various experts about AGI: about the timing of AGI, about the possibility of a fast takeoff, and about its impact on humanity. Perhaps this will be useful to you.

https://docs.google.com/spreadsheets/d/19edstyZBkWu26PoB5LpmZR3iVKCrFENcjruTj7zCe5k/edit?fbclid=IwAR1_Lnqjv1IIgRUmGIs1McvSLs8g34IhAIb9ykST2VbxOs8d7golsBD1NUM#gid=1448563947

Comment by teradimich on Will AI undergo discontinuous progress? · 2020-02-24T18:59:39.193Z · LW · GW

Then the AI will have to become really smarter than the very large groups of people who will be trying to control the world. And by that time people will surely be more prepared than they are now. I am sure that the laws of physics allow the quick destruction of humanity, but it seems to me that without a swarm of self-replicating nanorobots, the probability of our survival after the creation of the first AGI exceeds 50%.

Comment by teradimich on Will AI undergo discontinuous progress? · 2020-02-24T16:57:34.043Z · LW · GW

It seems that this option leaves humanity more chances of victory than the gray goo scenario. And even if we screw up the first time, it can be fixed. Of course, this does not eliminate the need for AI alignment efforts anyway.

Comment by teradimich on Will AI undergo discontinuous progress? · 2020-02-24T14:07:27.179Z · LW · GW

Is AI Foom possible if even a godlike superintelligence cannot create gray goo? Some doubt that nanobots capable of reproducing so quickly are possible. Without them, an AI's ability to quickly take over the world in the coming years would be significantly reduced.

Comment by teradimich on Rohin Shah on reasons for AI optimism · 2019-11-01T20:31:10.469Z · LW · GW

Indeed, quite a lot of experts are more optimistic than it seems. See this or this. Well, I collected a lot of quotes from various experts about the possibility of human extinction due to AI here. Maybe someone is interested.

Comment by teradimich on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2019-10-06T03:33:05.639Z · LW · GW

It seems Russell does not agree with what is considered the LW consensus. From 'Architects of Intelligence: The Truth About AI from the People Building It':

When [the first AGI is created], it’s not going to be a single finishing line that we cross. It’s going to be along several dimensions.
[...]
I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”