Comments
I don't really want to argue about this, but "those seem much smaller than the ones you can get out of meditation" is a subjective statement that people of different temperaments would disagree with, as long as there are no objective facts at stake, such as what happens to your consciousness after death (you go to Heaven if you are a Christian, or you stop the rebirth that would otherwise have continued if you have achieved enlightenment). Anyway, I believe there's nothing after death, so do what makes you happy, I suppose.
If that's the reasoning, why pick Buddhism and meditation, when it's so much easier to find religious communities in the West as a Christian, praying also has benefits for mental wellbeing, and Christians, like Buddhists, are measurably happier than nonreligious people? I think it's possible to be a secular Christian who doesn't believe in the supernatural, to go through the motions of a Christian life without fully believing in it, and still reap at least some of its benefits.
I have probably spent over a thousand hours practicing mindfulness meditation, and I was fairly successful at achieving what I wanted to achieve with it. I have also read a lot of Buddhist books.
However, I think the basis for Buddhism crumbles if you don't believe in rebirth, karma, Samsara, narakas, the Buddha's omniscience, and the other metaphysical claims made by religious Buddhists. I've become a physicalist, so I no longer believe those claims, and I no longer meditate.
If you simply disappear after death, I don't see the point in attaining meditative bliss, especially if it leads me to see the world in a less truthful way. Buddhism as I see it is centered on the suffering of Samsara, not just the occasional suffering of this one life.
Compare this to prayer, which also feels very good but likewise only makes sense in the context of theistic beliefs.
I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In this interview four months ago he said it's good that there has been more focus on the risks recently, though still slightly less than is optimal, but that he wants to focus on the upsides because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is lower than it used to be because of this public awareness of the risks.
Replying to David Hornbein.
Thank you for this comment; this was basically my view as well. I think the employees of OpenAI are simply excited about AGI, have committed their lives and long working hours to making it a reality, and believe AGI would be good for humanity and also good for them personally. My view is that they are very emotionally invested in building AGI, and stopping all that progress for reasons that feel speculative, theoretical, and not very tangible would be painful.
Not that I would agree with that, assuming this is correct.
Overall I agree with this. I give most of my money to global health organizations, but I also give some to AGI safety, because I do think it makes sense under a variety of worldviews. I gave some of my thoughts on the subject in this comment on the Effective Altruism Forum. To summarize: if there's a continuation of consciousness after death, then AGI killing lots of people is not as bad as it would otherwise be, and there might be some unknown aspects of the relationship between consciousness and the physical universe that could affect the odds.
Do the concepts behind AGI safety only make sense if you have roughly the same worldview as the top AGI safety researchers: secular atheism, reductive materialism/physicalism, and a computational theory of mind?
Do you have to have roughly the same kind of worldview as the top AI alignment researchers? Do you have to be a secular atheist and reductive naturalist/physicalist holding a computational theory of mind?
Does anyone know how close we are to tasks that require operating in the physical world but are very easy for human beings, like loading a dishwasher or making an omelette? It seems to me that we are quite far away.
I don't think those are serious obstacles, but I will delete this message if anyone complains.
How do doomy AGI safety researchers and enthusiasts find joy while always maintaining the framing that the world is probably doomed?
Does anyone know what exactly DeepMind's CEO Demis Hassabis thinks about AGI safety? How seriously does he take it, and how much of his time does he spend on AGI safety research compared to AI capabilities research? What does he think is the probability that we will succeed and build a flourishing future?
In this LessWrong post there are several excerpts from Demis Hassabis:
Well to be honest with you I do think that is a very plausible end state–the optimistic one I painted you. And of course that's one reason I work on AI is because I hoped it would be like that. On the other hand, one of the biggest worries I have is what humans are going to do with AI technologies on the way to AGI. Like most technologies they could be used for good or bad and I think that's down to us as a society and governments to decide which direction they're going to go in.
And
Potentially. I always imagine that as we got closer to the sort of gray zone that you were talking about earlier, the best thing to do might be to pause the pushing of the performance of these systems so that you can analyze down to minute detail exactly and maybe even prove things mathematically about the system so that you know the limits and otherwise of the systems that you're building. At that point I think all the world's greatest minds should probably be thinking about this problem. So that was what I would be advocating to you know the Terence Tao’s of this world, the best mathematicians. Actually I've even talked to him about this—I know you're working on the Riemann hypothesis or something which is the best thing in mathematics but actually this is more pressing. I have this sort of idea of like almost uh ‘Avengers assembled’ of the scientific world because that's a bit of like my dream.
My own guesses (and I want to underline that these are just my guesses): he thinks the alignment problem is a real problem, but he doesn't seem to take it as seriously as most AGI safety researchers do; I don't think he personally spends much time on AGI safety research, although there are AGI safety researchers on his team and they are hiring more; and I think he believes there is over a 50% probability that we will, on some level, succeed.
This is a meta-level question:
The world is very big and very complex, especially if you take the future into account. In the past it has been hard to predict the future; I think most predictions about it have failed. Artificial intelligence as a field is very big and complex, at least as it appears to me personally. Eliezer Yudkowsky's brain is small compared to the size of the world; all the relevant facts about AGI x-risk probably don't fit into his mind, nor do I think he has the time to absorb them all. Given all this, how can you justify the level of certainty in Yudkowsky's statements, instead of being more agnostic?