Comments
What kind of professional could I discuss this with?
I'm not; what makes it unlikely? Would it prevent an AGI from reviving me, too?
I'm sorry, but that's not actually what I meant. I didn't mean that the two are incompatible, and I agree with you that they're not. I meant what the other user wrote: my friend was wondering if "most here 'just' want to be immortal no matter the cost and don't really care about morality otherwise."
I'll try to be clearer with my wording here in the future. I try to keep it short so as not to waste readers' time, since the time of users here is a lot more impactful than that of most others.
Yeah, that was their hypothesis; thanks for the answer.
It would imply a moral system based on maximizing one's personal desires, instead of maximizing well-being across all life capable of suffering (which is what I meant by utilitarianism), or other moral systems.
You can disregard it if you want; I was just curious what moral beliefs motivate the users here.
They don't necessarily have any relation, which is the point: it's a different motive.
I think the most likely outcome of actually trying this with an AI in real life is that you end up with a strategy that is convincing to humans but turns out to be ineffective or unhelpful in reality.
I agree this would be much easier. However, I'm wondering why you think an AI would prefer it, if it has the capability to do either. I can see some possible reasons (e.g., an AI may not want problems of alignment to be solved). Do you think that would be an inevitable characteristic of an unaligned AI with enough capability to do this?
Thanks for the response. I did think of this objection, but wouldn't it be obvious if the AI were trying to engineer a different situation than the one requested? E.g., wouldn't such a strategy seem unrelated and unconventional?
It also seems like a hypothetical AI with just enough ability to generate a strategy for the desired situation would not be able to engineer a strategy for a different situation that would both work and deceive the human actors. That is, the latter seems harder and would require an AI with greater ability.
edit: reposted this comment as a 'question' here https://www.lesswrong.com/posts/eQqk4X8HpcYyjYhP6/could-ai-be-used-to-engineer-a-sociopolitical-situation
I'm new to alignment (I've been casually reading for a couple of months). I'm drawn to the topic by long-termist arguments. I'm a moral utilitarian, so it seems highly important to me. However, I have a feeling I misunderstood your post. Is this the kind of motive/draw you meant?
I see; well, I'm not sure what to do then. I inherited a lot of money and I want to give most of it to alignment groups.
What are the most cost-effective alignment organizations to donate to? I'm aware of MIRI and https://futureoflife.org/ .
"Making existential choices on such a basis is always a bad idea. What is needed is better information" Regardless of the choice you make, the choice is being made with weak data. Although strong data is the ideal, going with a choice weak data suggests against is worse than going with the choice it favors. Of course, if there is a way to get better information, we should do that first if we have time.
"Would you commit suicide if you thought that it was 60% likely that your life would be of negative value?" Not necessarily. However, if I exhausted all potential better alternatives like investigating further, then in principle yes as I'm a utilitarian. That said, this question has a false premise; I control the impacts of my life, and can make them positive. Not so with civilization.
Yes, I only have what I consider to be educated suspicion about where current human civilization might fall in the range of possible civilizations. However, in terms of felicific calculus (https://en.wikipedia.org/wiki/Felicific_calculus), weak evidence is still valid. If it is all we have to go by, we should still go by it, especially considering the graveness of the potential consequences. Lack of strong evidence is not an argument for the status quo; this would be an example of status quo bias (https://en.wikipedia.org/wiki/Status_quo_bias).
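To make that expected-value point concrete, here is a toy felicific-calculus-style calculation; the 60% figure comes from the question quoted above, and the symmetric stakes $V > 0$ are an assumption purely for illustration.
$$
\mathbb{E}[\text{value}] = p\,(-V) + (1-p)\,(+V) = (1-2p)\,V = (1 - 2 \times 0.6)\,V = -0.2\,V < 0
$$
Even a weakly supported $p = 0.6$ is enough to tip the sign of the decision, provided no cheaper way of gathering better information remains.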
Your second line is an emotional appeal.
"do you really want to give up the one shot we have at making a better world for biological life?" is a misleading argument because, as you know, humanity may well not create an AGI that makes the world better for life (biological or otherwise).
"it is exceedingly unlikely that we will destroy life on earth" is a valid objection if true though.
Even if we assume the human species is typical, it doesn't follow that current capitalist civilization, with all its misincentives (the ones we're seeing drive the development of AI), is typical. And there's no reason to assume this economic system would be shared by a society elsewhere.
What does "relevant models" mean?
No, I think the same argument could apply to the extinction of humans only; it just seemed less plausible to me that this would happen compared to all life on earth being wiped out.
In fact, I have doubts about whether it is even possible to steer AGI in a direction that ends life on earth but does not radically transform the rest of the reachable universe too. But if it is possible, this would be a potential argument for it.