Posts

(Cryonics) can I be frozen before being near-death? 2023-03-01T06:44:58.532Z
What moral systems (e.g utilitarianism) are common among LessWrong users? 2023-02-23T03:33:05.811Z
Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI? 2023-01-25T17:17:33.379Z
hollowing's Shortform 2023-01-16T18:10:53.624Z

Comments

Comment by hollowing on (Cryonics) can I be frozen before being near-death? · 2023-03-01T07:59:45.940Z · LW · GW

What kind of professional could I discuss this with?

Comment by hollowing on (Cryonics) can I be frozen before being near-death? · 2023-03-01T07:26:20.786Z · LW · GW

I'm not; what makes it unlikely? Would it prevent an AGI from reviving me, too?

Comment by hollowing on What moral systems (e.g utilitarianism) are common among LessWrong users? · 2023-02-23T05:04:23.031Z · LW · GW

I'm sorry but that's not actually what I meant. I didn't mean that the two are incompatible and I agree with you that they're not. I meant what the other user wrote: my friend was wondering if "most here 'just' want to be immortal no matter the cost and don't really care about morality otherwise."

I'll try to be clearer with my wording here in the future. I try to keep things short so as not to waste readers' time, since the time of users here is a lot more impactful than that of most others.

Comment by hollowing on What moral systems (e.g utilitarianism) are common among LessWrong users? · 2023-02-23T04:41:46.731Z · LW · GW

Yeah, that was their hypothesis, and thanks for the answer.

Comment by hollowing on What moral systems (e.g utilitarianism) are common among LessWrong users? · 2023-02-23T04:12:24.278Z · LW · GW

It would imply a moral system based on maximizing one's personal desires, rather than maximizing well-being across all life capable of suffering (which is what I meant by utilitarianism), or some other moral system.

You can disregard it if you want; I was just curious what moral beliefs motivate the users here.

Comment by hollowing on What moral systems (e.g utilitarianism) are common among LessWrong users? · 2023-02-23T03:36:23.207Z · LW · GW

They don't necessarily have any relation, which is the point: it's a different motive.

Comment by hollowing on Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI? · 2023-01-26T03:11:42.314Z · LW · GW

"I think the most likely scenario of actually trying this with an AI in real life is that you end up with a strategy that is convincing to humans and ends up being ineffective or unhelpful in reality."

I agree this would be much easier. However, I'm wondering why you think an AI would prefer it, if it has the capability to do either. I can see some possible reasons (e.g., an AI may not want problems of alignment to be solved). Do you think that would be an inevitable characteristic of an unaligned AI with enough capability to do this?

Comment by hollowing on Could AI be used to engineer a sociopolitical situation where humans can solve the problems surrounding AGI? · 2023-01-25T21:45:50.460Z · LW · GW

Thanks for the response. I did think of this objection, but wouldn't it be obvious if the AI were trying to engineer a different situation than the one requested? E.g., wouldn't such a strategy seem unrelated and unconventional?

It also seems like a hypothetical AI with just enough ability to generate a strategy for the desired situation would not be able to engineer a strategy for a different situation that would both work and deceive the human actors. That is, the latter seems harder and would require an AI with greater ability.

Comment by hollowing on hollowing's Shortform · 2023-01-25T02:12:16.227Z · LW · GW

edit: reposted this comment as a 'question' here https://www.lesswrong.com/posts/eQqk4X8HpcYyjYhP6/could-ai-be-used-to-engineer-a-sociopolitical-situation

Comment by hollowing on hollowing's Shortform · 2023-01-17T16:47:14.426Z · LW · GW

I'm new to alignment (I've been casually reading for a couple of months). I'm drawn to the topic by long-termist arguments. I'm a moral utilitarian, so it seems highly important to me. However, I have a feeling I misunderstood your post. Is this the kind of motive/draw you meant?

Comment by hollowing on hollowing's Shortform · 2023-01-17T10:46:17.082Z · LW · GW

I see; well, I'm not sure what to do then. I inherited a lot of money and I want to give most of it to alignment groups.

Comment by hollowing on hollowing's Shortform · 2023-01-16T18:10:53.851Z · LW · GW

What are the most cost-effective alignment organizations to donate to? I'm aware of MIRI and https://futureoflife.org/.

Comment by hollowing on [deleted post] 2023-01-15T21:35:08.072Z

"Making existential choices on such a basis is always a bad idea. What is needed is better information" Regardless of the choice you make, the choice is being made with weak data. Although strong data is the ideal, going with a choice weak data suggests against is worse than going with the choice it favors. Of course, if there is a way to get better information, we should do that first if we have time.

"Would you commit suicide if you thought that it was 60% likely that your life would be of negative value?" Not necessarily. However, if I exhausted all potential better alternatives like investigating further, then in principle yes as I'm a utilitarian. That said, this question has a false premise; I control the impacts of my life, and can make them positive. Not so with civilization.

Comment by hollowing on [deleted post] 2023-01-15T13:35:21.836Z

Yes, I only have what I consider to be an educated suspicion about where current human civilization might fall in the range of possible civilizations. However, in terms of felicific calculus (https://en.wikipedia.org/wiki/Felicific_calculus), weak evidence is still valid. If it is all we have to go by, we should still go by it, especially considering the gravity of the potential consequences. Lack of strong evidence is not an argument for the status quo; that would be an example of status quo bias (https://en.wikipedia.org/wiki/Status_quo_bias).

Your second line is an emotional appeal.

Comment by hollowing on [deleted post] 2023-01-15T13:32:33.535Z

"do you really want to give up the one shot we have at making a better world for biological life?" is a misleading argument because, as you know, humanity may well not create an AGI that makes the world better for life (biological or otherwise). 

"it is exceedingly unlikely that we will destroy life on earth" is a valid objection if true though. 

Comment by hollowing on [deleted post] 2023-01-15T13:28:15.814Z

Even if we assume the human species is typical, it doesn't follow that current capitalist civilization, with all its misaligned incentives (the ones we're seeing drive the development of AI), is typical. And there's no reason to assume this economic system would be shared by a society elsewhere.

Comment by hollowing on [deleted post] 2023-01-15T13:26:30.254Z

What does "relevant models" mean?

Comment by hollowing on [deleted post] 2023-01-15T13:22:59.993Z

No, I think the same argument could apply to the extinction of humans only; it just seemed less plausible to me that this would happen than that all life on Earth would be wiped out.

In fact, I have doubts about whether it is even possible to steer AGI in a direction that ends life on Earth but does not radically transform the rest of the reachable universe too. But if it is possible, this would be a potential argument for it.