Comments sorted by top scores.
comment by Viliam · 2023-01-15T17:56:17.820Z · LW(p) · GW(p)
"present humanity seems exceptionally irrational"
Compared to... fictional aliens?
I understand the feeling that the world sucks, but the aliens would also be created by evolution, which involves competition against members of the same species, preferring short-term benefits over long-term ones, etc.
comment by RHollerith (rhollerith_dot_com) · 2023-01-14T22:47:31.026Z · LW(p) · GW(p)
Do you prefer extinction of all life on earth to the extinction of just the humans?
In other words, do you put more hope in an alien species than in a non-human Earth species (the lions, say) evolving into something that can wield technology?
↑ comment by hollowing · 2023-01-15T13:22:59.993Z · LW(p) · GW(p)
No, I think the same argument could apply to the extinction of humans only, it just seemed less plausible to me that this would happen compared to all life on earth being wiped out.
In fact, I have doubts about whether it is even possible to steer AGI in a direction that ends life on Earth but does not also radically transform the rest of the reachable universe. But if it is possible, this would be a potential argument for it.
comment by Raemon · 2023-01-14T22:33:59.216Z · LW(p) · GW(p)
This post has relevant models:
https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1 [LW · GW]
comment by avturchin · 2023-01-15T09:06:49.000Z · LW(p) · GW(p)
We are typical, so it is unlikely that aliens will be better.
↑ comment by hollowing · 2023-01-15T13:28:15.814Z · LW(p) · GW(p)
Even if we assume the human species is typical, it doesn't follow that current capitalist civilization, with all its misincentives (the ones we're seeing drive the development of AI), is typical. And there's no reason to assume this economic system would be shared by a society elsewhere.
comment by the gears to ascension (lahwran) · 2023-01-15T03:37:47.415Z · LW(p) · GW(p)
it is exceedingly unlikely that we will destroy life on earth, although we might see genetic life replaced by some new technology generated by ai that displaces the animal kingdom. do you really want to give up the one shot we have at making a better world for biological life?
↑ comment by hollowing · 2023-01-15T13:32:33.535Z · LW(p) · GW(p)
"do you really want to give up the one shot we have at making a better world for biological life?" is a misleading argument because, as you know, humanity may well not create an AGI that makes the world better for life (biological or otherwise).
"it is exceedingly unlikely that we will destroy life on earth" is a valid objection if true though.
↑ comment by the gears to ascension (lahwran) · 2023-01-15T16:56:38.299Z · LW(p) · GW(p)
I don't see how we could possibly prevent ai from making a world that is as good as the world has ever been for life, according to the agi. I don't think a paperclipper is a total failure: if I were dead, my preferences would of course be mortally angry that humanity died, but they would still have sympathy for the ai's interest in shiny objects or what have you. similarly, if all intelligent life were wiped out, I would still prefer there to be plants rather than a barren rock with no cellular life at all. but I would far far far prefer that intelligent beings survive to coexist and help take care of each other.
more to the point - I think we're going to solve safety, which will involve changing the nature of ownership contracts so that capitalism can no longer take over all the capital. markets should be designed in ways that protect all their members, and ai safety is going to connect deeply to market design, especially ai-to-ai market design.
don't give up yet!
comment by Richard_Kennaway · 2023-01-15T07:45:00.958Z · LW(p) · GW(p)
How many alien civilisations have you examined, to judge whether humans are exceptionally irrational among them? None, I think. You merely “feel that” they “could” be better.[1]
If safe AGI is to be created, it must be created by someone. Look around you. WE’RE IT. THIS IS IT.
[1] Pro tip: there is no such thing as "feeling that".
↑ comment by hollowing · 2023-01-15T13:35:21.836Z · LW(p) · GW(p)
Yes, I only have what I consider to be educated suspicion about where current human civilization might fall in the range of possible civilizations. However, in terms of felicific calculus (https://en.wikipedia.org/wiki/Felicific_calculus), weak evidence is still valid. If it is all we have to go by, we should still go by it, especially considering the gravity of the potential consequences. Lack of strong evidence is not an argument for the status quo; that would be an example of status quo bias (https://en.wikipedia.org/wiki/Status_quo_bias).
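To make the expected-value reasoning concrete, here is a minimal sketch; the credence p, the options A and B, and the utilities are purely illustrative assumptions, not figures anyone in this thread has committed to:

```latex
% Acting on weak evidence: a two-option expected-value comparison.
% p is the credence (from weak evidence) that option A yields the better outcome;
% U_good and U_bad are the utilities of the better and worse outcomes,
% with the stakes assumed symmetric between the two options.
\[
  \mathbb{E}[U \mid A] = p\,U_{\text{good}} + (1 - p)\,U_{\text{bad}}, \qquad
  \mathbb{E}[U \mid B] = (1 - p)\,U_{\text{good}} + p\,U_{\text{bad}}
\]
% Their difference is (2p - 1)(U_good - U_bad), which is positive whenever p > 1/2,
% so even weak evidence (say p = 0.6) favours option A; raising the stakes changes
% how much is at risk, not which option the calculation picks.
```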
Your second line is an emotional appeal.
↑ comment by Richard_Kennaway · 2023-01-15T16:45:13.106Z · LW(p) · GW(p)
“Educated suspicion” = wild guess. Making existential choices on such a basis is always a bad idea. What is needed is better information. Would you commit suicide if you thought that it was 60% likely that your life would be of negative value?
"Your second line is an emotional appeal."
You say that as if it’s a bad thing. I take it you’re talking about the all-caps part. Ignore the emphasis if you like. I stand by all of it.
↑ comment by hollowing · 2023-01-15T21:35:08.072Z · LW(p) · GW(p)
"Making existential choices on such a basis is always a bad idea. What is needed is better information" Regardless of the choice you make, the choice is being made with weak data. Although strong data is the ideal, going with a choice weak data suggests against is worse than going with the choice it favors. Of course, if there is a way to get better information, we should do that first if we have time.
"Would you commit suicide if you thought that it was 60% likely that your life would be of negative value?" Not necessarily. However, if I exhausted all potential better alternatives like investigating further, then in principle yes as I'm a utilitarian. That said, this question has a false premise; I control the impacts of my life, and can make them positive. Not so with civilization.
↑ comment by Richard_Kennaway · 2023-01-16T09:38:29.842Z · LW(p) · GW(p)
"That said, this question has a false premise; I control the impacts of my life, and can make them positive."
I agree (although there are plenty on LessWrong and elsewhere who wouldn't).
"Not so with civilization."
Civilisation changes according to the choices of all of us, some of them big, most of them small. Do you decline to take part in any cooperative effort when your own part is a small one?