Comments
This doesn't feel like it's really engaging at all with the content of the post. I don't mention "legitimacy" 10 times for nothing.
It was meant as an April Fool's in the same way the Death with Dignity post was an April Fool's.
trying to make it look like belief in witchcraft is very similar to belief in viruses.
I feel like you're missing the point. Of course the germ theory of disease is superior to 'witchcraft.' But in the average person's use of the term 'virus,' the understanding of what is actually going on is almost as shallow as 'witchcraft.' 'Virus' does point toward a much deeper and more important scientific understanding, but in its everyday use it serves the same role as 'witchcraft.'
The point of the quote is that sometimes, when you want to get a message across (like 'boil the water before drinking it'), it's easier to put yourself into the other person's ontology and convey the message in terms they would understand, rather than trying to explain all of science.
Neat!
I didn't mean to make 1. sound bad. I'm only trying to put my finger on a crux. My impression is that most prosaic alignment work has 2. in mind, even though MIRI/Bostrom/LW seem to believe that 1. is what we should actually be aiming for. Do prosaic alignment people think that work on human 'control' now will lead to scenario 1 in the long run, or do they reject scenario 1 outright?
I'm just confused about what "optimized for leaving humans in control" could even mean? If a Superintelligence is so much more intelligent than humans that it could find a way, without explicit coercion, to get humans to ask it to tile the universe with paperclips, then "control" seems like a meaningless concept. You would have to force the Superintelligence to treat the human skull, or whatever other boundary of human decision-making, as some kind of inviolable and uninfluenceable black box.
I'm a little worried about what might happen if different parts of the community end up with very different timelines, and thus very divergent opinions on what to do.
It might be useful if we came up with some form of community governance mechanism or heuristics to decide when it becomes justified to take actions that might be seen as alarmist by people with longer timelines. On the one hand, we want to avoid stuff like the unilateralist's curse; on the other, we can't wait for absolutely everyone to agree before raising the alarm.
For China, the Taliban, and the DPRK, I think Fukuyama would probably argue that they don't necessarily disprove his thesis; it's just that they are taking much longer to liberalize than he would have anticipated in the 90s (he also never said that any of this was inevitable).
For Mormons in Utah, I don't think they really pose a challenge, since they seem to quite happily exist within the framework of a capitalist liberal democracy.
Technology, and AGI in particular, is indeed the most credible challenge, and it may force us to reconsider some high-stakes first-principles questions about how power, the economy, society... are organized. Providing some historical context for how we arrived at the answers we now take for granted was one of the main motivations for this post.
Instrumentally, an invisible alpha provides a check on the power of the actual alpha. A king a few centuries ago may have had absolute power, but he still couldn't simply act against what people understood to be the will of the actual alpha (God).
Thank you. I really appreciate this clarification.
I meant God Is Great as a strong endorsement of LessWrong. I am aware that drawing an analogy with religion is often used to discredit ideas and movements, but one of the things I want to push back against is the assumption that this move is necessarily discrediting. Explaining why I think so, and why in the case of EA/LW I think the comparison is flattering and useful, requires a lot of groundwork on my part (historical background on how religions came to occupy the place they do today within culture, classical liberal political philosophy...). This is work I haven't done yet, and I might be wrong about how I view this, so I guess I shouldn't have been too surprised by the negative reaction.
I really should have written and posted something about my heterodox background assumptions first, and gotten feedback on them, before I published something building on them.
No, it's just that we've rejected the concept of "God" as wrong, i.e. not in accordance with reality. Some ancient questions really are solved, and this is one of them. Calling reality "God" doesn't make it God, any more than calling a dog's tail a leg makes it a leg. The dog won't start walking on it.
The disagreement here seems to be purely over definitions? The way I use "God" to mean "reality" is the same way Scott Alexander uses "Moloch" to mean "coordination failure" or how both Yudkowsky and Scott Alexander have used "God" to mean "evolution."
The claimed evolution of ideas of God towards "reality" is the evolution of those ideas towards "actually, there's no such thing."
That's essentially the meaning I'm trying to get across in God Is Great, just using an unusual definition of 'God' to make the point more palatable to a theistic audience. The reality we have uncovered by rationality and the scientific method is all there is.
Besides, you made a brand new account for that posting, acted plaintively injured when it got a poor reception, and then suggested we're not as open-minded as we might like to think. I've seen the pattern before on LessWrong. It was trolling then. Why should I not think that it is trolling now?
You seem to consider me an outsider troll. I meant God Is Great as a ringing endorsement of the LessWrong community and worldview, admittedly presented in a heterodox fashion. I consider myself part of this community and only wish the best for it. I meant this post as an earnest request for feedback and a way to kick-start a discussion about what I think might be a blind spot in LW's understanding of religion and theism. I'm just trying to explore a good-faith disagreement. I regret that it ended up sounding so hostile and confrontational.
Thank you, I find this comment quite constructive.
My understanding of neuroscience has convinced me that consciousness is fundamentally dependent on the brain.
I had a similar journey.
The Durkheimian "society worshiping itself" phenomenon is real, common, and by no means limited to religion as traditionally defined. It is often wildly irrational and is pretty much the opposite of what LW aspires to.
I guess this can take a pretty nasty and irrational form, but I see it as continuous with other benign community-bonding rituals and pro-social behavior (like Petrov Day or the solstice).
my impression was that in your post the payload is missing
Okay, that seems fair. It is true that just from that post, it's unclear what my point is (see hypothesis 1).
I think it matters how we construct our mythical analogies. In Scott Alexander's Moloch post, he argues that we should "kill God" and replace it with Elua, the god of human values. I think this is the wrong way to frame things. I assume that Scott uses 'God' to refer to the blind idiot god of evolution, but that's a very uncharitable and, in my opinion, unproductive way of constructing our mythical analogies. I think we should use 'God' to refer to reality, and make our use of the word more in line with how more than half of humanity uses it.
Is your point about "functional anti-epistemology" about it being clear from Scott Alexander's and Sarah Constantin's posts that they're not sympathetic to "actual" belief in Moloch or Ra, while in my post, I sound sympathetic to theism?
So you're claiming that religion (aka team green) is so bad and irrational that any analogy between rationalism (aka team blue) and it amounts to dangerous sabotage? Or that any positive talk of team green is a threat?
It seems to me that the LW (over)reaction to the irrationality of religion is itself pretty irrational and has nothing to do with 'clarity.' If you're rejecting a line of inquiry a priori because it seems threatening to the in-group, I don't consider that "rational."
Edit: This was an overly antagonistic response, in part due to an uncharitable reading of Vladimir_Nesov's comment.
'Invisible alpha' seems like a big step up over actual alpha on the ladder of cultural evolution.
In the end, reality itself has always been the ultimate arbiter of any claim to truth or authority.
You could think of the aim of this post as trying to steelman theism to a rationalist, while simultaneously steelmanning EA/rationalism/... to a theist.
Why “God”? Part of this exercise is to examine how people have used the word “God” throughout history, look at what purpose the concept has served, and, for example, observe how similar the ‘God’ concept is to the rationalist ‘reality’ concept. Arguably, the way people used ‘God’ a thousand years ago is closer to our “reality” concept than to the way many people use ‘God’ today. It is interesting to see what happens when you decide to take certain compatibilist definitions of God seriously.