My critique of Eliezer's deeply irrational beliefs

post by Jorterder (utebaypi) · 2023-11-16T00:34:05.613Z · LW · GW · 1 comments

This is a link post for https://docs.google.com/document/d/1shpzCKOLI5Uu_tBRgQDG7JIhF5EkzETU3UWGD61iGnI/edit?usp=sharing

Contents

  The problems and irrationalities in Eliezer’s writings
  Simulation problem
  Human intelligence limit problem
  I can demonstrate that Eliezer is breaking the same principles of rationality that he himself advocates. Do what I say, not what I do.
1 comment

The problems and irrationalities in Eliezer’s writings

Changing Your Metaethics [LW · GW]

“If your metaethic tells you to kill people, why should you even listen?  Maybe that which you would do even if there were no morality, is your morality.”

If your metaethic tells you to kill people, you can probably pinpoint the exact reasoning why: for example, killing in self-defense. Generalizing over all cases of metaethics, to say that “murder is wrong” + “metaethics tells you to kill someone” = “you shouldn’t care about metaethics”, is deeply irrational.

“Joy in the Merely Real and Explaining vs. Explaining Away argue that you shouldn't be disappointed in any facet of life, just because it turns out to be explicable instead of inherently mysterious: for if we cannot take joy in the merely real, our lives shall be empty indeed.” 

That is irrational. So what if life is “empty”? How does something making you feel bad or empty make it untrue? That is an appeal to emotion.

“There is no ideal philosophy student of perfect emptiness to which you can unwind yourself—try to find the perfect rock to stand upon, and you'll end up as a rock.  So instead use the full force of your intelligence, your full rationality and your full morality, when you investigate the foundations of yourself.” 

Completely irrational. The absence of a philosophy student of perfect emptiness does not mean you should keep believing what you have always believed, keep doing what you have always done, and stay guided by your animal and human instincts.

“No Universally Compelling Arguments sets up a line of retreat from the desire to have everyone agree with our moral arguments. ”

Another important thing: in No Universally Compelling Arguments, Eliezer says that not all minds agree. Fine. But rationality is a tool for predicting the future and steering it into a desired region. If we all live in the same universe, following the same laws of physics, then the tools that work best are more similar to one another than the tools that don’t. So convergence is possible. We have some evidence for that and no evidence to the contrary. So rationality can possibly be singular, and the smartest minds can agree on everything. I am not saying they do, and I am not saying all minds; I am saying the SMARTEST minds. Even if the smartest minds have different utility functions, they can still fully understand each other’s point of view and validate that what the others are doing is rational, even if they pursue different goals.

Something to Protect [LW · GW]

In this writing, Eliezer advocates that there should be something more important, more valuable, than rationality. But what if rationality conflicts with that value and shows it to be worthless? It seems to me that Eliezer is advocating for holding something such that, whenever it conflicts with rationality, you discard rationality in that specific case in favor of the supposedly more valuable thing. Which is deeply irrational.

A comment by Caledonian explains the irrationality here:

“I have touched before on the idea that a rationalist must have something they value more than "rationality"

What a terrible idea... then whenever rationality comes in conflict with that thing, rationality will be discarded.

We already see lots and lots of this behavior. It's the human norm, in fact: use rationality as a tool as long as it doesn't threaten X, then discard it when it becomes incompatible with X.”

Rationality by its nature cannot be only a means towards an end.

This is completely true. Using rationality, we have discarded the existence of gods. Yet if you use rationality only as a means toward an end, you will disregard rationality when it conflicts with your religion. Therefore, rationality cannot be just a means toward an end; it must also be something that can shape what your ends should be.

Using rationality in one case and disregarding it in others is irrational.

I suggest reading more of Caledonian’s comments there, like these:

“It doesn't do any good to avoid making an implicit error by explicitly making that error instead. Certainly we need to compare our thinking to a fundamental basis, but the goal we're seeking can't be that basis. Rationality is about always checking our thinking against reality directly, and using that to evaluate not only our methods of reaching our goals but the nature of our goals themselves.”

“What he said immediately after the part you mention was: "The Art must have a purpose other than itself, or it collapses into infinite recursion"

He wasn't talking about pseudo-rationality. When he talks about "The Art", he's talking about rationality.

And he's wrong: truth points to itself.”

“So, is your point that we need a cause against which to evaluate the success of our mathematics? That perhaps this sort of feedback that, presumably, you encounter on a daily basis, is something that does not come through rationality itself, but through the very real feedback of what you have chosen to protect?”

I think this is a great comment. Why would rationality be a special case compared to any other science, any other truth-finding method? Why would rationality need something else, while the other sciences don’t?

“It says, first, that it's a psychological fact that people don't adopt rationality as a conscious value until some other, already existing value is threatened by irrationality,

And when that value is threatened by the rationality? What then?” 

The Gift We Give To Tomorrow [LW · GW]

“Love has to come into existence somehow—for if we cannot take joy in things that can come into existence, our lives will be empty indeed.  Evolution may not be a particularly pleasant way for love to evolve, but judge the end product—not the source.” 

Completely irrational. He assumes love has value, yet he presents no argument for it. Love is simply a chemical reaction, a drug. You could create a concentrated drug a hundred times better than real love and give that “love” to those who desperately need it. His argument would then logically support administering that drug. If he is against this, then why? Being against it shows that love is not the most valuable thing to him, proving that it is not.

Or imagine someone with a loving family who has a brain defect that prevents him from producing enough of the love chemicals, and he suffers because of it. If you would artificially increase that chemical to help him, then why not go one step further? And then another? And another? Where is the moral stopping line between helping people with a love-chemical deficiency and helping people wirehead themselves?

Good comment showing another problem:

“If you replace "love" in this article with "theistic spirituality" -- another aspect of the human psychology which many, if not most, humans consider deeply important and beautiful -- and likewise replace mutatis mutandis other parts of the dialog, would it not just as well argue for the propagation of religion to our descendants?”

I think that is spot on. Intuitive morality, instinctive morality, emotions, and feelings are very similar to religions. They are irrational, with no proof beyond themselves, beyond self-recursion. Yet you bend all the notions of rationality, clear thinking, and intelligence to suit those ideas.

Comment by Caledonian:

“But natural selection is cruel

No, it lacks the capacity to care about suffering and to choose whether to inflict pain. 'Cruel' is not a concept that applies.

bloody

Often. You say that as though it were a bad thing, instead of utterly neutral.

and bloody stupid

It's not stupid - it's mindless. And it's still capable of better design than you could ever aspire to, on a scale beyond your comprehension.”

More comments:

“Eliezer already said everything you're saying

You don't get it. Unless they're limited to human uploads, AIs will have the same relationship to human minds that airplanes have to birds.

Expecting them to possess human love, and treating love as some kind of precious gift instead of an evolutionary solution to an evolutionary problem, is like expecting airplanes to peck at red spots. It's not something valuable that evolution has happened to produce and we'll perpetuate throughout the universe.

The AIs will not care about the things you care about. They'll have no reason to. Stop stimulating your endorphin production and think.”

Another comment:

“I wonder, then, if Eliezer's explanation/argument could be applied just as well to the preservation and encouragement of worship of the divine,

It can be applied equally well to every bias, prejudice, preconception, and inbuilt drive humans possess. Any arbitrary tradition, any cultural viewpoint, any habitual practice.” 

It 100% can!

“None of those things will be our gift to the future. Our gift will not be the strategies we acquired through luck, stumbling onto the correct paths through the maze by trial and error. It will be the ability to understand what makes strategies correct, the ability to look ahead and foresee the territory yet to come, and to design strategies to cope with its challenges.

Rationality is deeply alien to human beings; it doesn't come naturally to us at all. The things that do come naturally, that are part of our nature? None of us can say what our descendants will or will not do, but there is no reason to believe that any particular part of human nature will be worthy in their eyes.”


 

Simulation problem

One HUGE counterargument to everything Eliezer says: there is a non-zero chance that we are living in a simulation. If we live in a simulation, that changes everything. Not finding out what that simulation is like, and instead dabbling in earthly pleasures, is the most irrational thing to do. If your goal is pleasure, then the outside of the simulation might offer FAR more pleasure than we could ever imagine.

If he says that fake simulations can be as valuable as the real universe, then drugging yourself inside the fake simulation is also good. That would be an argument in favor of wireheading: constantly drugging yourself into pleasure and happiness.

Human intelligence limit problem

The BIGGEST fallacy Eliezer makes: he is making a choice that will affect the future forever, and he assumes he is smart enough to make the right choice, when I have demonstrated that he is not that smart, that he is very much irrational.

He himself knows how irrational and bad his intelligence, human intelligence, is within the grand space of possible intelligences. It would be analogous to letting the future be decided by a drug addict. Yes, seriously: a drug addict is not that far from Eliezer. A drug addict would wish to be high forever. Eliezer wishes the same, only in a more dignified way: instead of meth, it is interaction with other people, love.

You cannot be sure you made the correct decision until you are as smart as you could possibly be. Increased intelligence WILL make better decisions. Maybe not every increased intelligence, but certainly some, and far better than we currently do. Assuming that you already know the correct decisions and the correct values is assuming that you know EVERYTHING about the universe, without any evidence, just with feelings. Very arrogant.

You wouldn’t trust the future of Earth to a monkey, or to a drug addict, or to a person from the Middle Ages. Yet compared to smarter humans we are at their level of dumbness, and compared to a superintelligence we are at the level of ants. So why would WE trust the future of the universe to OUR judgment? I am not saying we should hand the universe to a paperclip maximizer, but rather that we should use AI to increase rationality and good judgment in general, so that we make better and better decisions.


 

The Genetic Fallacy [LW · GW]

I can demonstrate that Eliezer is breaking the same principles of rationality that he himself advocates. Do what I say, not what I do.

“You should be extremely suspicious if you have many ideas suggested by a source that you now know to be untrustworthy, but by golly, it seems that all the ideas still ended up being right—the Bible being the obvious archetypal example.”

This applies perfectly to Eliezer. 

The untrustworthy source: Eliezer advocates for notions of morality and value that have been with humans from the start. The value of love, happiness, and pleasure; the idea that death and torture are bad and cruel. The values he advocates coincidentally do not conflict with human nature, such as people’s inherent dislike of seeing suffering, instilled by evolution.

We already know that humans, human culture, emotions, evolution, and animal instincts are a very untrustworthy source of truth. I hope you are smart enough to understand this.

And coincidentally, all those beliefs from that untrustworthy source turned out to be right. The beliefs ingrained into us since childhood by evolution, culture, instincts, and emotions coincidentally turned out to be rational and true, and Eliezer advocates for them. Coincidentally, rationality and deeply irrational human animal instincts turned out to be in agreement. We are so lucky! Nothing going on here! /s

Comment:

“Edit: Coming back after reading more, I might say that a true Crisis of Faith has not been reached.” 

Completely true. A true crisis of faith still awaits Eliezer. He is doing everything he advises against.

Rebelling Within Nature [LW · GW]

“Judge emotions as emotions, not as evolutionary relics.  When you say, "motherly love outcompeted its alternative alleles because it protected children that could carry the allele for motherly love", this is only a cause, not a sum of all moral arguments.  The evolutionary psychology may grant you helpful insight into the pattern and process of motherly love, but it neither justifies the emotion as natural, nor convicts it as coming from an unworthy source.  You don't make the Genetic Accusation either way.  You just, y'know, think about motherly love, and ask yourself if it seems like a good thing or not; considering its effects, not its source.” 

I am just baffled.

His argument is: sure, love is a product of evolution, and there is no rational basis for it beyond itself, but it makes me feel good, therefore we should value it. Love being a product of evolution doesn’t make it meaningless; it being irrational doesn’t mean we shouldn’t do it.

You do see the problems with this, right? 

Just because something feels right or pleasant doesn’t mean it is meaningful or true. Truth has no obligation to be comforting.

This kind of logic can be used to justify things that would seem terrible to Eliezer too, such as rape, drugs, and wireheading: it feels good, therefore it is correct and has value.


 

I hope I have demonstrated why Eliezer is deeply wrong now, and why his original position on the meaning of life and on value, from his writing “FAQ about Meaning of life”, is the correct vision. A position he has, regrettably, turned away from.

Eliezer Yudkowsky - The Meaning of Life 

FAQ about Meaning of life 

The original position Eliezer held, which I think is correct, is this:

We don’t know what matters, what has value. A greater intelligence has a better chance of figuring out what has value. Therefore, we need to improve our intelligence and create superintelligence for that purpose.

I think that is the correct answer. It is an honest admission that we don’t know what has value, and that smarter minds are better equipped to tackle this problem.

You can send your feedback to me. We can chat about it.

My email: zhumaotebay@gmail.com

Discord: percybay

Twitter: Radlib4


 

1 comment


comment by Gordon Seidoh Worley (gworley) · 2023-11-16T17:22:04.147Z · LW(p) · GW(p)

None of your arguments land, and I think the reason you're getting downvoted is that they are mere outlines of arguments that don't actually make their case by getting into the details. You seem to hope we'll intuit the rest of your arguments, but you've posted this in a place that's maximally unlikely to share an intuition that would lead you to think that Eliezer is deeply irrational rather than merely mistaken on some technical points.

I think the average LessWrong reader would love to know if Eliezer is wrong about something he wrote in the sequences, but that requires both that he actually be wrong and that you clearly argue your case that he's wrong. Otherwise it's just noise.