Comment by Thomas Eisen (thomas-eisen) on Zombies Redacted · 2021-03-18T17:39:08.902Z · LW · GW

You could use the "zombie argument" to "prove" that any kind of machine is more than the sum of its parts.

For example, imagine a "zombie car" which is the same on an atom-by-atom basis as a normal car, except it doesn't drive.

In this context, the absurdity of the zombie argument should be more obvious.

EDIT: OK, it isn't quite the same kind of argument, since the car wouldn't behave exactly the same, but it's pretty similar.

EDIT2: Another example to illustrate the absurdity of the zombie argument:
You could imagine an alternative world that's exactly the same as ours, except that humans (who are also exactly the same as in our world) don't perceive light with a wavelength of 700 nanometers as red. This "proves" that there is more to redness than the wavelength of light.

Comment by Thomas Eisen (thomas-eisen) on Antiantinatalism · 2021-01-05T01:11:04.369Z · LW · GW

"Regarding the first question: evolution hasn’t made great pleasure as accessible to us as it has made pain. Fitness advantages from things like a good meal accumulate slowly but a single injury can drop one’s fitness to zero, so the pain of an injury is felt stronger than the joy of pizza. But even pizza, though quite an achievement, is far from the greatest pleasure imaginable.

Humankind has only recently begun exploring the landscape of bliss, compared to our long evolutionary history of pain. If you can’t imagine a pleasure great enough to make the trade-off worthwhile, consider that you may be falling prey to the availability heuristic. Pain is a lot more plentiful and salient, but it’s not a lot more important. The fact that pleasure is rare should only make it more valuable when offsetting pain, and an hour is a lot longer than 5 minutes."

What makes you think there's an equilibrium where the greatest pleasure imaginable is exactly as good as the greatest suffering imaginable is bad (at least, that's what I think you think)? I think there's an asymmetry insofar as truly great suffering is hard to outweigh with great happiness. However, since no finite suffering can be infinitely bad, there has to be some amount of pleasure that outweighs 5 minutes of the greatest suffering imaginable, but I don't think 1 hour of the greatest pleasure is enough. Something like 1,000,000 years may be enough.

EDIT: 1,000,000 years might be over the top. Assuming 100 years of the greatest pleasure outweigh 5 seconds of the greatest suffering, 6,000 years of the greatest pleasure should be enough.

"Taking seriously the position that life is not worth living should lead one to a philosophy of extinctionism – the stance that it would be pretty great if all humans died in their sleep tonight."

If you subscribe to timeless decision theory, you may still be against extinctionism even if you think life is net-negative, because if people expected to die painlessly in their sleep, they would be absolutely terrified, and this would be bad.

Comment by Thomas Eisen (thomas-eisen) on Consequentialism Need Not Be Nearsighted · 2020-12-27T15:12:26.380Z · LW · GW

If I understand correctly, you may also reach your position without using a non-causal decision theory if you mix utilitarianism with the deontological constraint of being honest (or at least meta-honest) about the moral decisions you would make.

If people asked you whether you would kill/did kill a patient, and you couldn't confidently say "No" (because of the deontological constraint of (meta-)honesty), that would be pretty bad, so you must not kill the patient.

EDIT: Honesty must also mean keeping promises (to a reasonable degree -- it is always possible that something unexpected happens which you didn't even consider as an improbable possibility when making the promise) in order to avoid Parfit's Hitchhiker-like problems.

Comment by Thomas Eisen (thomas-eisen) on Newcomb's Problem and Regret of Rationality · 2020-11-13T04:17:18.233Z · LW · GW

slightly modified version:

Instead of choosing at once whether you want to take one box or both boxes, you first take box 1 (and see whether it contains $0 or $1,000,000), and then you decide whether you also want to take box 2.
Assume that you only care about the money, you don't care about doing the opposite of what Omega predicted.


Comment by Thomas Eisen (thomas-eisen) on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2020-11-13T03:54:06.740Z · LW · GW

slightly related:

Suppose Omega forces you to choose a number 0 < p <= 1, and then, with probability p, you get tortured for 1/p² seconds.
Assume for any T, being tortured for 2T seconds is exactly twice as bad as being tortured for T seconds.
Also assume that your memory gets erased afterwards (this is to make sure there won't be additional suffering from something like PTSD).

The expected number of seconds of torture is p * 1/p² = 1/p, so, in terms of expected value, you should choose p=1 and be tortured for 1 second; the smaller the p you choose, the higher the expected amount of torture.

Would you actually choose p=1 to minimize the expected amount of torture, or would you rather choose a very low p (like 1/3^^^^3)?
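To make the trade-off explicit, here is a minimal sketch of the expected-value calculation from the thought experiment (the function name is my own, not from the post):

```python
def expected_torture_seconds(p: float) -> float:
    # With probability p you are tortured for 1/p**2 seconds,
    # so the expectation is p * (1/p**2) = 1/p seconds.
    return p * (1 / p**2)

for p in [1.0, 0.5, 0.01, 1e-9]:
    print(f"p = {p:g}: expected {expected_torture_seconds(p):g} seconds of torture")
```

As p shrinks, the expected duration 1/p grows without bound, even though the torture itself becomes ever less likely to happen at all.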

Comment by Thomas Eisen (thomas-eisen) on You Can Face Reality · 2020-10-13T10:09:55.439Z · LW · GW

I think this could be considered one of the very basics of rational thinking. If someone asked what rationality/being rational means and wanted a short answer, this Litany is a pretty good summary.

Comment by Thomas Eisen (thomas-eisen) on The Crackpot Offer · 2020-06-17T22:00:23.423Z · LW · GW

I once thought I could prove that the set of all natural numbers is as large as its power set. However, I was smart enough to acknowledge my limitations (what's more likely: that I made a mistake in my thinking I haven't yet noticed, or that a theorem pretty much any professional mathematician accepts as true is actually false?), so I actively searched for errors in my thinking. Eventually, I noticed that my method only works for finite subsets (the set of all natural numbers is, indeed, as large as the set of all FINITE subsets), but not for infinite subsets.

Eliezer's method also works for all finite subsets, but not for infinite subsets.
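For illustration, the countability of the finite subsets can be shown with an explicit bijection via binary encoding (a sketch of one standard construction, not necessarily the method referred to above):

```python
def subset_to_nat(s: set) -> int:
    # Encode a finite set of naturals as a single natural:
    # bit i of the result is 1 iff i is in the set.
    return sum(2**i for i in s)

def nat_to_subset(n: int) -> set:
    # Inverse: read off the positions of the 1-bits.
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

assert nat_to_subset(subset_to_nat({0, 2, 5})) == {0, 2, 5}
```

This pairs every finite subset with a unique natural number, which is exactly where the construction breaks for infinite subsets: an infinite set would need infinitely many 1-bits, and no natural number has those.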

Comment by Thomas Eisen (thomas-eisen) on The Least Convenient Possible World · 2020-05-14T19:42:18.649Z · LW · GW

My answers:

1. No, because their belief doesn't make any sense. It even contains logical contradictions, which makes it "super impossible", meaning there's no possible world where it could be true (the omnipotence paradox shows that omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly omnibenevolent wouldn't allow suffering, which undoubtedly exists; "God wants to allow free will" isn't a valid defence, since there's a lot of suffering that isn't caused by other humans, like illness and natural catastrophes). (Note: I'm adding "nearly" to avoid paradoxes like the omnipotence paradox.)

2. Belief isn't a choice; for example, you can't "choose" to believe that the continent Australia doesn't actually exist. Therefore, I wouldn't be able to hold religious beliefs even if I acknowledged that this would bring greater happiness without negative side effects.

However, if we make the hypothetical world even less convenient by adding that I actually would be able to effectively self-deceive, and that there would be absolutely no negative side effects, then yes, I would choose to believe.

3. I'm already highly sympathetic towards the "Effective altruism" movement and donate a lot of money to their causes. The reason I'm not donating literally everything I don't need for survival is that I'm not morally perfect; I admit that.

(EDIT just to correct spelling)

Comment by Thomas Eisen (thomas-eisen) on What Would You Do Without Morality? · 2020-03-10T22:34:53.861Z · LW · GW

There would actually be several changes:

I would stop being vegan.

I would stop donating money (note: I currently donate quite a lot of money for projects of "Effective altruism").

I would stop caring about Fairtrade.

I would stop feeling guilty about anything I did, and stop making any moral considerations about my future behaviour.

If others are overly friendly, I would fully abuse this to my advantage.

I might insult or punch strangers "for fun" if I'm pretty sure I will never see them again (and they don't seem like the kind of person who seeks retribution).

I would become less willing to help others.

I would care very little about politics, and might not go voting.

I wouldn't be angry at anyone unless their actions influence me personally (note: if they hurt a person with whom I have a relationship, this would influence me; if they hurt a stranger, it wouldn't).

And there would probably be quite a few more changes I haven't thought of yet.

I would still continue my current hobbies, and do things if I have a "feeling" that I "want" to do them. These "feelings" would only be stopped by fear of personal costs, not by moral considerations (and not making moral considerations would indeed make a difference; see above).

Comment by Thomas Eisen (thomas-eisen) on Absence of Evidence Is Evidence of Absence · 2019-11-03T23:31:30.350Z · LW · GW

More accurately, "absence of evidence you would expect to see if the statement were true" is evidence of absence.

If there's no evidence you'd expect to see even if the statement were true, then absence of evidence is not evidence of absence.

For example, if I tell you I've eaten cornflakes for breakfast, no matter whether or not the statement is true, you won't have any evidence in either direction (except for the statement itself) unless you're willing to investigate the matter (like, asking my roommates). In this case, absence of evidence is not evidence of absence.

Now, suppose we meet in person and I tell you I've eaten garlic just an hour before. You'd expect evidence if that statement were true (bad breath); in this case, absence of evidence is evidence of absence.
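The garlic and cornflakes examples can be put in Bayesian terms. A minimal sketch with made-up numbers (the function and all probabilities below are my own illustration, not from the post):

```python
def posterior_given_no_evidence(prior, p_ev_if_true, p_ev_if_false):
    # Bayes' rule, conditioned on NOT observing the evidence.
    p_no_ev_if_true = 1 - p_ev_if_true
    p_no_ev_if_false = 1 - p_ev_if_false
    num = prior * p_no_ev_if_true
    return num / (num + (1 - prior) * p_no_ev_if_false)

# Garlic: bad breath is strongly expected if the claim is true,
# so its absence pushes the probability well below the prior.
print(posterior_given_no_evidence(0.5, 0.9, 0.05))

# Cornflakes: there is no observable evidence either way,
# so "no evidence" leaves the prior unchanged at 0.5.
print(posterior_given_no_evidence(0.5, 0.0, 0.0))
```

The update is large exactly in proportion to how strongly the evidence was expected, which is the point of the distinction drawn above.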

Comment by Thomas Eisen (thomas-eisen) on A New Day · 2019-10-27T00:52:29.410Z · LW · GW

I actually noticed this long before I read the post. For me, the thought "I'm having many old thoughts" is itself an old thought now.

The same is true for the thought "the thought 'I'm having many old thoughts' is itself an old thought now", and so on.

Comment by Thomas Eisen (thomas-eisen) on Drawing Two Aces · 2019-10-27T00:04:19.163Z · LW · GW

I see another way to show that 1/5 is the correct solution:

P(2 Aces | Ace of Spades revealed) = P(2 Aces AND Ace of Spades revealed) / P(Ace of Spades revealed)

(note: for the further calculations, I'm assuming that there are 5 possible hands and the probability of each hand is 1/5, since it has already been revealed that there is at least one Ace. The end result would be the same if you would also set aside a random card in case you have no Ace, but the probabilities in the intermediate steps would have to change accordingly)

P(2 Aces AND Ace of Spades revealed) = P(2 Aces) * 1/2 = 1/5 * 1/2 = 1/10

P(Ace of Spades revealed) = 2/5 * 1 + 1/5 * 1/2 = 5/10

Therefore, P(2 Aces | Ace of Spades revealed) = (1/10) / (5/10) = 1/5.
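These probabilities can be checked by exact enumeration. This is a sketch assuming the standard setup of the puzzle: the five ace-containing hands come from two-card hands over a deck with two aces and two non-aces, and a random ace from the hand is revealed (card names below are placeholders):

```python
from fractions import Fraction
from itertools import combinations

# Two aces (AS, AH) and two non-aces (X, Y); two-card hands with
# at least one ace give exactly the 5 equally likely hands.
deck = ["AS", "AH", "X", "Y"]
hands = [h for h in combinations(deck, 2) if "AS" in h or "AH" in h]

p_as_revealed = Fraction(0)
p_two_aces_and_as = Fraction(0)

for hand in hands:
    p_hand = Fraction(1, len(hands))  # each of the 5 hands has probability 1/5
    aces = [c for c in hand if c in ("AS", "AH")]
    for ace in aces:  # a uniformly random ace from the hand is revealed
        p_reveal = p_hand * Fraction(1, len(aces))
        if ace == "AS":
            p_as_revealed += p_reveal
            if len(aces) == 2:
                p_two_aces_and_as += p_reveal

print(p_two_aces_and_as / p_as_revealed)  # 1/5
```

Using exact fractions instead of floats makes the agreement with the hand calculation (1/10 divided by 5/10) exact rather than approximate.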


Comment by Thomas Eisen (thomas-eisen) on Infinite Certainty · 2019-10-10T23:52:16.071Z · LW · GW

Assigning Bayes probabilities < 1 to mathematical statements (that have been definitely proven) seems absurd and logically contradictory, because you need mathematics to even assign probabilities.

If you assign any Bayes probability to the statement that Bayes probabilities even work, you already assume that they do work.

And, arguably, 2+2=4 is much simpler than the concept of Bayes probability (to be fair, the same might not be true for my most complex example, that Pi is irrational).

Comment by Thomas Eisen (thomas-eisen) on Your Strength as a Rationalist · 2019-10-10T10:16:21.127Z · LW · GW

This article actually made me ask "Wait, is this even true?" when I read an article with weird claims; I then research whether the source is trustworthy, and sometimes it turns out that it isn't.

Comment by Thomas Eisen (thomas-eisen) on Infinite Certainty · 2019-10-08T00:53:19.013Z · LW · GW

I agree that you can never be "infinitely certain" about the way the physical world is (because there's always a very tiny possibility that things might suddenly change, or everything is just a simulation, or a dream, or [...]), but you should assign probability 1 to mathematical statements for which there isn't just evidence, but an actual, solid proof.

Suppose you have the choice between the following options: (A) You get a lottery ticket with a 1-Epsilon chance of winning. (B) You win if 2+2=4, 53 is a prime number, and Pi is an irrational number.

Is there any Epsilon > 0 for which you would choose option A? What if something really bad happens if you lose (like all of humanity being tortured for [insert large number] years)?

I would choose option B for any Epsilon > 0, which means assigning Bayes probability 1 to option B.

Comment by Thomas Eisen (thomas-eisen) on Entangled Truths, Contagious Lies · 2019-10-06T20:12:54.434Z · LW · GW

I don't understand the meaning of the sentence "And since inferences can propagate backward and forward through causal networks, epistemic entanglements can easily cross the borders of light cones."