ektimo's Shortform
post by ektimo · 2024-06-21T02:53:21.846Z · LW · GW · 11 comments
Comments sorted by top scores.
comment by ektimo · 2024-07-29T16:40:00.650Z · LW(p) · GW(p)
Should you cooperate with your almost identical twin in the prisoner's dilemma?
The question isn't how physically similar they are, it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they'll solve it in under 20? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin on how much I favor both of us choosing cooperate over any of the other outcomes. Even if my almost identical twin doesn't favor it quite as much, I can predict they will still choose cooperate given how much I favor it (and, even more so, that they will also approach the problem this same way; if I think they'll think "ha, this sounds like somebody I can take advantage of" or "reason dictates I must defect", then I wouldn't cooperate with them).
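A minimal sketch of one way to make that "margin" concrete (the payoff numbers and the single probability-of-mirroring parameter are illustrative assumptions, not anything canonical): treat my prediction of the twin as a probability p that they make the same choice I do, and cooperate when the expected payoff favors it.

```python
# Sketch: how correlated does my twin's decision need to be with mine
# before cooperating beats defecting? Payoffs are illustrative.
T, R, P, S = 5, 3, 1, 0  # temptation, mutual cooperation, mutual defection, sucker

def expected_payoffs(p_mirror: float) -> tuple[float, float]:
    """Expected payoff of (cooperate, defect), if the twin mirrors my
    choice with probability p_mirror and does the opposite otherwise."""
    ev_cooperate = p_mirror * R + (1 - p_mirror) * S
    ev_defect = p_mirror * P + (1 - p_mirror) * T
    return ev_cooperate, ev_defect

# Cooperation wins whenever p_mirror exceeds this threshold:
threshold = (T - S) / ((T - S) + (R - P))
print(f"cooperate if P(they mirror me) > {threshold:.2f}")  # ~0.71 here

for p in (0.5, 0.8, 0.95):
    c, d = expected_payoffs(p)
    print(f"p={p}: EV(cooperate)={c:.2f}, EV(defect)={d:.2f}")
```

On this reading, the "margin" is how far my estimate of p sits above that threshold: the bigger the gap, the more room there is for my almost identical twin to reason slightly differently and still land on cooperate.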
Replies from: Dagon, MichaelDickens
↑ comment by Dagon · 2024-07-29T20:48:20.474Z · LW(p) · GW(p)
The question isn't how physically similar they are, it's how similar their logical thinking is.
A lot of discussion around here assumes that physical similarity (in terms of brain structure and weights) implies logical thinking similarity. Mostly I see people talking about "copies" or "clones", rather than "human twins". For the prisoner's dilemma, the question is "will they make the same decision I will", and for twins raised together, the answer seems more likely to be yes than for strangers.
Note that your examples of thinking are PROBABLY symmetrical - if you don't think (or don't act on) "ha! this is somebody I can take advantage of", they are less likely to as well. In a perfect copy, you CANNOT decide differently, so you cooperate, knowing they will too. In an imperfect copy, you have to make estimates based on what you know of them and what the payout matrix is.
Replies from: ektimo
↑ comment by ektimo · 2024-08-06T14:21:28.327Z · LW(p) · GW(p)
Thanks for your reply! Yes, I meant identical as in atoms not as in "human twin". I agree it would also depend on what the payout matrix is. My margin would also be increased by the evidentialist wager.
↑ comment by MichaelDickens · 2024-07-29T22:34:04.391Z · LW(p) · GW(p)
There's an argument for cooperating with any agent in a class of quasi-rational actors, although I don't know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate.
(This reminds me of Kant's argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don't think his argument quite works, but I believe there's a similar argument for cooperating on the prisoner's dilemma that does work.)
comment by ektimo · 2024-10-05T14:52:51.155Z · LW(p) · GW(p)
We can be virtually certain that 2+2=4 based on priors. This is because it's true in the vast multitude of universes. In fact, it's true in all the universes except the one universe that contains all the other universes. And I'm pretty sure that one doesn't exist anyway.
Replies from: Dagon
↑ comment by Dagon · 2024-10-05T16:00:08.584Z · LW(p) · GW(p)
We can be virtually certain that 2+2=4 based on priors.
I don't understand this model. For me, 2+2=4 is an abstract analytic concept that is outside of Bayesian probability. For others, it may be "just" a probability, about which they might be virtually certain, but it won't be on priors, it'll be on mountains of evidence and literally zero counterevidence (presumably because every experience that contradicts it gets re-framed as having a different cause).
There's no way to update on evidence outside of your light cone, let alone on theoretical other universes or containing universes. Because there's no way to GET evidence from them.
Replies from: ektimo
↑ comment by ektimo · 2024-10-07T16:59:43.960Z · LW(p) · GW(p)
I meant this as a joke: if there's one universe that contains all the other universes (since it isn't limited by logic), and that one doesn't exist, then that would mean I don't exist either and wouldn't have been able to post this. (Unless I only sort-of exist, in which case I'm only sort-of joking.)
comment by ektimo · 2024-09-22T23:14:49.405Z · LW(p) · GW(p)
How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?
(There could be tweaks: if people avoid spending on likely winners, the system could redistribute the margin of victory; if they avoid spending on likely losers, it could redistribute tokens spent on losing items; etc. But I'm not sure how much that would happen. The more interesting question may be how it influences everyone's sense of what they are doing.)
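A minimal sketch of how such a tally might be scored (the item names, the yes/no framing, and the data are hypothetical; it just assumes each voter splits a 1000-token budget across items and each item goes to whichever side collects more tokens):

```python
# Sketch of an "Influence Token" tally: each voter allocates up to 1000
# tokens across ballot items, choosing a side per item; each item goes
# to the side with the larger token total. Data is made up.
BUDGET = 1000

ballots = [
    {"parks": ("yes", 700), "library": ("no", 300)},
    {"parks": ("no", 100), "library": ("yes", 900)},
    {"parks": ("yes", 500), "library": ("yes", 500)},
]

def tally(ballots):
    totals = {}  # item -> {"yes": tokens, "no": tokens}
    for ballot in ballots:
        spent = sum(tokens for _, tokens in ballot.values())
        assert spent <= BUDGET, "a ballot can't spend more than its budget"
        for item, (side, tokens) in ballot.items():
            totals.setdefault(item, {"yes": 0, "no": 0})[side] += tokens
    return {item: max(sides, key=sides.get) for item, sides in totals.items()}

print(tally(ballots))  # {'parks': 'yes', 'library': 'yes'}
```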
Replies from: nathan-helm-burger
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-09-23T02:48:00.309Z · LW(p) · GW(p)
So like... Quadratic voting? https://en.m.wikipedia.org/wiki/Quadratic_voting
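For comparison, a minimal sketch of the quadratic-voting cost rule under the same 1000-token budget (numbers illustrative): casting n votes on a single item costs n² tokens, so concentrating influence gets progressively more expensive, whereas the linear token scheme above prices every marginal token the same.

```python
# Quadratic voting sketch: n votes on one item cost n**2 tokens, so a
# 1000-token budget buys at most 31 votes on a single item (31**2 = 961).
BUDGET = 1000

def max_votes(budget: int) -> int:
    """Largest n with n**2 <= budget."""
    n = 0
    while (n + 1) ** 2 <= budget:
        n += 1
    return n

print(max_votes(BUDGET))       # 31 votes if everything goes to one item
print(max_votes(BUDGET // 2))  # 22 votes each if split across two items
```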
comment by ektimo · 2024-06-21T02:53:21.994Z · LW(p) · GW(p)
Imagine you have a button, and if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant (which I could try to define, and would be less than the number of states, but let's handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs of the components, all the combinations of memory bits, and one "stream of consciousness" quantum).
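For scale, a quick back-of-the-envelope on "2 to the sextillion", reading a sextillion as 10^21 (my assumption; the linked estimate may use a different figure): the number itself is far too large to write out, but its digit count is easy to compute.

```python
import math

# "2 to the sextillion" states, reading a sextillion as 10**21.
# 2**(10**21) is far too large to materialize, but the number of
# decimal digits it would have is just 10**21 * log10(2).
bits = 10**21
digits = bits * math.log10(2)
print(f"2**(10**21) has about {digits:.3e} decimal digits")  # ~3.010e+20
```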
If you could make a change, would you prioritize:
1. Pruning the instances to reduce negative experiences
2. Being able to press the button lots of times
3. Making the experiences more real (For example, an experience could be "one instant of reminiscing over my memories of building a Dyson Sphere" even though nothing like that ever happened. One way to make it more real would be to create the set of all the universe starting conditions needed to generate the set of all unique experiences; each universe would create duplicate experiences among its various inhabitants, but it would contain at least the one unique experience it is checking off, so the person reminiscing over building a Dyson Sphere actually did build it. Or at least cover the experiences that can be generated in this fashion.)
4. This is horrible, stop the train, I want to get off.
(I'd probably go with 4, but I'm curious if people have different opinions.)
Replies from: Dagon