Comments

Comment by michaelstjules on Bet On Biden · 2020-11-02T20:26:33.343Z · LW · GW

From the blog of Andrew Gelman, one of the authors of the Economist's model:

As noted above, I think that with wider central intervals and wider tails we could lower that Biden win probability from 96% to 90% or maybe 80%. But, given what the polls say now, to get it much lower than that you’d need a directional shift, something asymmetrical, whether it comes from the possibility of vote suppression, or turnout, or problems with survey responses, or differential nonresponse not captured by partisanship adjustment, or something else I’m forgetting right now. But I don’t think it would be enough just to say that anything can happen. “Anything can happen” starting with Biden at 54% will lead to a high Biden win probability no matter how you slice it. For example, suppose you start with a Biden forecast at 54% and give a standard error of 3 percentage points, which has gotta be too much—it yields a 95% interval of [0.48, 0.60] for his two-party vote share, and nobody thinks he’s gonna get 48% or 60%. Anyway, start with that and Biden still has a 78% chance of winning (or 75% using the t_3 distribution). To get that probability down below 80%, you’re gonna need to shift the mean estimate, which implies some directional information.
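To sanity-check those last numbers (a rough sketch; the ~51.7% threshold is my assumption for the two-party vote share Biden needs for even odds in the electoral college, chosen because it reproduces Gelman's 78% and 75%):

```python
from scipy import stats

# Back-of-the-envelope check of Gelman's figures. The threshold is an
# assumed two-party vote share needed to win the electoral college.
mean, scale, threshold = 54.0, 3.0, 51.7

p_normal = stats.norm.sf(threshold, loc=mean, scale=scale)  # ~0.78
p_t3 = stats.t.sf((threshold - mean) / scale, df=3)         # ~0.75

print(f"Normal win probability: {p_normal:.0%}")
print(f"t_3 win probability:    {p_t3:.0%}")
```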

Comment by michaelstjules on Bet On Biden · 2020-10-27T02:10:07.236Z · LW · GW

Older discussion of the Economist's model and betting markets by Gelman:

https://statmodeling.stat.columbia.edu/2020/06/19/forecast-betting-odds/

Comment by michaelstjules on Bet On Biden · 2020-10-25T04:08:24.339Z · LW · GW

Do you think FiveThirtyEight and the Economist haven't appropriately accounted for these considerations in their models? I don't think the discrepancy with the markets is so large. Where did ~65% come from?

Comment by michaelstjules on Bet On Biden · 2020-10-24T20:25:00.575Z · LW · GW

Are there any important considerations in the opposite direction?

Comment by michaelstjules on Bet On Biden · 2020-10-24T17:05:30.697Z · LW · GW

Andrew Gelman, a super legit (imo) statistician and one of the authors of The Economist's model, criticizes 538's model for getting correlations wrong:

https://statmodeling.stat.columbia.edu/2020/10/24/reverse-engineering-the-problematic-tail-behavior-of-the-fivethirtyeight-presidential-election-forecast/

https://www.reddit.com/r/fivethirtyeight/comments/jh9bmu/andrew_gelman_reverseengineering_the_problematic/

The Economist's predictions are even more favourable to Biden, though:

https://projects.economist.com/us-2020-forecast/president

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-11T17:15:04.493Z · LW · GW

Nonhuman animals and children have limited agency and irrational, poorly informed preferences. We should use behaviour as an indication of preferences, but not only behaviour, and especially not only behaviour when faced with the given situation (since other behaviour is also relevant). We should try to put ourselves in their shoes and reason about what they would want were they more rational and better informed. The more informed and rational they are, the more we can just defer to their choices.

If I give that same "agentic being" treatment to animals, then the suicide argument kind-of-holds. If I don't give that same "agentic being" treatment to animals, then what is there to say suffering as a concept even applies to them? After all, a mycelium or an ecosystem is also a very complex "reasoning" machine, but I don't feel any moral guilt when plucking a leaf or a mushroom.

I think this is a good discussion of evidence for the capacity to suffer in several large taxa of animals.

I also think that lacking agency is not a defeater for suffering. You can imagine that in some of our worst moments of suffering we lose agency (e.g. in a state of panic), or that we could artificially disrupt someone's agency (e.g. through transcranial magnetic stimulation, drugs or brain damage) without taking the unpleasantness of an experience away. Just conceptually, agency isn't required for hedonistic experience.

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-09T23:39:21.286Z · LW · GW

Supplements are part of a person's diet. Vegans who don't take B12 are being stupid.

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-09T23:33:40.896Z · LW · GW
Until then, the sanest choice would seem to be that of focusing our suffering-diminishing potential onto the beings that can most certainly suffer so much as to make their condition seem worse than death.

Even if you thought factory farmed animals might plausibly have good lives in the aggregate (like humans, and perhaps even like many or most humans who do end up committing suicide), many do not have good deaths, and working on that would still be valuable: negligent or intentional live boiling[1][2][3][4], CO2 slaughter without stunning, on-farm and transportation mortality, and barn fires. I don't think it's very plausible that these conditions aren't worse than death.

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-08T18:37:05.051Z · LW · GW

I think understanding of death is largely experiential (witnessing death) and conceptual (passed on through language), and an intentional suicide attempt would further require understanding what would kill you. Maybe people could infer some things based on their experience with sleep, though.

Here's an article on the development of the understanding of death in children; it seems they tend to start to understand at around 3 years old. I would expect understanding of suicide to generally come later still. Do you think 2-3 year olds can have lives worse than death, despite not committing suicide or being able to judge that their lives are or will be worse than death? I'd expect there to be periods for most children when they can speak and could be taught to understand death and suicide, but, since they haven't been taught yet, they don't understand.

An individual's experience of torture could be similar to ours, and we could deprive them of all pleasure, too, so on a hedonistic account, it wouldn't at all be plausible that their life is good, and yet they might not understand death and suicide enough to attempt suicide. If we think their hedonistic experiences are sufficiently similar to ours, even though they don't have well-informed preferences, we can make judgements in their place.

On a preferential account of value, if an individual doesn't recognize or understand an option and so fails to choose it, we can't conclude that that option is worse for them. This is an everyday issue even for typical adult humans, given our very limited understanding, but it's worse the more ill-informed the preferences are, especially in children and nonhuman animals. If we generally took an individual's actions as indicating what's best for them, we shouldn't stop children from sticking forks into electrical outlets or touching hot stovetops.

1. We think that beyond a certain point of brain development abortion is acceptable since the kid is not in any way "human". So why not start your argument there? And if you do, well, you reach a very tricky gray line

I don't start my argument there precisely because it's a grey area for consciousness. I chose examples I'd expect you to accept as conscious and capable of suffering (although it seems you have doubts), and that would generally not commit suicide even if tortured.

People don't have memories at ages below 1 or 2, and certainly no memories indicative of conscious experience.

I'm guessing you mean episodic memories? Children that young (and farmed animals) certainly remember things like words, individuals and how to do things. There's also research on episodic-like memory in many different species of nonhuman animals, not just the obviously smart ones (I haven't looked into similar research for young children). Dreams also seem relevant.

https://en.wikipedia.org/wiki/Episodic-like_memory

https://journals.sagepub.com/doi/full/10.1177/147470491301100307

I don't see how this undermines the point, unless you want to argue the "fear" of death can be so powerful that one can lead what is essentially a negative-value life because of an instinct not to die (similarly to, say, how one could feel pain from a certain muscle twitch yet be unable to stop it until it becomes unbearable).
I don't necessarily disagree with this perspective, but from this angle you reach an antinatalist utilitarian view of "kill every single form of potentially conscious life in a painless way as quickly as possible, and most humans for good measure, and either have a planet with no life, or with very few forms of conscious life that have nothing to cause them harm".

It's also possible for an individual to be so focused on the present that any suicide attempt would feel worse than what they're otherwise feeling at that moment (which could still be overall bad), and this would prevent them from doing it. This can be the case even if it would prevent more intense suffering later. Again, however, I think farmed animals just usually don't understand suicide properly as an option.

My point is that suicide is not a good objective measure on its own. I think a suicide attempt is fairly strong evidence of misery, but the absence of suicide attempts is really not very good evidence for a life better than death, because of the obstacles (understanding, fear, access to suicide methods, guilt, etc.).

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-08T06:53:34.690Z · LW · GW

Have you looked at suicide rates by country? A lot of these don't accord with my intuitions about quality of life, either. Somalia has the 100th highest rate in the world, after many Western countries. Spain has a higher rate than Saudi Arabia (where suicide (attempt) is illegal). There are important cultural forces (and laws) around suicide, especially religious ones. Then again, maybe the numbers are being misreported in some countries.

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-08T04:11:26.739Z · LW · GW

Besides observations of behaviour, there is also neurological evidence (e.g. do they have structures functionally similar to those important for or responsible for emotions in humans, and are those structures important for or responsible for similar behaviour in these animals? Are they actually evolutionarily conserved structures?), as well as evolutionary/adaptive arguments. These ultimately tie back to behaviour in some way, though sometimes specifically to human behaviour rather than the animals' behaviour; both together could strengthen the argument.

Comment by michaelstjules on The ethics of breeding to kill · 2020-09-08T03:39:37.789Z · LW · GW

I think suicide is a very poor measure of welfare for nonhuman animals, because they typically don't understand death or how they could kill themselves, so it's not an option they understand. I think you could plausibly torture farmed animals almost nonstop and they would generally not commit suicide. I'd expect the same to apply to typically developing toddlers, and it's plausible to me that you could in principle shelter normally developing humans from understanding of death and suicide into adulthood, and torture them, and they too would not attempt suicide.

We (humans and other animals) also have instincts (especially fear) that deter us from committing suicide or harming ourselves regardless of our quality of life, and nonhuman animals rely on instinct more, so I'd expect suicide rates to underestimate the prevalence of bad lives.

Comment by michaelstjules on Forecasting Thread: AI Timelines · 2020-09-05T04:31:10.711Z · LW · GW

Wouldn't plotting the cumulative distribution functions instead of the probability density functions be easier to interpret? With the CDF, you can just take differences to get probabilities for intervals, but you can't read interval probabilities off the graph of the PDF. The max and argmax of the PDF, which I think people will be drawn to, can also be misleading.
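For example (a sketch with a made-up lognormal forecast, not anyone's actual distribution):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical forecast: a lognormal over years-until-AGI, measured from 2020.
dist = stats.lognorm(s=0.8, scale=25)  # made-up parameters

years = np.linspace(2021, 2120, 500)
plt.plot(years, dist.cdf(years - 2020))
plt.xlabel("Year")
plt.ylabel("P(AGI by this year)")
plt.show()

# Interval probabilities are just differences of the CDF:
p_2030s = dist.cdf(2040 - 2020) - dist.cdf(2030 - 2020)
print(f"P(AGI arrives in the 2030s) = {p_2030s:.2f}")
```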

Comment by michaelstjules on SDM's Shortform · 2020-08-28T04:30:10.216Z · LW · GW

I'm here from your comment on Lukas' post on the EA Forum. I haven't been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.

Are there good independent arguments against the absurd conclusion? It's not obvious to me that it's bad. Its rejection is also so close to separability/additivity that for someone who's not sold on separability/additivity, an intuitive response is "Well ya, of course, so what?". It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don't.

So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).

By deny, do you mean reject? Doesn't negative utilitarianism work? Or do you mean incorrectly denying that the absurd conclusion follows from diminishing returns to happiness vs suffering?

Also, for what it's worth, my view is that symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:

Comment by michaelstjules on Causality and Moral Responsibility · 2020-05-26T20:07:54.283Z · LW · GW

I don't think you've established that Lenin was a jerk, in the sense of moral responsibility.

I think people usually have little control (and little illusion of freedom) over what options, consequences and (moral) reasons they consider, as well as what reasons and emotions they find compelling, and how much. Therefore, they can't be blamed for an error in (moral) judgement unless they were under the illusion they could have come to a different judgement. It seems you've only established the possibility that someone is morally culpable for a wrong act that they themselves believed was wrong before acting. How often is that actually the case, even for the acts you find repugnant?

Lenin might have thought he was doing the right thing. Psychopaths may not adequately consider the consequences of their actions or recognize much strength in moral reasons.

There are no universally compelling arguments, after all.

Comment by michaelstjules on Identity Isn't In Specific Atoms · 2020-05-26T09:47:33.418Z · LW · GW

I think this gets at psychological connectedness/continuity. There's a large gap between scanning and the creation of the copy, but actually, maybe there's a gap between your conscious states, too? Connectedness/continuity seems to be an illusion, and the copy could also be under the same illusion.

I think you could think of yourself as continuing 100% in all of them (at the time of copying), not some fractional amount. Identity is not transitive or unique in this way; it's closer to something like inheritance/descendance. Your hypothetical biological children would each inherit about half of your genes, no matter how many there are. Your identity descendants could each inherit 100% of your identity, even if they aren't identical to each other.

Comment by michaelstjules on Identity Isn't In Specific Atoms · 2020-05-26T09:26:30.305Z · LW · GW

Can't we distinguish between particles through their relationships with other objects or "themselves", including causal relationships? For example, the electrons in my body now have different (and stronger) causal effects on electrons in my body later than on electrons in your body, and by this we can distinguish them.

And can't we trace paths in spacetime for identity? Not particle-like paths, but by just relying on causality and the continuity of the wavefunction over spacetime? This could give you something like four-dimensionalism, which I think could be compatible with throwing away time as a fundamental concept.

The atom swap experiment would then destroy both atoms and create two atoms (possibly the same, possibly different, possibly swapped). What we could say about their identities would depend on the precise details of the view. Maybe there's no coherent way to make this work.

(I'm not endorsing such a view, though.)

Comment by michaelstjules on Why do you reject negative utilitarianism? · 2019-10-18T17:52:49.091Z · LW · GW

I think most of this is compatible with preference utilitarianism (or consequentialism generally), which, in my view, is naturally negative. Nonnegative preference utilitarianism would hold that it could be good to induce preferences in others just to satisfy these preferences, which seems pretty silly.

Comment by michaelstjules on The Very Repugnant Conclusion · 2019-08-31T17:00:10.180Z · LW · GW

Erik Carlson called this the Moderate Trade-off Theory. See also Sider's Geometrism and Carlson's discussion of it here.

One concern I have with this approach is that similar interests do not receive similar weight: if the utility of one individual approaches another's, the weights we give to their interests should also approach each other, but under geometric discounting by rank they need not. I would be pretty happy if we could replace the geometric discounting with a discounting that's continuous in the utilities, without introducing any other significant problems. The weights could each depend on all of the utilities in a continuous way.
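A toy sketch of the discontinuity (my own reading of the theory, with made-up numbers):

```python
def mtt_value(utilities, c=0.5):
    # Moderate Trade-off Theory style: sort from worst off to best off and
    # give the i-th individual weight c^i (the worst off gets weight 1).
    return sum(c**i * u for i, u in enumerate(sorted(utilities)))

# Two nearly identical utilities get weights 1 and 0.5, based on rank alone,
# and the weights don't converge as the utilities approach each other.
print(mtt_value([10.0, 10.0001]))  # 1*10.0 + 0.5*10.0001 ≈ 15.0
```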

Comment by michaelstjules on The Very Repugnant Conclusion · 2019-08-31T15:36:35.955Z · LW · GW

[The weighted sum] won't converge as more people (with good lives or not) are added, so it doesn't avoid the Repugnant Conclusion or Very Repugnant Conclusion, and it will allow dust specks to outweigh torture.

Normalizing by the sum of weights will give less weight to the worst off as more people are added. If the weighted average is already negative, then adding people with negative but better than average lives will improve the average. And it will still allow dust specks to outweigh torture (the population has a fixed size in the two outcomes, so normalization makes no difference).
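A toy illustration of the second point (my own toy numbers, using rank weights c^i as above):

```python
def normalized_value(utilities, c=0.5):
    # Weight the i-th worst off by c^i, then normalize by the sum of weights.
    us = sorted(utilities)
    weights = [c**i for i in range(len(us))]
    return sum(w * u for w, u in zip(weights, us)) / sum(weights)

print(normalized_value([-10.0]))        # -10.0
print(normalized_value([-10.0, -2.0]))  # ~-7.33: adding a life below zero but
                                        # above the average improves the value
```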

In fact, anything of the form $\sum_i f(u_i)$ for increasing $f$ will allow dust specks to outweigh torture for a large enough population, and if $f(u) > 0$ for $u > 0$, will also lead to the Repugnant Conclusion and Very Repugnant Conclusion (and if $f(u) < 0$ for some $u > 0$, it will lead to the Sadistic Conclusion, and if $f(u) > 0$ for some $u < 0$, then it's good to add lives not worth living, all else equal). If we only allow $f$ to depend on the population size, $n$, as by multiplying by some factor $g(n)$ which depends only on $n$, then (regardless of the value of $g(n)$), it will still choose torture over dust specks, with enough dust specks, because that trade-off is for a fixed population size, anyway. EDIT: If $f$ depends on $n$ in some more complicated way, I'm not sure that it would necessarily lead to torture over dust specks.
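A quick numeric check of the dust-specks claim (my own numbers and choice of f):

```python
import math

# One choice of increasing f; the argument doesn't depend on it, and a
# bounded f like atan makes the point starkly.
f = math.atan

def harm(u):
    return f(0.0) - f(u)  # welfare loss relative to a neutral baseline of 0

torture, speck = -1000.0, -0.001
print(harm(torture))       # ~1.57
print(2000 * harm(speck))  # ~2.0 > 1.57: enough specks outweigh the torture,
                           # so this view picks the one torture instead
```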

I had in mind something like weighting by $c^{u_i - u_{\min}}$ for some $0 < c < 1$, where $u_{\min}$ is the minimum utility (so it gives weight 1 to the worst off individual), but it still leads to the Repugnant Conclusion and at some point choosing torture over dust specks.

What I might like is to weight $u_i$ by something like $c^i$ for $0 < c < 1$, where the utilities $u_1 \leq u_2 \leq \cdots \leq u_n$ are labelled in increasing (nondecreasing) order, but if $u_i$ and $u_{i+1}$ are close (and far from all the other utilities, either in an absolute sense or in a relative sense), they should each receive weight close to $(c^i + c^{i+1})/2$. Similarly, if there are clustered utilities, they should each receive weight close to the average of the weights we'd give them in the original Moderate Trade-off Theory.
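One hypothetical way to realize this (a sketch; the clustering rule and tolerance are my own):

```python
def smoothed_weights(utilities, c=0.5, tol=0.1):
    # Sort from worst off to best off; individuals whose utilities sit within
    # `tol` of their neighbour form a cluster, and everyone in a cluster gets
    # the average of the rank weights c^i from the original theory.
    us = sorted(utilities)
    raw = [c**i for i in range(len(us))]
    weights, start = [], 0
    for i in range(1, len(us) + 1):
        if i == len(us) or us[i] - us[i - 1] > tol:  # cluster ends here
            avg = sum(raw[start:i]) / (i - start)
            weights.extend([avg] * (i - start))
            start = i
    return weights

# Two near-equal utilities share the average weight (1 + 0.5)/2 = 0.75:
print(smoothed_weights([0.0, 0.01, 5.0]))  # [0.75, 0.75, 0.25]
```

This simple version is still discontinuous at the tolerance boundary; making the weights depend continuously on all the utilities would take something smoother, but it illustrates the averaging idea.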