I am willing to accept bets that general consensus in 3 years will be that Bunny and the vast majority of dogs in such studies do not have an episodic memory which they can communicate like claimed in this post.
I am offering 2:1 odds in favour of the other side.
Are you still offering this bet? I'm interested.
To clarify, you mean not just that the consensus will be that such studies find no (strong) evidence for episodic memory, but that dogs (in such studies) do not have an episodic memory that they can communicate like claimed in the post at all?
And, can you clarify what you mean by "like claimed in this post"?
Does this seem likely? I would guess this is basically true for the sensory and emotional parts, but language and mathematical reasoning seem like a large leap to me, so humans may be doing something qualitatively different from nonhuman animals. Nonhuman animals don't do recursion, as far as I know, or maybe they can, but limited to very low recursion depth in practice.
OTOH, this might be of interest; he argues the human cerebellum may help explain some of our additional capacity for language and tool use:
Ok, ya, some of these seem roughly within an order of magnitude of long COVID (higher or lower, since there's a lot of uncertainty).
I think it's worth mentioning that some of the risks here are more concentrated in older people, but can still be within an order of magnitude of COVID risk for people around my age (28). I would guess only Lyme and CFS would be concerning for a healthy person in their early 20s who doesn't take excessive risks of physical injury (low brain injury and post-ICU syndrome risk). I do wonder about recreational drug use, especially binge drinking (I never drank, so this was never a risk for me) and bike riding.
I'm not sure what I can do about CFS, fibromyalgia and shingles other than maintain a healthy lifestyle (diet, sleep and exercise), which I'm already trying to do (but I suppose could be doing more), and I recognize is part of the point of your article. Maybe there's some link between CFS and viral infections (Epstein-Barr, herpes), so I could try to avoid those. I've already had chickenpox, so I'm not sure what else I can do about shingles.
Among a sample of over 20,000 study participants who tested positive for COVID-19 between 26 April 2020 and 6 March 2021, 13.7% continued to experience symptoms for at least 12 weeks. This was eight times higher than in a control group of participants who are unlikely to have had COVID-19, suggesting that the prevalence of ongoing symptoms following coronavirus infection is higher than in the general population.
Of study participants who tested positive for COVID-19, symptom prevalence at 12 weeks post-infection was higher for female participants (14.7%) than male participants (12.7%) and was highest among those aged 25 to 34 years (18.2%).
In contrast, the ONS study compared persistent symptoms lasting 12+ weeks using a survival analysis approach between confirmed COVID-19 cases and age- and sex-matched non-COVID controls, with estimates of 13.7% in cases and just 1.7% in controls.
Prospective versus retrospective data collection: Prospective data collection on ongoing symptoms on a daily basis was uniquely performed in the COVID Symptoms Study, which had the lowest estimates of proportions of cases affected (2.3% for >12 weeks symptoms). Unpublished analysis of the same individuals asked retrospectively about symptoms using the same questionnaire as in CONVALESCENCE cohorts (inclusive method) revealed very similar proportions with symptoms lasting >12 weeks, ranging from 6% of COVID+ cases in men aged 20-30 to 16% in women aged 40-50. The COVID Symptoms Study did not count symptoms re-emerging after a week of reporting no symptoms, but although relapse rates were higher in the case population (16.0%) versus non-COVID controls (8.4%; P < 0.0005), this does not account for the difference in reporting rates and suggests that recall bias may operate in retrospective self-reports of symptom duration. The ONS study of persistent symptoms in confirmed infections was based on prospective data (symptoms experienced in the last week, collected each week for the month from enrolment and then each month for up to a year); whereas symptom durations for the population prevalence estimate is based on retrospective reporting of the initial (confirmed or suspected) infection.
This is based on self-reports on survey data, which will again exclude asymptomatic cases; if you use the ⅓ figure and assume no long covid among the asymptomatic, that becomes 1.8% of 25-45 year olds with covid developing long covid that affects their daily life, which is well within the Lizardman Constant.
On the other hand, medicine is notoriously bad at measuring persistent, low-level, amorphous-yet-real effects. The Lizardman Constant doesn’t mean prevalences below 4% don’t exist, it means they’re impossible to measure using naive tools.
1.8% seems similar to the lower risk difference estimates between cases and controls I've seen (EDIT: 1.8% is the absolute risk, not a difference with controls), and I would guess the point you make here about the Lizardman Constant might not apply to risk differences between cases and controls, unless you want to claim that the constant differs between the two groups. I don't think that's entirely implausible, although I'd lean against it accounting for most of these risk differences; I'd guess selection effects or inadequately matched controls would be the most likely ways to explain away most of the long COVID risk difference estimates as nothing real.
I'm guessing it's so low because of the "affects their daily life" qualifier (so risk difference estimates are measuring things that are less severe or less frequent, and you filtered these out), or maybe just noise, some studies' samples being unrepresentative, etc. This should give us a rough upper bound on the risk difference.
Sorry, I was responding to this, but forgot to quote it:
My tentative conclusion is that the risks to me of cognitive, mood, or fatigue side effects lasting >12 weeks from long covid are small relative to risks I was already taking, including the risk of similar long term issues from other common infectious diseases.
My expectation is that compared to other infectious diseases, (long) COVID is
Much much worse, but less common (e.g. cold), or
Much worse and about as common (e.g. flu), or
Not as bad, but much much more common.
And these together make it seem reasonably likely to me that (long) COVID risk is not small relative to the combined (long-term) risks from other common infectious diseases.
I don't think it's inevitable that everyone will come into contact with COVID or definitely catch COVID (which becomes more likely the more often you come into contact with it). You can still manage your exposure.
My gym is personal training focused with a single cardio machine, which you must schedule in advance. If I’m doing cardio there will be at most two clients doing weight training and two trainers in the room, plus me, all > 10 feet away, in a large room with filtration they claim is good. If I’m doing weight training there’s me, my trainer (fairly nearby), and potentially a farther away client and trainer pair. In theory there could be an additional person on the cardio machine but I’ve yet to see it happen.
For what it's worth, this seems unusually low risk compared to what I think of when people are going to the gym, e.g. 20 people inside at any time. I would probably be pretty happy with your tradeoff once or twice a week without thinking too much about it, if the alternative is not exercising.
I am worried about what I'll do in the winter (in Canada), when exercising outside becomes very unpleasant/impractical.
My impression is that we're much less likely to catch other infectious diseases that are nearly as severe in the long term (except maybe Lyme?), and unless your probability of catching COVID is very low, your risks from COVID seem worse than driving. This is based on a few people's separate BOTECs for long COVID and my own (vague and personal, not well-researched) impression of how common and bad other infectious diseases are.
Note that a lot of other infectious diseases have become rarer under lockdowns, too, and that's something to account for. If someone has had an infectious disease in the past month, I'd guess it's reasonably likely to be COVID, given its high transmissibility. If someone who's been fully vaccinated for > 1 month has had an infectious disease in the past month, I'm not sure, I'd have to do the math. Going forward, COVID seems likely to be one of the most common infectious diseases going around.
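The math I have in mind is just a base-rate comparison. Here's a minimal sketch; every incidence number below is a placeholder I made up for illustration, not an estimate:

```python
# Rough sketch: of the infectious illnesses a fully vaccinated person catches in a month,
# what fraction is COVID? All monthly rates below are made-up placeholders, not estimates.
monthly_rates = {
    "COVID": 0.010,  # placeholder
    "cold": 0.020,   # placeholder
    "flu": 0.002,    # placeholder (suppressed by distancing)
    "other": 0.003,  # placeholder
}

total = sum(monthly_rates.values())
for disease, rate in monthly_rates.items():
    print(f"P({disease} | caught something this month) ≈ {rate / total:.0%}")
```

The answer is just each disease's incidence divided by the total incidence, so it's very sensitive to how much the other diseases have been suppressed.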
What (similarly addressable or more easily addressable) risks do you think add up to being worse? Or, do you have an overall risk estimate to compare?
There could be external information you and your copy are not aware of that would distinguish you two, e.g. how far different stars appear, time since the big bang. And we can still talk about things outside Hubble volumes. These are mostly relational properties that can be used to tell spacetime locations apart.
I think long COVID is particularly bad because I think you are much more likely to get it from pretty normal activities if you're not careful. Lyme disease, which the author of that comment mentions (citing this article), also looks common:
Recent estimates using other methods suggest that approximately 476,000 people may get Lyme disease each year in the United States.
I would guess that there aren't that many others nearly as bad, but I haven't really looked into it. I think colds, flus and food poisoning are much less severe and less common than COVID-19.
A lot of them have their own summaries if you open the links, but I would recommend focusing on the "Overall how bad it is to catch COVID" links + playing around yourself with https://www.microcovid.org. I've added quotes of the estimates from "Overall how bad it is to catch COVID" to the post. These two figures from https://www.mattbell.us/delta-and-long-covid/ (with estimates derived in the article) seem relevant:
The author wrote:
Based on all the above evidence, I’m making a very rough guess that mRNA vaccination cuts the risk of a Delta COVID infection developing into long COVID in half, independent of vaccination’s reduction of the risk of catching Delta at all.
So, for unvaccinated people, you would double the risks in these figures. Note also that the risks here are higher than in the other 2 links in "Overall how bad it is to catch COVID".
These numbers are low, but not low enough to ignore. Earlier we decided that the quality of life hit from long COVID after a non-hospitalized acute case was 18%. If you’re a 35 year old woman, and your risk of ending up with lifelong long COVID from catching COVID is 2.8%, then catching COVID would be the same, statistically speaking, as losing (50 years * 0.18 * 0.028 * 365 days/year) = ~90 days of your life. Ouch.
We can also look at just the "worst case scenario" – catching long COVID that doesn't go away for years AND limits daily activities a lot. This number feels a bit more like a "mortality" rate – except in this case you don't actually die, but your life is forever altered, and you can't hold down a job anymore or do most of the things you used to love to do.
A 35 year old woman runs about an 0.5% chance of the "worst case scenario" outcome if she gets Delta. For comparison, 0.5% is about 42x your chance of dying in a car crash in the next year.
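For reference, a minimal sketch reproducing the expected-loss arithmetic quoted above (the inputs are the article's figures for a 35 year old woman, not my own estimates):

```python
# Sketch reproducing the arithmetic in the quoted passage.
# Inputs are the article's figures, not my own estimates.
years_remaining = 50            # remaining life expectancy assumed in the article
qol_hit = 0.18                  # quality-of-life reduction from long COVID
p_lifelong_long_covid = 0.028   # article's risk of lifelong long COVID per infection

expected_days_lost = years_remaining * qol_hit * p_lifelong_long_covid * 365
print(f"Expected loss: ~{expected_days_lost:.0f} days")  # ~92 days, i.e. the quoted ~90 days
```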
I think the main differences are using studies with higher excess burdens and using a lower reduction factor to translate to lifelong risk. On the latter:
In the end we need to make an educated guess, even if it's a low-confidence one, as to how often long COVID that lasts 4.5 months ends up being lifelong. Based on the SARS data, we could guess that 80% of hospitalized acute COVID patients that have long COVID at 4.5 months end up having it for the rest of their life. Patients with milder COVID cases tend to get less physiological damage during acute infection, so it's possible they'll have higher recovery rates. Again taking an educated guess and going on even less data, we might expect that 50% of long COVID cases for mild acute patients at 4.5 months end up being lifelong.
I also worry that it could become basically chronic and lifelong. It's surprising that we still presumably see effects 6 months after, and if 6 months isn't long enough to get better, that's reason for me to believe that these people won't get better. And it's possible to catch COVID multiple times (although you become more immune each time), so each time you may face a risk of long COVID.
If they do longer studies, maybe we'll see more people getting better, and with future studies, we'll have more reliable statistics. For now, I plan to continue being somewhat cautious and avoid large indoor crowds and high-traffic indoor areas.
There wasn't much discussion of long COVID here. At what risk of long COVID (including possibly chronic fatigue and brain fog lasting >6 months, up to the end of study periods, and possibly much longer) would you change your mind about this? I suppose it would still depend on your personal preferences, and how much you get out of certain activities (enjoyment and mental and physical health).
In my specific case (in Canada), I've decided to move out of my current dorm-style residence, since although it's pretty empty now, I would have been sharing a kitchen with 17 other people (although I've been told less than half of residents ever really use the kitchen), and a bathroom with 3 other people. I'd also expect there to be parties with outsiders here, too. I think it's likely that almost everyone would be vaccinated, though.
I think the evidence is somewhat ambiguous on long COVID rates at this point, even among the studies with actual comparisons/controls. A few of the higher quality studies with comparisons/controls were discussed here:
I'd somewhat lean towards lower risk estimates, since I think higher ones are more likely to be biased due to poorly matched controls, selection bias or unrepresentative samples. On the higher end of studies with controls, one of them (of healthcare workers) had small/insignificant differences in mental health between positive and negative cases, but (most worrying to me)
Neurological symptoms of statistical significance included problems sleeping through the night (60.7% vs 51.5%), forgetfulness (35.0% vs 19.0%), confusion/brain fog/trouble focussing attention (20.7% 27.9% vs 14.7%), trouble trying to form words (15.7% vs 9.2%), short-term memory loss (20.7% vs 5.6%) and, less frequently, difficulty swallowing (6.4% vs 2.4%), twitching of fingers and toes (5.7% vs 2.4%) and trembling (5.7% vs 1.7%). Respiratory symptoms of interest included unusual fatigue/tiredness after exertion (39.3% vs 17.5%), breathlessness after minimal exertion (25.7% vs 10.2%), chest tightness/pain (18.6% vs 8.2%), fits of coughing (13.6% vs 6.5%) and breathlessness at rest (9.3% vs 2.8%).
See Table 2. Positive cases were more likely to be patient-facing frontline clinical healthcare workers (51.7% vs 23.0%), though, and maybe they were more exhausted and this explains it, but you'd think this would show up in their mental health, too. None of the cases were hospitalized for COVID.
A large study of long COVID in non-hospitalized patients estimated risks and excess burdens of symptoms at 6 months. See Figure 3 where "Positive" indicates "non-hospitalized individuals with COVID-19". Dividing the excess burdens by 1000, fatigue looks like <2%, and they're all < 2.5%. EDIT: I misread; they're not checking whether they still have the symptoms only at 6 months, but whether they have them at any point 30 days to 6 months post-infection. From the paper:
Outcomes were ascertained from day 30 after COVID-19 diagnosis until the end of follow-up.
My best guess is that you have at least an additional ~1% risk of fatigue lasting > 6 months (and who knows how long) if you're vaccinated and catch COVID than if you don't catch COVID at all. My upper estimate is around 10%, but as I mentioned above, I give more weight to lower estimates, since I expect them to be less biased.
Use isolated subaccounts, so that risks from trades in one account won't affect the other accounts (although they're still correlated). Whenever I set up a new long C + short C-PERP trade, I make a new subaccount.
My formulation can handle lexicality according to which any amount of A (or anything greater than a certain increment in A) outweighs any (countable) amount of B, not just finite amounts up to some bound. The approach you take is more specific to empirical facts about the universe; if you want it to give a bounded utility function, you need a different utility function for different possible universes. If you learn that your bounds were too low (e.g. that you can in fact affect much more than you thought before), then in order to preserve lexicality, you'd need to change your utility function, which is something we'd normally not want to do.
Of course, my approach doesn't solve infinite ethics in general; if you're adding goods and bads that are commensurable, you can get divergent series, etc. And, as I mentioned, you sacrifice additivity, which is a big loss.
On your lexicographic utility function, I think it's pretty ad hoc that it depends on explicit upper bounds on the quantities, which will depend on the specifics of our universe, but you can manage without them and allow unbounded quantities (and countably infinitely many, but I would be careful going further), unfortunately at the cost of additivity. I wrote about this here.
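To illustrate the kind of bound-dependence I mean with a toy construction (not the exact function from the post): if the amount of the lower-priority good $B$ can never exceed some known $B_{\max}$, then the real-valued utility function

$$U(A,B) = A\,(B_{\max}+1) + B$$

ranks outcomes lexically, since any increase of at least $1$ in $A$ changes $U$ by at least $B_{\max}+1$, which is more than any possible change coming from $B$ alone (and $U$ is bounded if $A$ is also bounded). But the function only works because $B_{\max}$ is baked into it, so learning that you can affect more $B$ than you thought would force you to revise $U$ itself.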
This is what rule and virtue (and global) consequentialism are for. You don't need to be calculating all the time, and as you point out, that might be counterproductive. But every now and then, you should (re)evaluate what rules to follow and what kind of character you want to cultivate.
And I don't mean this as saying rule or virtue consequentialism is the correct moral theory; I just mean that you should use rules and virtues, as a practical matter, since it leads to better consequences.
Sometimes you will want to break a rule. This can be okay, but should not be taken lightly, and it would be better if your rule included its exceptions. A rule can be something like a very strong prior towards/against certain kinds of acts.
As noted above, I think that with wider central intervals and wider tails we could lower that Biden win probability from 96% to 90% or maybe 80%. But, given what the polls say now, to get it much lower than that you’d need a directional shift, something asymmetrical, whether it comes from the possibility of vote suppression, or turnout, or problems with survey responses, or differential nonresponse not captured by partisanship adjustment, or something else I’m forgetting right now. But I don’t think it would be enough just to say that anything can happen. “Anything can happen” starting with Biden at 54% will lead to a high Biden win probability no matter how you slice it. For example, suppose you start with a Biden forecast at 54% and give a standard error of 3 percentage points, which has gotta be too much—it yields a 95% interval of [0.48, 0.60] for his two-party vote share, and nobody thinks he’s gonna get 48% or 60%. Anyway, start with that and Biden still has a 78% chance of winning (or 75% using the t_3 distribution). To get that probability down below 80%, you’re gonna need to shift the mean estimate, which implies some directional information.
Do you think FiveThirtyEight and the Economist haven't appropriately accounted for these considerations in their models? I don't think the discrepancy with the markets is so large. Where did ~65% come from?
Nonhuman animals and children have limited agency and irrational, poorly informed preferences. We should use behaviour as an indication of preferences, but not only behaviour and especially not only behaviour when faced with the given situation (since other behaviour is also relevant). We should try to put ourselves in their shoes and reason about what they would want were they more rational and better informed. The more informed and rational they are, the more we can just defer to their choices.
If I give that same "agentic being" treatment to animals, then the suicide argument kind of holds. If I don't give that same "agentic being" treatment to animals, then what is to say suffering as a concept even applies to them? After all, a mycelium or an ecosystem is also a very complex "reasoning" machine, but I don't feel any moral guilt when plucking a leaf or a mushroom.
I think this is a good discussion of evidence for the capacity to suffer in several large taxa of animals.
I think also not having agency is not a defeater for suffering. You can imagine in some of our worst moments of suffering that we lose agency (e.g. in a state of panic), or that we could artificially disrupt someone's agency (e.g. through transcranial magnetic stimulation, drugs or brain damage) without taking the unpleasantness of an experience away. Just conceptually, agency isn't required for hedonistic experience.
Until then, the sanest choice would seem to be that of focusing our suffering-diminishing potential onto the beings that can most certainly suffer so much as to make their condition seem worse than death.
Even if you thought factory farmed animals might plausibly have good lives on the aggregate (like humans, and perhaps many or most humans who do end up committing suicide), many do not have good deaths, and working on that would still be valuable. Negligent or intentional live boiling, CO2 slaughter without stunning, on-farm and transportation mortality, barn fires. I don't think it's very plausible that these conditions aren't worse than death.
I think understanding of death is largely experiential (witnessing death) and conceptual (passed on through language), and intentional suicide attempt would further require understanding what would kill you. Maybe people could infer some things based on their experience with sleep, though.
Here's an article on the development of understanding of death in children; it seems they tend to start to understand at 3 years old. I would expect understanding of suicide to generally come later still. Do you think 2-3 year olds can have lives worse than death despite not committing suicide or being able to judge that their lives are/will be worse than death? I'd expect there will be periods for most children where they can speak and could be taught to understand death and suicide, but since they won't have been taught yet, they won't understand.
An individual's experience of torture could be similar to ours, and we could deprive them of all pleasure, too, so on a hedonistic account, it wouldn't at all be plausible that their life is good, and yet they might not understand death and suicide enough to attempt suicide. If we think their hedonistic experiences are sufficiently similar to ours, even though they don't have well-informed preferences, we can make judgements in their place.
On a preferential account of value, if an individual doesn't recognize or understand an option and then fails to choose it, we can't conclude that that option is worse for that individual. This is also an everyday issue for typical adult humans given our very limited understanding, but it's worse the more ill-informed the preferences, especially in children and nonhuman animals. If you generally take an individual's actions as indicating what's best for them, then we shouldn't stop children from sticking forks into electrical outlets or touching hot stovetops.
1. We think that beyond a certain point of brain development abortion is acceptable since the kid is not in any way "human". So why not start your argument there? And if you do, well, you reach a very tricky gray line
I don't start my argument there precisely because it's a grey area for consciousness. I chose examples I'd expect you to accept as conscious and capable of suffering (although it seems you have doubts), and would generally not commit suicide even if tortured.
People don't have memories at ages below 1 or 2 and certainly no memories indicative of conscious experience.
I'm guessing you mean episodic memories? Children that young (and farmed animals) certainly remember things like words, individuals, how to do things, etc. There's also research on episodic-like memory in many different species of nonhuman animals, not just the obviously smart ones (I haven't looked into similar research for young children). Also, dreams seem relevant.
I don't see how this undermines the point, unless you want to argue the "fear" of death can be so powerful that one can lead what is essentially a negative value life because of an instinct to not die (similarly to, say, how one would be able to feel pain from a certain muscle twitch yet be unable to stop it until it becomes unbearable).
I don't necessarily disagree with this perspective, but from this angle you reach an antinatalist utilitarian view of "Kill every single form of potentially conscious life in a painless way as quickly as possible, and most humans for good measure, and either have a planet with no life, or with very few forms of conscious life that have nothing to cause them harm".
It's also possible for an individual to be so focused on the present that any suicide attempt would feel worse than what they're otherwise feeling at that moment (which could still be overall bad), and this would prevent them from doing it. This can be the case even if it would prevent more intense suffering later. Again, however, I think farmed animals just usually don't understand suicide properly as an option.
My point is that suicide is not a good objective measure on its own. I think suicide attempt is fairly strong evidence of misery, but absence of suicide attempt is really not very good evidence for a life better than death, because of the obstacles (understanding, fear, access to suicide methods, guilt, etc.).
Have you looked at suicide rates by country? A lot of these don't accord with my intuitions about quality of life, either. Somalia has the 100th highest rate in the world, after many Western countries. Spain has a higher rate than Saudi Arabia (where suicide (attempt) is illegal). There are important cultural forces (and laws) around suicide, especially religious ones. Then again, maybe the numbers are being misreported in some countries.
Besides observations of behaviour, there is also neurological evidence (e.g. do they have structures functionally similar to those important/responsible for emotions in humans, and are those structures important/responsible for similar behaviour in these animals? Are they actually evolutionarily preserved structures?), and there are evolutionary/adaptive arguments. These ultimately tie back to behaviour in some way, though sometimes specifically to human behaviour rather than the animals' behaviour; both together could strengthen the argument.
I think suicide is a very poor measure of welfare for nonhuman animals, because they typically don't understand death or how they could kill themselves, so it's not an option they understand. I think you could plausibly torture farmed animals almost nonstop and they would generally not commit suicide. I'd expect the same to apply to typically developing toddlers, and it's plausible to me that you could in principle shelter normally developing humans from understanding of death and suicide into adulthood, and torture them, and they too would not attempt suicide.
We (humans and other animals) also have instincts (especially fear) that deter us from committing suicide or harming ourselves regardless of our quality of life, and nonhuman animals rely on instinct more, so I'd expect suicide rates to underestimate the prevalence of bad lives.
Wouldn't plotting the cumulative distribution functions instead of the probability density functions be easier to interpret? With the CDF, you can just take differences to get probabilities for intervals, but I can't get the probabilities for intervals just by looking at the graph of the PDF. The max and argmax of the PDF, which I think people will be attracted to, can also be misleading.
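For example (the distribution below is a made-up stand-in, not one of the forecast distributions actually being plotted):

```python
# Sketch: with a CDF, interval probabilities are just differences of two values,
# which you can read straight off the plotted curve. The normal distribution here
# is a made-up stand-in for whatever forecast distribution is being shown.
from scipy.stats import norm

forecast = norm(loc=0.0, scale=1.0)  # hypothetical forecast distribution

# P(a <= X <= b) = CDF(b) - CDF(a)
a, b = -0.5, 1.0
print(f"P({a} <= X <= {b}) = {forecast.cdf(b) - forecast.cdf(a):.3f}")

# The height or location of the PDF's peak doesn't give you this directly;
# you'd have to integrate the density over [a, b] by eye.
```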
I'm here from your comment on Lukas' post on the EA Forum. I haven't been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It's not obvious to me that it's bad. Its rejection is also so close to separability/additivity that for someone who's not sold on separability/additivity, an intuitive response is "Well ya, of course, so what?". It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don't.
So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
By deny, do you mean reject? Doesn't negative utilitarianism work? Or do you mean incorrectly denying that the absurd conclusion doesn't follow from diminishing returns to happiness vs suffering?
Also, for what it's worth, my view is that a symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
I don't think you've established that Lenin was a jerk, in the sense of moral responsibility.
I think people usually have little control (and little illusion of freedom) over what options, consequences and (moral) reasons they consider, as well as what reasons and emotions they find compelling, and how much. Therefore, they can't be blamed for an error in (moral) judgement unless they were under the illusion they could have come to a different judgement. It seems you've only established the possibility that someone is morally culpable for a wrong act that they themselves believed was wrong before acting. How often is that actually the case, even for the acts you find repugnant?
Lenin might have thought he was doing the right thing. Psychopaths may not adequately consider the consequences of their actions and recognize much strength in moral reasons.
I think this gets at psychological connectedness/continuity. There's a large gap between scanning and the creation of the copy, but actually, maybe there's a gap between your conscious states, too? Connectedness/continuity seems to be an illusion, and the copy could also be under the same illusion.
I think you could think of yourself as continuing 100% in all of them (at the time of copying), not some fractional amount. Identity is not transitive or unique in this way; it's closer to something like inheritance/descendance. Your hypothetical biological children would each inherit about half of your genes, no matter how many there are. Your identity descendants could each inherit 100% of your identity, even if they aren't identical to each other.
Can't we distinguish between particles through their relationships with other objects or "themselves", including causal relationships? For example, the electrons in my body now have different (and stronger) causal effects on electrons in my body later than on electrons in your body, and by this we can distinguish them.
And can't we trace paths in spacetime for identity? Not particle-like paths, but by just relying on causality and the continuity of the wavefunction over spacetime? This could give you something like four-dimensionalism, which I think could be compatible with throwing away time as a fundamental concept.
The atom swap experiment would then destroy both atoms and create two atoms (possibly the same, possibly different, possibly swapped). What we could say about their identities would depend on the precise details of the view. Maybe there's no coherent way to make this work.
I think most of this is compatible with preference utilitarianism (or consequentialism generally), which, in my view, is naturally negative. Nonnegative preference utilitarianism would hold that it could be good to induce preferences in others just to satisfy these preferences, which seems pretty silly.
One concern I have with this approach is that similar interests do not receive similar weight, i.e. if the utility of one individual approaches another's, then the weight we give to their interests should also approach each other. I would be pretty happy if we could replace the geometric discounting with a more continuous discounting without introducing any other significant problems. The weights could each depend on all of the utilities in a continuous way.
$\sum_i u_i e^{-u_i}$ won't converge as more people (with good lives or not) are added, so it doesn't avoid the Repugnant Conclusion or Very Repugnant Conclusion and it will allow dust specks to outweigh torture.
Normalizing by the sum of weights will give less weight to the worst off as more people are added. If the weighted average is already negative, then adding people with negative but better than average lives will improve the average. And it will still allow dust specks to outweigh torture (the population has a fixed size in the two outcomes, so normalization makes no difference).
In fact, anything of the form $\sum_i f(u_i)$ for $f:\mathbb{R}\to\mathbb{R}$ increasing will allow dust specks to outweigh torture for a large enough population, and if $f(0)=0$, will also lead to the Repugnant Conclusion and Very Repugnant Conclusion (and if $f(0)<0$, it will lead to the Sadistic Conclusion, and if $f(0)>0$, then it's good to add lives not worth living, all else equal). If we only allow $f$ to depend on the population size $n$ as $f_n=c_n f$, by multiplying by some factor $c_n$ which depends only on $n$, then (regardless of the value of $f_n(0)$) it will still choose torture over dust specks, with enough dust specks, because that trade-off is for a fixed population size anyway. EDIT: If $f_n$ depends on $n$ in some more complicated way, I'm not sure that it would necessarily lead to torture over dust specks.
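To see why, here's a quick numerical sketch; the particular $f$ (arctan), the utility levels and the speck size are arbitrary illustrative choices, not anything from the post:

```python
import math

# Sketch: for any increasing f, sum_i f(u_i) will, for a large enough population,
# prefer torturing one person over giving dust specks to everyone else.
f = math.atan          # any increasing f works, even a bounded one
u_normal = 10.0        # baseline utility of an untouched life (illustrative)
u_tortured = -100.0    # utility of the tortured person's life (illustrative)
speck = 1e-6           # tiny utility loss per dust speck (illustrative)

loss_from_torture = f(u_normal) - f(u_tortured)      # one person's large drop
loss_per_speck = f(u_normal) - f(u_normal - speck)   # one speck's drop (> 0 since f is increasing)

n_needed = math.ceil(loss_from_torture / loss_per_speck)
print(f"With at least {n_needed:,} specked people, the total loss from specks "
      f"exceeds the loss from torturing one person, so the theory picks torture.")
```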
I had in mind something like weighting by $e^{u_1-u_i}$, where $u_1$ is the minimum utility (so it gives weight 1 to the worst off individual), but it still leads to the Repugnant Conclusion and at some point choosing torture over dust specks.
What I might like is to weight by something like $r^{i-1}$ for $0<r<1$, where the utilities are labelled $u_1,\dots,u_n$ in increasing (nondecreasing) order, but if $u_i$ and $u_{i+1}$ are close (and far from all other utilities, either in an absolute sense or in a relative sense), they should each receive weight close to $\frac{r^{i-1}+r^i}{2}$. Similarly, if there are $k$ clustered utilities, they should each receive weight close to the average of the weights we'd give them in the original Moderate Trade-off Theory.
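Here's a rough sketch of the kind of smoothing I mean; the tolerance-based clustering rule is just one way to operationalize "close", not a settled proposal:

```python
# Sketch of the smoothed rank-discounted weights described above.
# Utilities are sorted in nondecreasing order, base weights are r**(i-1)
# (so the worst off gets weight 1), and utilities within `tol` of their
# neighbours share the average of their base weights.

def smoothed_weights(utilities, r=0.9, tol=0.1):
    order = sorted(range(len(utilities)), key=lambda i: utilities[i])
    base = [r ** i for i in range(len(utilities))]  # r**(i-1) with 0-indexing

    # Group consecutive (sorted) utilities into clusters of nearly equal values.
    clusters, current = [], [0]
    for k in range(1, len(order)):
        if utilities[order[k]] - utilities[order[k - 1]] <= tol:
            current.append(k)
        else:
            clusters.append(current)
            current = [k]
    clusters.append(current)

    weights = [0.0] * len(utilities)
    for cluster in clusters:
        avg = sum(base[k] for k in cluster) / len(cluster)
        for k in cluster:
            weights[order[k]] = avg  # everyone in a cluster gets the averaged weight
    return weights

# Example: the two middle utilities are close, so they share an averaged weight.
print(smoothed_weights([-5.0, 1.0, 1.05, 10.0]))  # [1.0, 0.855, 0.855, 0.729]
```

This only looks at gaps between rank-adjacent utilities, so it doesn't yet capture the "far from all other utilities, in an absolute or relative sense" condition in full.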