Posts

Moral Anti-Epistemology 2015-04-24T03:30:27.972Z
Arguments Against Speciesism 2013-07-28T18:24:58.354Z

Comments

Comment by lukas_gloor on What if we all just stayed at home and didn’t get covid for two weeks? · 2021-01-22T16:35:39.071Z · LW · GW

Building infrastructure and setting up preparations for doing this thoroughly could be an interesting safeguard against future pandemics worse than Covid. But I think there's a big problem with continuing to run hospitals and care-taking facilities, and with care-taking in general. 

Comment by lukas_gloor on What to do if you can't form any habits whatsoever? · 2021-01-10T07:48:32.773Z · LW · GW

I'm similar and haven't found anything that works well. Reading how most EAs talk about their self-improvement "life hacks" always makes me think "fuck you, lol." I constantly alternate between periods where I'm trying lots of good routines at once and I'm somewhat productive, and periods where things fall apart and I'm unproductive. In my experience, most of the leverage to be gained is in trying to reduce the difference between these two states by not punishing myself for falling off the wave, i.e. getting right back into the attempts after a bad day or five. And if I'm on the wave, I try to be extra cautious about avoiding things that could derail me.

I took time off from work late last year for personal reasons and used the opportunity to start some deeper-reaching attempts at mindset improvement based on CBT, visualizing my ideal day, and so on. I'm about to start schema therapy. Ideally I'd do the exercises daily but that's already challenging for obvious reasons. I haven't noticed any productivity improvements so far but I'm at least feeling better about myself.

Comment by lukas_gloor on Morality as "Coordination", vs "Do-Gooding" · 2020-12-30T22:27:58.158Z · LW · GW

I agree. I think of myself as a utilitarian in the same subjective sense that I think of myself as (kind of) identifying with voting Democrat (not that I'm a US citizen). I disagree with Republican values, but it wouldn't even occur to me to poison a Republican neighbor's tea so they can't go vote. Sure, there's a sense in which one could interpret "Democrat values" fanatically, so that they imply I prefer worlds where the neighbor doesn't vote, and then we're tempted to wonder whether the ends justify the means in certain situations. But thinking like that seems like a category error if the sense in which I consider myself a Democrat is just one part of my larger political views, where I also think in terms of respecting the political process. So, it's the same with morality and my negative utilitarianism. Utilitarianism is my altruism-inspired life goal, the reason I get up in the morning, the thing I'd vote for and put effort towards. But it's not what I think is the universal law for everyone. Contractualism is how I deal with the fact that other people have life goals different from mine. Nowadays, whenever I see discussions like "Is classical utilitarianism right or is it negative utilitarianism after all?" – I cringe. 

Comment by lukas_gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-27T11:34:01.737Z · LW · GW

So is the emerging wisdom that the SA variant is less contagious, or are you just using 20% as an example? The fact that it's currently the height of summer in SA, and that they went from "things largely under control" to "more hospitalizations and deaths than the 1st wave in their winter" in a short amount of time, makes me suspect that the SA variant is at least as contagious as the UK variant. (I'm largely ignoring politicians bickering over this, and of course if there's already been research on this question then I'll immediately quit speculating!) 

Comment by lukas_gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T18:06:25.268Z · LW · GW

It could be the time lag from when antibody-based plasma therapy (if that makes sense, I'm not even sure that's how it works) started to be used somewhat widely, plus the time it takes for a new variant to spread enough to get noticed. 

Comment by lukas_gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:56:59.052Z · LW · GW

Conditional on a 4th wave in the US happening in 2021, I wonder if it's >20% likely that it's going to be due to a variant that evolved on US soil. 

Comment by lukas_gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:53:43.639Z · LW · GW

Why are we seeing new variants emerge in several locations independently in a short time window? Is it that people are looking more closely now? Or does virus evolution have a kind of "molecular clock" based on law of large numbers? Or is the "clock" here mostly the time it takes a more infectious variant to become dominant enough to get noticed, and the count started whenever plasma therapy was used or whatever else happened with immunocompromised patients? Should we expect new more infectious variants to spring up all over the world in high-prevalence locations in the next couple of weeks anyway, regardless of whether the UK/SA/Nigeria variants made it there via plane? 

Comment by lukas_gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T16:56:21.877Z · LW · GW

To be clear, I don't mean to take a stance on how much more transmissible it is exactly, 33% or 65% or whatever. I think it's 85% likely that it's a difference that's significant enough to affect things, but it's less clear whether it's significant enough that previous containment strategies become vastly more costly or even unworkable. 

Comment by lukas_gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T16:38:35.166Z · LW · GW

I looked into things a bit and think it's 85% likely that the new variants in the UK and SA are significantly more transmissible, and that this will lead to more severe restrictions globally in the next few months, because there's no way they aren't already in lots of places. I also think there's a 40% chance the SA variant is significantly more deadly than previous variants, but I'm not sure whether that means a 50% higher IFR or a 150% higher one (I have no idea what prior to use for this).

Update December 26th: The longer we hear no concerning news about the lethality of the SA variant, the more likely it is that it's indeed benign and that initial anecdotal reports of it being surprisingly aggressive in young-ish people without comorbidities were just rumours. Right now I'm at 20% for it being significantly more deadly, and it's falling continuously. 
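
If it helps to see the shape of that update: here's a toy sketch of how quiet days drive the number down. Every number in it is a placeholder assumption of mine, not an estimate from data.

```python
# Toy Bayesian update: how days without concerning news erode P(variant is deadlier).
# All numbers are illustrative placeholders, not measured quantities.
prior_p = 0.40  # initial credence that the SA variant is significantly more deadly

# Assumed chance of alarming clinical reports surfacing on any given day...
p_news_if_deadly = 0.15   # ...if the variant really is deadlier
p_news_if_benign = 0.02   # ...if it isn't (false alarms, rumours)

odds = prior_p / (1 - prior_p)
daily_lr = (1 - p_news_if_deadly) / (1 - p_news_if_benign)  # likelihood ratio of one quiet day

for day in range(1, 15):
    odds *= daily_lr
    print(f"day {day:2d}: P(deadlier) = {odds / (1 + odds):.2f}")
```

With these made-up likelihoods, two quiet weeks drive the credence from 40% to under 10%; the real rate of decline depends entirely on how quickly one would expect genuine lethality signals to surface.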

Comment by lukas_gloor on Draft report on AI timelines · 2020-11-08T08:04:26.132Z · LW · GW

This is a separate point from yours, but one thing I'm skeptical about is the following: 

The Genome Anchor takes the information in the human genome and looks at it as a kind of compression of brain architectures, right? But that doesn't seem right to me. By itself, a genome is quite useless. If we had the DNA of a small dinosaur today, we probably couldn't just use ostriches as surrogate mothers. The way the genome encodes information is tightly linked to the rest of an organism's biology, particularly its cellular machinery and the hormonal environment of the womb. The genome is just one half of the encoding, and if we don't get the rest right, it all gets scrambled.

Edit: OK here's an argument why my point is flawed: Once you have the right type of womb, all the variation in a species' gene pool can be expressed phenotypically out of just one womb prototype. This suggests that the vast majority of the information is just in the genome. 

Comment by lukas_gloor on How should one deal with life threatening infections or air planes? · 2020-10-29T13:38:05.099Z · LW · GW

Effective altruism.

Comment by lukas_gloor on Does playing hard to get work? AB testing for romance · 2020-10-29T12:59:19.481Z · LW · GW

I just don’t feel comfortable if I act.


I think that's a great trait to have and I'd strongly recommend keeping it. If you can find enough things you like about yourself (and maybe have also worked on yourself to that end), you can acquire genuine confidence this way, which feels far more robust than acting.

Maybe you've thought about this already, but I'd flag that some people (and more women than men) don't themselves compartmentalize so much between "just sex" and "romance". Humans have some degree of sexual dimorphism around attraction (e.g., "demisexuality" is rare among men but not that uncommon among women). So, the habit you mention and the way you phrase it might substantially decrease the pool of otherwise compatible partners. 

With the phrasing, I'd be worried that what many people might take away from your paragraph is not so much "This person cares about avoiding situations where they'd be incentivized to act inauthentically, therefore they prefer prostitutes over dating people with whom conversations don't feel meaningful", but rather "Something about intelligence, therefore hookers". 

The mismatch in psychologies is harder to address than the phrasing, and maybe that just means you don't think you're a good match for others who view the topic differently – it really depends on what feels right all things considered.

Just to be clear, I don't necessarily mean "view it differently" on moral grounds. For instance, I don't think extraverted people are immoral, but I'd feel weird and maybe too insecure with a partner who was too extroverted. Similarly, some women will feel weird and insecure if their partner has too much of a "men are bad/threatening" psychology, whether or not they think it's immoral. So finding other ways to meet the same needs could make sense if one worries about the pool of potential soulmates already being small enough, and if one places value on some of the normative intuitions, like the importance of emotional connection during intimacy with a partner and not wanting to risk it being adversely affected. (The extraversion analogy isn't great because it sounds wrong to repress a core aspect of personality – the question with compartmentalizing romance vs. sex is whether it's like that, or whether it's more/also shaped by habit formation and so on. I don't know much about the empirical issues.) 

Maybe you think what I write in the paragraphs above goes way too far in the direction of: 

Also implicitly you end up showing more regard for a stranger you don’t know than for yourself, because you basically end up fighting for someones affection instead of giving someone the choice to like you or not like you.

I'd say it depends. "Accommodations" come in degrees. Also, if you make them for any stranger, you're indeed not showing respect for yourself (as well as treating other people's personalities as interchangeable). However, if you find yourself particularly motivated to be good for partners with a certain type of character, that means that you already want to be the sort of person who appeals to them.

Comment by lukas_gloor on How should one deal with life threatening infections or air planes? · 2020-10-29T10:29:30.671Z · LW · GW

I'm assuming you still exercise and go outside and so on, and maybe arrange video calls with friendly people? Because the negative physiological effects from low amounts of exercise or social interactions can easily be a lot worse than the risks from Covid.

It sounds like you've built up a habit of mentally punishing yourself for taking "irrational" risks, and as a result, spend a lot of time worrying over risks in general, including very small but salient ones. I did the same thing when I learned about EA (I don't want to live forever, but I suddenly started to care a lot more about not dying because I do want to accomplish things in life and be rational in the pursuit of that).

I don't have great advice for how to deal with it; I just try to keep an eye on my habits and consciously get myself to change them if it ever feels like I'm wandering too far into OCD territory. If you suspect that some of the motivation is fear rather than just "rational" arguments, you can prepare for the eventuality of getting the virus to make that prospect more palatable. (E.g., prepare food to eat while sick; make a check-list for what to do, when to call the doctor, etc.)

If you do end up dying, that doesn't mean you played the game poorly. Even death is an acceptable outcome as long as you did your best to reach your goals.

I'd try to "avoid daily dilemmas" by thinking once about the precautions you want to take, and then adhere to them without constantly wondering if you can do even more. And you can reassess the situation at regular intervals.

Regarding the general rationality of this sort of thing: If slightly increasing the chance of living a million years is indeed super important to you, it can make sense to take more precautions than the typical person. (Of course, maybe the mental energy would be better spent on other ways to avoid risks or get benefits.) However, I would make sure that you're doing this because it is truly what you want, not something you think is implied by rational arguments. There are many options to choose from when it comes to purposeful life goals.

Comment by lukas_gloor on Critiquing "What failure looks like" · 2020-10-29T09:35:05.215Z · LW · GW

A raving fascist or communist is more predictable and will lap up raving content. The machines can change our mind about our objective function so we are easier to satisfy.


That's a good way to put it! 

This might be stretching the analogy, but I feel like there's a similar thing going on with the technological evolution of "gadgets" (digital watch, iPod, cell phone). People's expectations of what a gadget should be able to do for them seem to grow so fast that something as simple and obviously beneficial as battery life never really receives an improvement. I get that not everyone is bothered by having to charge things all the time (and losing the charger all the time), but how come it's borderline impossible to buy things that don't need to be charged so often? It feels like there's some optimization pressure at work here, and it's not making life more convenient. :) 

Comment by lukas_gloor on Critiquing "What failure looks like" · 2020-10-29T09:10:20.041Z · LW · GW

For people who share the intuition voiced in the OP, I'm curious if your intuitions change after thinking about the topic of recommender systems and filter bubbles in social media. Especially as portrayed in the documentary "The Social Dilemma" (summarized in this Sam Harris podcast). Does that constitute a historical precedent? 

Comment by lukas_gloor on No Causation without Reification · 2020-10-23T21:31:26.717Z · LW · GW

Hume made this point in An Enquiry Concerning Human Understanding. :) 

Edit: added a link. 

Comment by lukas_gloor on Draft report on AI timelines · 2020-10-20T07:59:18.367Z · LW · GW

I like this comment, and more generally I feel like there's more to be gained from clarifying the analogies to evolution, and from pinning down when it's possible for researchers to tune hyperparameters with shortcuts vs. when they'd have to "boil the oceans." 

Do you have a rough sense of how using your analogy would affect the timeline estimates? 

Comment by lukas_gloor on On AI and Compute · 2020-10-20T07:34:08.900Z · LW · GW

I tend to agree with Carey that the necessary compute to reach human-level AI lies somewhere around the 18 and 300-year milestones.

I'm sure there's a better discussion about which milestones to use somewhere else, but since I'm rereading older posts to catch up, and others may be doing the same, I'll make a brief comment here. 

I think this is going to be an important crux between people who estimate timelines differently. 

If you categorically disregard the evolutionary milestones, wouldn't you be saying that searching for the right architecture isn't the bottleneck, but training is? However, isn't it standardly the case that architecture search takes more compute than training in ML? I guess the terminology is confusing here. In ML, the part that takes the most compute is often called "training," but it's not analogous to what happens in a single human's lifetime, because there are architecture tweaks, hyperparameter tuning, and so on. It feels like what ML researchers call "training" is analogous to hominid evolution, or something like that. Whereas the part that is analogous to a single human's lifetime is AlphaZero going from 0 to superhuman capability in 3 days of runtime. That second step took a lot less compute than the architecture search that came before! 

Therefore, I would discount the 18y and 300y milestones quite a bit. That said, the 18y estimate was never a proper lower bound anyway: the human brain may not be particularly optimal. 

So, I feel like all we can say with confidence is that brain evolution is a proper upper bound, and AGI might arrive way sooner depending on how much human foresight can cut it down by being smarter than evolution. I think what we need most is conceptual progress on how much architecture search in ML is "random" vs. how much human foresight can cut corners and speed things up.

I actually don't know what the "brain evolution" estimate refers to, exactly. If it counts compute wasted on lineages like birds, that seems needlessly inefficient. (Any smart simulator would realize that mammals are more likely to develop civilization, since they don't face the size constraints that come with flight.) But probably the "brain evolution" estimate just refers to how much compute it takes to run all the direct ancestors of a present-day human, back to the Cambrian period or something like that?

I'm sure others have done extensive analyses on these things, so I'm looking forward to reading all of that once I find it. 

Comment by lukas_gloor on Might humans not be the most intelligent animals? · 2020-09-16T09:47:24.607Z · LW · GW
If the reason for our technological dominance is due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2) and there doesn't seem like a hard separation between "being able to process culture inefficiently" and "able to process culture efficiently" other than the initial jump from not being able to do it at all, which we have already passed.

I keep hearing people say this (the part "and there doesn't seem to be a hard separation"), but I don't intuitively agree! I've spelled out my position here. I have the intuition that there's a basin of attraction for good reasoning ("making use of culture to improve how you reason") that can generate a discontinuity. You can observe this among humans. Many people, including many EAs, don't seem to "get it" when it comes to how to form internal world models and reason off of them in novel and informative ways. If someone doesn't do this, or does it in a fashion that doesn't sufficiently correspond to reality's structure, they predictably won't make original and groundbreaking intellectual contributions. By contrast, other people do "get it," and their internal models are self-correcting to some degree at least, so if you ran uploaded copies of their brains for millennia, the results would be staggeringly different.

Comment by lukas_gloor on SDM's Shortform · 2020-08-28T13:33:50.155Z · LW · GW
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn't say in that bracket was that 'maybe axiology' wasn't my only guess about what the objective, normative facts at the core of ethics could be.

I'm not sure. I have to read your most recent comments on the EA forum more closely. If I taboo "normative realism" and just describe my position, it's something like this:

  • I confidently believe that human expert reasoners won't converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it's true that if "life goals don't converge" then "population ethics also doesn't converge")
  • However, I think there would likely be convergence on subdomains/substatements of ethics, such as "preference utilitarianism is a good way to view some important aspects of 'ethics'"

I don't know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that's allowed if I'm a naturalist normative realist?)

Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn't occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.

Cool! I personally wouldn't call it "normatively correct rule that ethics has to follow," but I think it's something that sticks out saliently in the space of all normative considerations.

(This still strikes me as exactly what we'd expect to see halfway to reaching convergence - the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we've been working on for longer.)

Okay, but isn't it also what you'd expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions "off distribution." Another intuition is that it's the only domain in ethics where it's ambiguous what "others' interests" refers to. I don't think it's an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it's kind of odd that anyone thought there'd be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between "whether population ethics is underdetermined" and "whether every person should have the same type of life goal." I think "not every person should have the same type of life goal" is a plausible position even just intuitively. (And I have some not-yet-written-out arguments for why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn't all want to replicate, and I'm confident that I'm not somehow confused about what I'm doing.)

Your case for SFE was intended to defend a view of population ethics - that there is an asymmetry between suffering and happiness. If we've decided that 'population ethics' is to remain undetermined, that is we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can't I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we're going to leave population ethics undetermined?

Exactly! :) That's why I called my sequence a sequence on moral anti-realism. I don't think suffering-focused ethics is "universally correct." The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It's a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth's future light cone.

Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also "in tension," worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as "not more in tension than Democrats versus Republicans." This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to "what do we want to do with earth's future lightcone"). After you've chosen your life goals, that still leaves open the further question "How do you think about other people having different life goals from yours?" That's where preference utilitarianism comes in (if one takes a strong stance on how much to respect others' interests) or where we can refer to "norms of civil society" (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander's archipelago blogpost for inspiring this idea. I think he also had a blogpost on "axiology" that made a similar point, but by that point I might have already found my current position.]

In any case, I'm considering changing all my framings from "moral anti-realism" to "morality is underdetermined." It seems like people understand me much faster if I use the latter framing, and in my head it's the same message.

---

As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:

1. Morality could be underdetermined

2. Moral uncertainty and confidence in strong moral realism are in tension

3. There is no absolute wager for moral realism

(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – "what we on reflection care about" – doesn't suddenly lose its significance if there's less convergence than we initially thought. Just like I shouldn't like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn't care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)

4. Mistaken metaethics can lead to poorly grounded moral opinions

(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)

5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense

(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn't reaching a different conclusion on the same task. Instead, they're doing a different task. I'm interested in all three of the questions I dissolved ethics into, whereas people who play the game "pick your version of consequentialism and answer every broadly-morality-related question with that" are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)

Comment by lukas_gloor on SDM's Shortform · 2020-08-28T06:12:28.203Z · LW · GW
I think that, on closer inspection, (3) is unstable - unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims.

I agree with that.

The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further - to the absurd conclusion, because 'the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled'. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2) - in which case you cannot arbitrarily halt it.

It sounds like you're contrasting my statement from The Case for SFE ("fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms") with "arbitrarily halting the search for coherence" / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where "population ethics" is just a straightforward extension of "normal ethics," you split "ethics" into a bunch of different subcategories:

  • Preference utilitarianism as an underdetermined but universal morality
  • "What is my life goal?" as the existentialist question we have to answer for why we get up in the morning
  • "What's a particularly moral or altruistic thing to do with the future lightcone?" as an optional subquestion of "What is my life goal?" – of interest to people who want to make their life goals particularly altruistically meaningful

I think a lot of progress in philosophy is inhibited because people use underdetermined categories like "ethics" without making the question more precise.

Comment by lukas_gloor on What if memes are common in highly capable minds? · 2020-07-31T10:23:23.348Z · LW · GW

In this answer on arguments for hard takeoff, I made the suggestion that memes related to "learning how to learn" could be the secret sauce that enables discontinuous AI takeoff. Imagine an AI that absorbs all the knowledge on the internet, but doesn't have a good sense of what information to prioritize and how to learn from what it has read. Contrast that with an AI that acquires better skills for organizing its inner models, making its thinking more structured, creative, and generally efficient. Good memes about how to learn and plan might make up an attractor, and AI designs with the right parameters could home in on that attractor in the same way that "great minds think alike." However, if you're slightly off the attractor and give too much weight to memes that aren't useful for truth-seeking and good planning, your beliefs might resemble those of a generally smart person with poor epistemics, or someone low on creativity who never has genuine insights.

Comment by lukas_gloor on What are your thoughts on rational wiki · 2020-07-11T09:05:22.188Z · LW · GW

Does anyone know if there were admin/management changes on that site? I remember thinking the older versions of their articles on LessWrong-related topics were disgusting. I only took a quick look now, but it looks like they adjusted the tone somewhat and maybe changed some of the most uncharitable stuff.

Comment by lukas_gloor on Slate Star Codex and Silicon Valley’s War Against the Media · 2020-07-10T11:19:42.466Z · LW · GW

Interesting. Maybe I'd change my mind if I re-read it, but I want to flag that my first impression of this article was very positive. It seemed to me like the article highlighted several of Scott's qualities and contributions that make him look like the kind and relatable person that he is. And the critical stuff seems like what you'd expect from someone who is trying to present a balanced view (and some of it may well be accurate). There are some clear exceptions; e.g., I didn't like the accusation of (slight) bad faith on Scott's part in asking SSC commenters to contact the NY Times in a respectful manner, or that the author talked about Damore as though it was so obvious as to not be worth arguing that he did something really bad.

I should flag that I only read this very quickly, and I had very pessimistic expectations. Sometimes when you expect something absolutely terrible and get something that's merely bad, you think it's very good. :)

Comment by lukas_gloor on Is Altruism Selfish? · 2020-06-13T19:45:43.027Z · LW · GW

I'm happy to grant you that, when pondering a specific decision, people always choose the option they feel better with in the moment of making the decision. If they have cravings activated, that sense of feeling better will cash out in terms of near-term hedonism (e.g., buying two packs of crisps and a Ben&Jerry's ice cream for dinner). If they make decisions with the brain's long-term-planning module activated, they will make whichever decision they feel most satisfied with as a person (e.g., choosing to do a PhD even though it means years of stress).

No one purposefully makes a decision that predictably makes them feel worse for having made that decision. In that sense, all decisions are made for "self-oriented" reasons. However, that's a trivial truth about the brain's motivational currency, not a philosophical truth about altruism versus selfishness.

Altruism is about taking genuine pride in doing good things for others. That's not what makes altruism "secretly selfish." It's what enables altruism. It also matters to what degree people have a habit of fighting rationalizations and hypocrisy. Just like it feels good to think that you're being virtuous when in reality you're entitled and in the wrong, it also feels good to spot your brain's rationalizations and combat them. Both things feel good, but only one of them contributes to altruistic ideals.

Comment by lukas_gloor on How to learn from a stronger rationalist in daily life? · 2020-05-21T08:58:29.698Z · LW · GW

I recommend finding some kind of goal other than "becoming more rational." Going to a workshop here and there or discussing rationality techniques with someone sounds good, but if that's your primary goal for several months or longer, that IMO risks turning into a failure mode of treating rationality as an end rather than a means. I think you learn most by trying to do things that are important to you.

I strongly agree with the advice of trying to surround yourself with some people you want to learn from.

Comment by lukas_gloor on Why COVID-19 prevention at the margin might be bad for most LWers · 2020-05-17T20:14:13.630Z · LW · GW
We can expect some small regions will make it out with sub 1% but I think there's a 90% chance at least 4% of the US will be antibody positive from exposure (with or without severe symptoms) after a year

That sounds exactly right.

(and a 90% chance no more than 60% will)

I'd say you can go up to 97% for that.

I think the median will be somewhere around 10% of the US population very roughly and that's why I disagreed with the OP. It's unlikely I'd change my mind too drastically about those numbers, at least not in the near future and without new info, because I've spent a lot of time forecasting virus questions. :)

Comment by lukas_gloor on Will the world hit 10 million recorded cases of COVID-19? If so when? · 2020-05-13T20:44:17.998Z · LW · GW

There was a Metaculus question that opened in early April about "How many COVID-19 deaths will be recorded in the month of April, worldwide?" The community prediction was 210k (50% CI: 165k – 288k), which seemed little different from just extrapolating the trend of reported deaths. I saw that countries had all gone into lockdown a while back, so I predicted 75% that the numbers would end up below 193k. The resolution was 184k and I won a lot of points.

Trend extrapolation is only half of what's important. If the trend is foreseeably going to break because circumstances are changing, we need to factor that in. If avturchin is right about the recent numbers being linear with 100K cases a day (I didn't look this up), then we can say that it'll probably take longer than 60 days until 10M confirmed cases. In the majority of locations, R0 is below 1 and many people are recovering (and PCR tests only catch active infections). Of course, case numbers may go up again, which can happen surprisingly fast. Still, I think the mark for 10M confirmed cases is unlikely to be hit before August. Unfortunately, I suspect that we will hit it at some point later in the year when cases go out of control again in some parts of the world where there's extensive testing.
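
To make the arithmetic explicit (a back-of-the-envelope sketch; the ~4.2M starting figure is my rough recollection of the confirmed worldwide total in mid-May 2020):

```python
# Days until 10M confirmed cases, assuming a flat 100k/day of new cases.
current_cases = 4.2e6   # approx. worldwide confirmed total, mid-May 2020 (my assumption)
daily_new = 1.0e5       # avturchin's linear-growth figure
target = 1.0e7

days = (target - current_cases) / daily_new
print(f"~{days:.0f} days at a constant 100k/day")  # ~58 days
```

And since daily new cases should be falling while R0 stays below 1, the actual time should exceed that constant-rate baseline, hence "longer than 60 days."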


UPDATE June 12th: Seems like I got this one really wrong. Daily new cases are at 135k now, so a substantial increase in cases.

Comment by lukas_gloor on Why COVID-19 prevention at the margin might be bad for most LWers · 2020-05-10T16:44:32.153Z · LW · GW

Thanks for clarifying, that makes sense.

The only strong stance I took (as far as I can see) is that the countermeasures are harmful even without considering their costs.

I think your wording also kind of implied that a large fraction of the population is going to get the virus. Maybe you were primarily thinking of people with jobs that put them at risk, but I think even for those populations, expecting >50% of people with such jobs to get it is very much taking a strong stance. I was wondering if you'd think differently about your dislike of the LW emphasis on advice if you thought that the expert predictions were spot on.

Edit: But maybe that's just not the crux. Maybe you're not saying "you're going to get it sooner or later anyway" but rather "sooner or later, you're going to _decide_ that you're fine with probably getting it anyway."

And that's a stronger argument, I think. But I think a lot of people have probably thought about it, and I don't think keeping your probability of getting this virus below 3% is extremely socially restrictive for however long this will still take. That said, I'm an extreme introvert, so I probably don't quite factor in all the things that social people are missing.

Comment by lukas_gloor on Why COVID-19 prevention at the margin might be bad for most LWers · 2020-05-10T14:05:01.797Z · LW · GW

I get the impression that you might be thinking about this in terms of a false dichotomy. It seems correct to me to note that much longer lockdowns are politically infeasible in large parts of the US, but this doesn't mean that most states will just let their entire population catch the virus. Maybe there'll be a second wave, and then states that get hit as badly as New York and New Jersey will change their stance. Or maybe some states succeed at lifting the restrictions in a smart way, with masks and so on. Maybe people are sufficiently afraid of catching the virus that they socially distance of their own accord, even when businesses reopen.

Expert predictions say that there have been between 4.8M and 28M infections in the US so far (80% confidence interval). Those infections are responsible for 73k+ deaths so far, and predictions say the median number of deaths will be below 300k in the US (probably even below 200k, but I think there was an upward trend in the latest survey, and some chance they're now between 200k and 300k).

I've been doing forecasting as well, and I agree with those predictions (I'm saying this because it can be justifiable to not always trust experts). Therefore I don't think it makes sense to assume you'll likely be exposed to the virus anyway. (The case for this is even stronger if you live in Germany or the UK; the upward trend in predictions about the US is a bit concerning.) For those who want to avoid low-ish but non-negligible risks of becoming sick for sometimes quite a long time, with a virus that in some instances can do all kinds of strange and scary things that we don't fully understand yet (see also Elizabeth's comment above), it's good to have the advice available! (Of course, I'm not necessarily saying I endorse all of those pieces of advice.)

Comment by lukas_gloor on April Coronavirus Open Thread · 2020-04-28T12:19:14.878Z · LW · GW

I've heard people with good judgment criticize the Imperial College modelling for countries outside the UK because the forecasts repeatedly proved too pessimistic. That's interesting because I know that their UK forecasts were slightly too optimistic. They predicted 20k deaths for the UK initially, then updated to "probably a bit less than that" shortly afterward. And now we're at 21k deaths already (though daily deaths have slowed down a lot). I would imagine that their forecasting is most accurate for the UK numbers, because that's their main task.

Comment by lukas_gloor on Kevin's Shortform · 2020-04-25T12:11:09.033Z · LW · GW
What would it take for you to think that it's ok for romantic partners to visit and community house quaranteams to merge?

Ethically, I think this can be fine if strong precautions are taken to avoid infecting non-consenting individuals. (The freedom rhetoric only works if one's actions don't impinge on other people's rights not to be exposed to a deadly illness.) If the only potentially virus-transmitting contacts are with people who follow the same precautions, that's fine in theory. In practice, it can often be difficult to have justified confidence that other people will stick to the rules.

Example: Say you infect a person from another household that started to allow visits with your household under the assumption that both households are otherwise shut off from the outside world. But then one of the people in the other household also makes an exception for visiting her family, a person from that family gets infected too and goes to grocery stores without a face mask, and now you've started a new chain of transmissions that can kill dozens of people who had absolutely no intention of voluntarily taking on additional risks of being infected.

Risking such negative effects may still be justified as long as the probability of it happening is low enough – after all, there are many tradeoffs, and we don't prohibit cars just because they foreseeably kill a low number of people. That said, I expect governments to be aware of those tradeoffs. Accordingly, the restrictions should already be lowered soon, and unilaterally lowering them even further can lead to too much tightening of the network connections between people and households, which could result in an unacceptably high transmission rate. (It's not necessarily just R0 > 1 that's problematic – depending on the number of currently active infections, even an R0 < 1 could result in arguably unacceptably many deaths compared to what it would cost to prevent them.)
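
To spell out that last parenthetical with a geometric series (all inputs here are placeholder assumptions, not estimates):

```python
# With A currently active cases and an effective reproduction number R < 1,
# the outbreak still seeds A*(R + R^2 + R^3 + ...) = A*R/(1-R) further infections.
active = 500_000  # currently active infections (placeholder)
R = 0.9           # effective reproduction number, below 1
ifr = 0.009       # assumed infection fatality rate (0.9%)

future_infections = active * R / (1 - R)
expected_deaths = future_infections * ifr
print(f"{future_infections:,.0f} further infections, ~{expected_deaths:,.0f} deaths")
# 4,500,000 further infections, ~40,500 deaths
```

So even a "controlled" epidemic with R0 slightly below 1 can burn through millions of further infections before dying out.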

Comment by lukas_gloor on Kevin's Shortform · 2020-04-25T11:47:02.643Z · LW · GW

>where it's possible that 80% of people have had the virus,

If a demographically representative cross-section of the population is infected, I would operate under the assumption that about 0.9% of them will die. From what you write about NYC, it sounds like you think the fatality rate might be a lot lower. I think this will be a major crux for people, so I'd focus first on addressing questions like why recent serology surveys in NYC grocery stores find only 21% of people with antibodies.
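
As a quick consistency check (my own back-of-envelope; I'm using NYC's roughly 8.4M population, and the death count is roughly what was being reported in late April 2020 once probable deaths are included):

```python
# Does a ~0.9% IFR square with ~21% seroprevalence in NYC?
population = 8.4e6       # NYC residents
seroprevalence = 0.21    # grocery-store antibody survey
ifr = 0.009              # assumed infection fatality rate

implied_deaths = population * seroprevalence * ifr
print(f"implied deaths: ~{implied_deaths:,.0f}")  # ~15,900
```

That's in the same ballpark as the confirmed-plus-probable deaths NYC was reporting at the time, whereas "80% have had it" with the same death toll would force the IFR down to roughly a quarter of a percent.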

Comment by lukas_gloor on Peter's COVID Consolidated Brief for 2 April · 2020-04-24T00:48:08.091Z · LW · GW

https://www.businessinsider.com/california-gov-newsom-orders-covid-19-autopsies-back-to-december-2020-4?r=US&IR=T

This puts a new light on experts getting the predictions wrong. People are speculating that some of the California cases date back to January or even December. Similar stuff could have happened in New York. IMO, that's the type of thing that makes sense to have outside one's 95% confidence interval.

EDIT: OTOH it seems as though the infections only started in New York in February, and yet they spread to infect a large portion of the population there (tentative serology estimates say about 20% for the city). It doesn't seem to be the case that the wide spread is explained by the infection in New York having started a lot earlier than expected. But something about this confuses me. If the infections reached the Bay Area months earlier than they reached New York, why is New York worse off? I guess one unusual thing about New York is how insanely little space they have inside restaurants and so on. Go to a California Starbucks and it's awesome and comfortable. Go to a New York Starbucks (wasn't it even invented there??) and you can't sit anywhere and there are walls all around you. Probably infections just spread way faster in that tightly crammed setting?

Comment by lukas_gloor on Jimrandomh's Shortform · 2020-04-15T21:48:01.286Z · LW · GW

What about allegations that a pangolin was involved? Would they have had pangolins in the lab as well or is the evidence about pangolin involvement dubious in the first place?

Edit: Wasn't meant as a joke. My point is: why did initial analyses conclude that the SARS-CoV-2 virus is adapted to receptors of animals other than bats, suggesting that it had an intermediary host, quite likely a pangolin? This contradicts the story of "bat researchers kept a bat-only virus in a lab and accidentally released it."

Comment by lukas_gloor on Coronavirus: Justified Key Insights Thread · 2020-04-15T18:15:10.712Z · LW · GW

I've looked into this a lot and I agree strongly with this being a good range.

Comment by lukas_gloor on The case for C19 being widespread · 2020-04-13T19:35:16.246Z · LW · GW

Yup. 0.77% is also what I keep stumbling upon when I look into various data points about the IFR! It's my best guess about where Iceland's IFR will end up, and very close to my best guess for proper age adjustment for the Diamond Princess.

Comment by lukas_gloor on The case for C19 being widespread · 2020-04-13T13:26:36.355Z · LW · GW

It's worth noting that the German serology study (it was in the town Gangelt) has been criticized for being poorly presented: https://www.sueddeutsche.de/wissen/heinsberg-studie-herdenimmunitaet-kritik-1.4873480?fbclid=IwAR1mpGCPj21bffeXBe1fGJVeEWc7UlO2DkEP9-XrSCi4sJeh2-Ri_Cahwrw

One point of criticism is that the renowned German experts who were asked to comment on the study say they are skeptical about the antibody tests. They argue that to their knowledge, the only antibody tests widely in use in Germany at the time of the study can't distinguish between SARS-CoV-2 and other coronaviruses responsible for a third of common colds. Because we are 1 month past the peak of cold season, they argue that the 15% could be largely picking up on false positives for SARS-CoV-2.

Comment by lukas_gloor on Why I'm Not Vegan · 2020-04-10T11:35:00.664Z · LW · GW

I agree with this if you're comparing complete veganism to something like "reducing one's former consumption of animal products to <10%." But I'd be interested in discussion of the <10% thing. I don't quite like the framing of "purchasing consistency" for that because it doesn't seem like one gets a lot of moral fuzzies from being "sort of almost close to vegan." And many of the arguments against veganism also apply against the <10% thing. And yet, it feels quite problematic to me to think that I don't want to be the type of person who does the <10% thing. What's that driven by? (Not asking you to reply; I'm just thinking out loud.)

Comment by lukas_gloor on Why I'm Not Vegan · 2020-04-09T23:07:22.388Z · LW · GW
This means I'd rather see someone donate $43 to GiveWell's top charities than see 100 people go vegan for a year.

This is saying something different from "I'm not vegan."

I'm not vegan myself either (anymore), but I would care a lot about the impact of 100 people going vegan, and I could imagine so would a lot of non-rationalist meat eaters. Maybe I'm not factoring in how counterintuitively few entire animals a single person actually eats, and how effective GiveWell charities are by comparison. But on the face of it, this statement feels quite unusual to me.

Edit: I should really have thought about the actual numbers rather than the confounder with money donated to an effective charity. So, according to the post, the comparison is 1 healthy human life year for the following:

  • preventing 80 factory farmed cow years
  • preventing 80 factory farmed pig years
  • preventing 3,300 factory farmed chicken years
  • preventing some percentage of 300 fish years (representing the share of farmed fish rather than wild-caught fish)

I think it's defensible to call this "unusual" but I agree there are many people who would give way higher animal numbers still.

Comment by lukas_gloor on Why I'm Not Vegan · 2020-04-09T13:59:14.132Z · LW · GW
Conditional on animals mattering, how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?

This compares "giving a year of life" to preventing suffering. It's unclear to me whether you're someone who cares unusually little about animals, or whether you're someone who cares unusually much about "giving years of life to self-aware beings that form life plans." Many animal advocates (esp. ones that follow Singer's philosophy) would agree that there's an important difference between human lives and animal lives. But not that there's an important difference about human suffering versus animal suffering.

Comment by lukas_gloor on April Coronavirus Open Thread · 2020-04-08T00:42:21.173Z · LW · GW

If you go through my LW comment history you'll find that I'm in the camp of "The IFR is definitely >0.3%, and very plausibly >0.8%" and that I seem to care somewhat strongly about conveying this to others. :) Maybe you'll find some of the discussions (or links therein) useful. (Unfortunately I can't recommend any single resource that looks super convincing all on its own.)

Edit: By "very plausibly" I mean 25% likely rather than 50% likely. By "definitely" I mean 97% likely.

Comment by lukas_gloor on April Coronavirus Open Thread · 2020-04-08T00:25:19.127Z · LW · GW

Someone in that twitter thread points out that after subtracting false positives, 10% would be the better guess, as opposed to 13-14%. Does that make sense? Then 4 Covid-confirmed deaths per 620 people would be roughly 0.65%.
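
For reference, the standard way to do that subtraction is the Rogan–Gladen correction. A minimal sketch, with placeholder test characteristics (I don't know the actual sensitivity/specificity of the test used):

```python
# Rogan-Gladen correction: adjust apparent seroprevalence for test error.
# true_prev = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
apparent_prev = 0.14   # raw positive rate in the sample
sensitivity = 0.90     # placeholder assumption
specificity = 0.95     # placeholder assumption

true_prev = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
print(f"corrected prevalence: {true_prev:.1%}")  # ~10.6% with these numbers
```

So with plausible error rates, going from 13-14% raw to ~10% corrected is exactly the kind of adjustment one would expect.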

And what about sampling bias? I read that the tests were voluntary. Unless someone was extremely meticulous about trying to get a representative sample, I don't think it's reasonable to treat this as random. It's really quite obvious that people who had flu-like symptoms for a couple of days will be more curious to go among people and have a needle stuck into them.

Comment by lukas_gloor on Would 2009 H1N1 (Swine Flu) ring the alarm bell? · 2020-04-07T17:04:56.747Z · LW · GW
I would also like to investigate this question for MERS, SARS, the 1968 Hong Kong flu, and (as far as it's relevant) the 1918 Spanish flu.

I'd be very interested in analyses of those (esp. if you look at it from the limited perspective people had in the early stages of those outbreaks). I feel like I completely missed it at the time, but the more I hear about SARS-1, the more I feel like the alarm bells should have gone off like crazy (and that probably happened in Asia, but the way I remember it, reporting on SARS in the West felt no different from reporting about bird flu or Swine flu – though probably I didn't pay close attention because I was really young).

Comment by lukas_gloor on Would 2009 H1N1 (Swine Flu) ring the alarm bell? · 2020-04-07T16:56:42.490Z · LW · GW
>The death rate from swine flu was 0.02%, hitting the young harder than the elderly. I count this as a no.

This is not quite the right way of looking at it! I think you'd have to look at what experts thought during the early months of the Swine flu outbreak. I haven't researched this, but I've read that early best estimates of Swine flu fatality were at least a factor of 5 higher than the true infection fatality rate, if not more. (The IMO misguided folks who think the IFR for SARS-CoV-2 could be as low as the flu's constantly point this out, failing to flag that this is far from a universal trend among outbreaks – e.g., early estimates of SARS-1 fatality turned out to be underestimates.)

That said, it seems plausible that even with my proposed adjustment, the numbers would still remain below the thresholds you list under "harm." It depends on how much credence experts put on the higher end of the range during the early months of the Swine flu outbreak. I don't know just how high the highest estimates were that still came from credible experts.

Comment by lukas_gloor on Peter's COVID Consolidated Brief for 2 April · 2020-04-07T09:52:08.079Z · LW · GW

I mostly made my comment to point out that the particular question that's being used as evidence for expert incompetence may have been unusually difficult to get right. So I don't want to appear as though I'm confidently claiming that experts need a lesson on forecasting.

That said, I think some people would indeed become a bit better calibrated and we'd see wider confidence intervals from them in the future.

I think the main people who would do well to join Metaculus are people like Ioannidis or the Oxford CEBM people who sling out these unreasonably low IFR estimates. If you're predicting all kinds of things about this virus 24/7 you'll realize eventually that reality is not consistent with "this is at most mildly worse than the flu."

Comment by lukas_gloor on Peter's COVID Consolidated Brief for 2 April · 2020-04-07T00:58:37.848Z · LW · GW

Metaculus (me included) also did similarly poorly on the question of US case growth. Out of all Metaculus questions, this one was probably the one the community did worst on. Technically expert epidemiologists should know better than the hobbyists on Metaculus, but maybe it's a bit unfair to rate expert competence based on that question in isolation.

What was surprising about it was mostly the testing ramp-up. The numbers were dominated by how much NY managed to increase their testing. I managed to overestimate the number of diagnosed cases in the Bay area, while still heavily underestimating the number of total cases in the US.

This is the relevant Metaculus question: https://www.metaculus.com/questions/3712/how-many-total-confirmed-cases-of-novel-coronavirus-will-be-reported-in-the-who-region-of-the-americas-by-march-27/

If you look at the community median at a similar date to the prediction by expert epidemiologists, it's also off by a factor of 6 or so. (Not sure what the confidence intervals were, but most likely most people got negative points from early predictions.)

(For those interested, the Metaculus user "Jotto" collected more examples to compare Metaculus to expert forecasters. I think he might write a post about it or at least share thoughts in a Gdoc with people who would be interested.)

Comment by lukas_gloor on What is going on in Singapore and the Philippines? · 2020-04-06T13:15:32.678Z · LW · GW
  • Heat and humidity probably slow down the transmission rate, but not enough to make large outbreaks impossible.
  • I could imagine that heat and humidity are especially beneficial for countries during the containment phase (esp. for contact tracing). According to this interview, the virus is inactivated at temperatures of 30 degrees or higher. This could reduce the number of transmissions in settings that are particularly hard to contact trace (public transport, small grocery stores). As long as transmissions happen primarily in air-conditioned buildings or household contexts, contact tracing is much easier. (But perhaps it was doomed from the start, and the heat/humidity only meant it took longer to notice the cases that were being missed.)
  • Singapore and the Philippines seem very different to me in several respects!
  • The Philippines had reported 8 deaths by March 15th already. That's indicative of a large undetected outbreak early on. I know almost nothing about how much testing they've done, but I could imagine that it's not a lot. I could imagine that deaths in the Philippines are vastly underreported even now.
  • By contrast, Singapore definitely seemed to have their outbreak under control initially. I think there's a good chance it could have worked with earlier border closures. They only closed borders on March 24th after several imported cases, primarily from Indonesia.
  • Indonesia (which has also had a hot climate throughout recent events) has one of the highest deaths-to-confirmed-cases ratios worldwide, and that's not factoring in that they may have missed >1,000 deaths already. According to that Reuters article, Indonesia had conducted only about 7,500 tests by April 3rd. By comparison, the UK conducted more tests on April 3rd alone (in a single day), even though its population is roughly a quarter of Indonesia's. Experts had been saying all along that Indonesia not reporting any cases throughout February was extremely suspicious based on travel connections to Wuhan. It seems that they were spot on. I think it's quite likely that Indonesia has >100,000 active cases by now. This suggests to me that dozens of Indonesians must have imported the virus to Singapore before the border closure (though maybe they all underwent temperature checks at the very least, and possibly quarantine?).
  • An alternative hypothesis (or contributing factor at the very least) is that containment failed because Singapore did not recommend mask usage as much as Hong Kong for instance did. Probably that was partly because of limited supplies, though the way it was communicated was similar to CDC communications ("masks don't help unless you're sick"). It seems increasingly likely to me that outbreaks are very hard to contain without widespread usage of masks (South Korea and Hong Kong rely heavily on mask usage – maybe someone could check up on the situation in Taiwan to get more data points on this).
Comment by lukas_gloor on Atari early · 2020-04-02T18:36:25.955Z · LW · GW

Another thing is that the bots never try to exploit opponents. So when there's a bad player at the table playing 95% of their hands, the bot won't capitalize on that, whereas any human professional would make extra money off the bad player. Therefore, the bot's advantage over human professionals is largest when the competition is especially tough.

Comment by lukas_gloor on Atari early · 2020-04-02T08:30:57.749Z · LW · GW
I’m not familiar enough with Poker to say whether any of the differences between Texas Hold’em, Omaha Hold’em and Seven Card Stud should make the latter two difficult if the first is now feasible.

I've played all of these, and my sense is that Seven Card Stud would be relatively easy for computers to learn because it has fixed bet sizings, just like Limit Holdem, which was solved long before No Limit. Some of the cards are exposed in Stud, which creates a new dynamic, but I don't think it should be difficult for computers to reason about that.

Omaha seems like it would be about as difficult as Texas holdem. It has the same sequence of actions and the same concepts. The bet sizings are more restricted (the maximum is determined by the size of the pot instead of no limit), but there are more cards.

As far as I'm aware, none of the top poker bots so far were built in a way that they could learn other variants of poker without requiring a lot of fine-tuning from humans. It's interesting to think about whether building a generalized poker bot would be easier or harder than building the generalized Atari bot. I'm not sure I know enough about Atari games to have good intuitions about that. But my guess is that if it works for Atari games, it should also work for poker. The existing poker bots already rely on self-play to become good.
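
On the self-play point: the workhorse behind the strongest poker bots is counterfactual regret minimization, which in a one-shot game reduces to simple regret matching. Here's a toy sketch on rock-paper-scissors rather than poker (and not any production bot's actual code):

```python
import random

# Regret-matching self-play on rock-paper-scissors. The average strategy
# converges to the Nash equilibrium (1/3, 1/3, 1/3); CFR applies the same
# update at every decision point of a poker game tree.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff[my_move][opponent_move]

def strategy_from(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS for _ in range(2)]
strategy_sums = [[0.0] * ACTIONS for _ in range(2)]

for _ in range(100_000):
    strats = [strategy_from(regrets[p]) for p in range(2)]
    moves = [random.choices(range(ACTIONS), weights=strats[p])[0] for p in range(2)]
    for p in range(2):
        opp_move = moves[1 - p]
        realized = payoff[moves[p]][opp_move]
        for a in range(ACTIONS):
            # Regret for not having played action a against the opponent's move.
            regrets[p][a] += payoff[a][opp_move] - realized
            strategy_sums[p][a] += strats[p][a]

avg = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print([round(x, 3) for x in avg])  # approx. [0.333, 0.333, 0.333]
```

The average strategy converges to the uniform Nash equilibrium, which also illustrates the earlier point about exploitation: an equilibrium strategy is unexploitable but leaves money on the table against a bad player, because it never deviates to punish their mistakes.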