Posts

Moral Anti-Epistemology 2015-04-24T03:30:27.972Z
Arguments Against Speciesism 2013-07-28T18:24:58.354Z

Comments

Comment by Lukas_Gloor on Covid 6/17: One Last Scare · 2021-06-17T22:08:09.913Z · LW · GW

I mentioned it as a consideration, but yeah, I'm probably underestimating the effect of that by a lot, now that I think about it. I wasn't sure how much the US has relied on the J&J vaccine so far, which is also less effective. But it looks like the share is low.

Comment by Lukas_Gloor on Covid 6/17: One Last Scare · 2021-06-17T20:46:39.153Z · LW · GW

Regarding the estimate that Delta is 40% more infectious than Alpha: I've seen 50-60% mentioned a lot in the last couple of days from UK expert sources. If true, this would probably make a big difference to your calculations. 

Comment by Lukas_Gloor on Covid 6/17: One Last Scare · 2021-06-17T20:29:22.128Z · LW · GW

Thus, it looks clear to me that most places in America are going to make it given the additional vaccinations that will take place, but some places with low vaccination rates will fall short. 

I found myself intuitively skeptical about this claim and tried evaluating it via a different line of reasoning than the one you used (but relying on some of your figures). After going through this, I mostly updated toward it being a close race with the vaccinations. Overall, I find it 65% likely that places with roughly average vaccination coverage in the US won't be able to avoid large surges in case numbers (operationalized as either lockdowns or really strong new restrictions, or 3% of unvaccinated people infected at the same time). (This could be compatible with your estimates, because the death rate in well-vaccinated areas would still be relatively low if vaccination uptake is high among the elderly.) What seems very clear is that locations with below-average vaccination coverage will be in trouble.

My approach and estimates:

I think a crude lower bound for when you get the Delta variant under control is when you have a substantially larger percentage of the population vaccinated than the UK currently has. (Because R is 1.1-1.35 in the UK now, and that's before the full reopening.)

Current vaccination percentages for the UK (all age groups):
63.3% first dose
46.0% second dose

Current vaccination percentages for the US (all age groups, I think): 
52.7% first dose
44.1% second dose

You say there's about 25% Delta variant in the US now.

5 weeks ago, I commented that the UK had >50% Delta variant in some areas. With a doubling time of roughly 11 days in the UK, it must have been at 25% roughly 7 weeks ago. So, if current infection levels in the US are comparable to what they were in the UK 7 weeks ago, the US is roughly 7 weeks behind the UK timeline.

7 weeks ago, the UK was reporting around 2k Covid cases (with a population of 66 million). The US population is 5x larger, and the US is presently reporting around 13k cases – similar enough (5 × 2k = 10k vs. 13k). Therefore, I'm going to operate under the assumption that the US is "7 weeks behind the UK timeline."

The situation in the UK is concerning and still getting worse, but case numbers are substantially below the previous peaks. I'd say the UK is about 3-4 weeks away from things getting very bad.

By that reasoning, the US has roughly 10 weeks to get R below 1 for the Delta variant.
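
For anyone who wants to fiddle with the numbers, here's the arithmetic above as a small Python sketch. All inputs are the rough figures I quoted; nothing here is new data, and the rounding is as crude as in the prose:

```python
# Back-of-envelope version of the timeline argument above.
doubling_time_days = 11             # Delta-share doubling time in the UK
weeks_uk_at_50pct = 5               # UK was at >50% Delta ~5 weeks ago
# One halving of the Delta share back in time = one doubling time:
weeks_uk_at_25pct = weeks_uk_at_50pct + doubling_time_days / 7  # ~6.6, call it 7

# Sanity check: per-capita case levels, UK then vs. US now
uk_cases_then, uk_pop = 2_000, 66e6
us_cases_now, us_pop = 13_000, 5 * 66e6
print(uk_cases_then / uk_pop, us_cases_now / us_pop)  # ~3.0e-05 vs. ~3.9e-05

# The UK looks ~3-4 weeks away from things getting very bad, so:
weeks_for_us_to_get_R_below_1 = round(weeks_uk_at_25pct) + 3    # ~10 weeks
```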

You say,  "Currently [the US] are vaccinating about 1% of people each week." 

I'm assuming that's both doses?

Continuing with that, in 10 weeks, the US should have the following vaccination percentages:

62.7% first dose
54.1% second dose

And here the present UK numbers again:

63.3% first dose
46.0% second dose

The UK is not fully reopened yet, and R is at 1.1-1.35. Most UK experts are pessimistic about things getting better anytime soon, despite vaccinations progressing quite quickly.

That said, the second dose may matter more than the first dose, especially if the first dose is AstraZeneca. So, 54% second dose instead of 46% should make quite a large difference. I think (?) the US also relies slightly more on Pfizer and Moderna than the UK, which should add a bit of extra protection. Summer temperatures also help. But is all of this enough to put R below 1 (for the Delta variant, specifically) early enough?

Also, some US states may go ahead with the full reopening now, in which case they'll have less than the projected 10 weeks until they catch up with the UK timeline.

Then again, there's room for the vaccinations to speed up (the vaccination rate was higher at some points in the past).

Note that my definition of "large surges in case numbers" isn't necessarily that bad. 3% of unvaccinated people infected – the UK is almost there already, and deaths are extremely low because the unvaccinated people are mostly really young.

Update: I'm realizing that country-wide infection counts are driven mostly by the places with the worst vaccination uptake, so a location with an average vaccination rate wouldn't be hit as badly as the country-wide average infection rate suggests. This means I'd now change my operationalization to something like "worst 25th percentile." And maybe make it 60% instead of 65%.

Comment by Lukas_Gloor on Covid vaccine safety: how correct are these allegations? · 2021-06-15T19:19:02.750Z · LW · GW

One obvious candidate explanation: For the reason you explain in the letter to your dad – probably those deaths were roughly what you'd expect among the vaccinated demographic if the vaccine is benign. By contrast, the specific blood clots are generally rare. 
 

Comment by Lukas_Gloor on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-14T13:55:35.411Z · LW · GW


If you believe that we should expect a certain number of side-effect reports even if there's no issue with the vaccine (and reacting to them would mislead System 1), how many reports of significant side-effects do you think we should expect?

Are you asking for effects that show up after 3 days (and then don't go away), or anything bad that happens sometime within a couple of months after getting vaccinated? 

If it's the latter, then among 1,000 people you might expect a few to have weird health issues show up without an obvious cause. I'd be surprised and pretty concerned if someone died in that interval (from non-obviously traceable causes), but if it was just a handful of issues of the severity of "developing a kind of serious new allergy" or "developing heart rhythm issues," that could be entirely expected (though I haven't studied the frequencies).

With miscarriages, for instance, apparently "1 in 8 pregnancies end in miscarriage" – so out of a large enough pool, you have to expect that someone had a miscarriage in the last couple of months, etc. 
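
To make that base-rate logic concrete, here's a toy calculation. Only the 1-in-8 figure is from above; the share of the group that's pregnant over the relevant window is a made-up illustrative number:

```python
# Expected miscarriages in a group of 1,000 vaccinated people, assuming the
# vaccine has zero causal effect. The 5% pregnancy share is hypothetical.
n_people = 1_000
share_pregnant = 0.05           # assumed, for illustration only
miscarriage_rate = 1 / 8        # "1 in 8 pregnancies end in miscarriage"
print(n_people * share_pregnant * miscarriage_rate)  # ~6 expected anyway
```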

--

My father, mom and brother had no significant side effects after two doses of either Pfizer or Moderna. I'm only 3 days after my first dose of Pfizer; no side effects so far.

My father's a GP and seemed happy that I'm getting vaccinated – though he did say that it's possible for young people to have "kind of scary 2 days," side-effect-wise.

(I also have friends in the community who didn't have issues from first doses, but I was focusing on people outside it.)

Comment by Lukas_Gloor on AGI in a vulnerable world · 2021-06-07T18:55:24.564Z · LW · GW

I guess you are more optimistic than me about humanity. :) I hope you are right!

Out of the two people I've talked to who considered building AGI an important goal of theirs, one said "It's morally good for AGI to increase complexity in the universe," and the other said, "Trust me, I'm prepared to walk over bodies to build this thing."

Probably those weren't representative, but this "2 in 2" experience does make me skeptical about the "1 in 100" figure.

(And those strange motivations I encountered weren't even factoring in doing the wrong thing by accident – which seems even more common/likely to me.) 

I think some people are temperamentally incapable of being appropriately cynical about the way things are, so I find it hard to tell whether non-pessimistic AGI researchers (of whom there are admittedly many within EA) happen to be like that, or whether they accurately judge that people at the frontier of AGI research are unusually sane and cautious.

Comment by Lukas_Gloor on Looking for reasoned discussion on Geert Vanden Bossche's ideas? · 2021-06-06T23:57:37.066Z · LW · GW

As an example, look at E484K - this mutation changes the amino acid polarity, so that antibodies trained against the E variant will have a much harder time attaching to the K variant.  If an antibody fails to attach, it doesn't 'crowd out' anything.

That makes sense; I was wondering about this exact thing. It seems like VB is painting a worst-case scenario where a bunch of things go wrong in a specific way. Perhaps not impossible, but based on what you're saying, there's no reason to be unusually concerned. 

Comment by Lukas_Gloor on Looking for reasoned discussion on Geert Vanden Bossche's ideas? · 2021-06-06T21:51:45.141Z · LW · GW

Nice, that's reassuring. I assumed that claim (2) was basic immunology because he was talking about it so confidently, but at the same time, I noticed confusion about the lack of precedents where outdated antibodies (from previous infection or outdated flu vaccines) cause complications. It seems like immunologists think his views on (2) are outlandish – in which case, "case closed, nothing to see here."

Edit: On the other hand, reading this blogpost makes me think that the mechanism Vanden Bossche proposes is plausible at least in theory. But the Nature blogpost also discusses why targeting the spike protein in particular was a good idea:

Targeting the Spike protein is another big benefit that we got from the earlier SARS work; which suggested that (for example) targeting the Nucleocapsid (N) protein was riskier. With the Spike, you put the virus in an evolutionary tight spot: evading the antibodies while trying not to lose the ability to bind to the human ACE2 protein. So far, that looks like too narrow a path for the virus to stumble through.

So far, that all seems right and the vaccines continue to be functional enough to neutralize even the most vaccine-resistant variants.

Comment by Lukas_Gloor on Looking for reasoned discussion on Geert Vanden Bossche's ideas? · 2021-06-06T21:21:35.730Z · LW · GW

At 1:40:22, he claims that we see young people without risk factors getting severe Covid for basically the first time only now, because they get infected in the time window right after vaccination when their antibodies are immature and only serve to crowd out innate antibodies. 

That claim sounds dubious to me! Firstly, there's always been a (small) risk for young people to develop severe Covid. The way both he and Weinstein talk about "young people are already immune" seems a bit dumb to me. Secondly, if getting infected right after vaccination increases a young person's risk, that would show up in the data. But no one is talking about this yet. Am I missing something? 

Comment by Lukas_Gloor on Looking for reasoned discussion on Geert Vanden Bossche's ideas? · 2021-06-06T20:49:04.361Z · LW · GW

And it just so happens that this is exactly what we were doing prior to vaccine rollout.

I agree: if all of this were only about argument (1), then it's clear that the ongoing mass vaccinations are best.

But Vanden Bossche wants us to look at both arguments together, (1) and (2). His point is that having the antibodies from an outdated vaccine will soon be bad for you, because the types of antibodies set off by the vaccine will "get in the way" of innate antibodies. 

Are you specifically saying that it takes too long for viral evolution to escape vaccine-generated antibodies so much that they go from "suboptimally useful" to "actually harmful because they get in the way?" I think that's plausible based on the observation that every vaccine in circulation so far is overwhelmingly net positive to have, and we've already vaccinated 50%+ of the population (at least in some fortunate countries) and could continue "keeping up" with booster shots. So all of that makes sense and makes me feel reassured. 

However, I wonder if we're maybe underestimating the difference in selection pressure between "virus evolves in an unvaccinated population" and "virus evolves in a population vaccinated with an outdated vaccine." The Delta variant evolved in India when few people there were vaccinated. Somewhere there (or in the vicinity, e.g., Nepal), it apparently acquired a mutation that's been studied in the Beta variant, which gives the virus better immune escape. This looks like somewhat fast virus evolution already, and the selective pressures will get even stronger. The UK has the Delta+ ("Nepal") variant already, and is reopening the economy. The selection pressure will strongly favor mutations that make the vaccine-generated antibodies less useful. Vanden Bossche is saying that the antibodies are targeted at the virus in a fragile way, so that once you dial up the selection pressure for vaccine escape, it could happen quickly. Therefore, I worry that the argument "virus evolution has been too slow so far" is not watertight, because the selection pressure for the specific thing he's most worried about (vaccine-generated antibodies becoming a hindrance soon enough) is going to be much stronger in the near future than it ever was. Did you consider all of that in your assessment?

(Even if it's true that selection pressure will increase, it seems like Vanden Bossche can't be confident that the increase will be strong enough. So what he describes is only a possibility that depends on parameters of virus evolution.) 

Comment by Lukas_Gloor on Looking for reasoned discussion on Geert Vanden Bossche's ideas? · 2021-06-06T19:46:19.696Z · LW · GW

I found this video interesting and quite concerning. 

Vanden Bossche makes two arguments: 

(1) The ongoing mass-vaccination campaigns are poorly timed. We started vaccinating right about when lots of concerning new variants were showing up independently in different locations, suggesting that SARS-Cov-2 is quick at evolving. The vaccines are targeting outdated variants, and some vaccines are already only partly effective. This creates the perfect conditions for further viral evolution. Therefore, we should expect immune escape really soon. Booster shots may help temporarily, but that's not a good solution because you're always a step behind, and if the outbreak isn't under control at any point, you just keep pressuring the virus to evolve, thereby making it better at evading antibodies.

(2) There are two types of immunity: 'innate immunity,' which is based on undiscriminating antibodies, and acquired immunity, which you get from the vaccines (or from having had the virus previously). Innate immunity is why young people do very well against the virus. Now, when you give people specific antibodies from the vaccines, those antibodies will still bind to the virus, but they won't neutralize it. They will be useless, but they'll crowd out the less discriminating antibodies from innate immunity, the ones that would actually work against the virus. This way, vaccinations could end up harmful.

My impression is that his points in (1) seem undoubtedly accurate and pretty scary, but I think it's plausible that updated vaccine shots will be made and distributed quickly enough to at least keep things under control (similar to influenza each year). Besides, I don't see a good alternative, given that trying to eradicate the virus globally requires an unrealistic degree of willingness and coordination.

I lack the expertise to judge his arguments in point (2), but something Vanden Bossche says at 1:08:20 in the video makes me think his mind is ideologically clouded. He talks about 'natural immunity' in this hyped way and suggests that 80-85% of people "don't get any symptoms." I think that's just false – asymptomatic infection is <50% with Covid. Given that his entire argument rests on understanding innate immunity, and that he gets a central fact about it wrong in a way that suits his biases, I suspect he may not be right about these concerns.

Of course, people can be wrong about some details and still be right about the general picture. I do think the mechanism he proposes sounds at least plausible to my lay ears. In particular, I think the situation "mass vaccination campaign during a global pandemic against a fast-mutating virus" is quite unprecedented, so it's not crazy to think that policy makers may not be thinking about virus evolution and immunity mechanisms in fine enough detail to realize that they're creating a dangerous mix of circumstances. 

One thing I'm skeptical about: If his concern about working antibodies being crowded out were correct, wouldn't we see instances where flu vaccines end up harming people, because they'd also crowd out antibodies from innate immunity? But this is basically never the case, no? If you gave an outdated flu vaccine to a young person, they wouldn't do worse against the current flu virus, would they? That's another reason why I'm skeptical, though I don't understand anything about the specifics of the immune system.

Similarly, what about previous infection? If someone got infected by the original Covid variant in 2020 and then got reinfected with some future evolved Covid variant that's very good at evading previous antibodies, it seems like Vanden Bossche's model would predict that they'd do worse than if they had never had a previous Covid infection. Would we actually see that in reality? So far, antibodies seem to always be good to have.

Comment by Lukas_Gloor on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-05T05:49:41.298Z · LW · GW

I also thought I was reading SSC / the new thing. 

Comment by Lukas_Gloor on Covid 6/3: No News is Good News · 2021-06-04T19:28:19.848Z · LW · GW

There are also some concerns that the Delta variant picked up an additional mutation that helps it circumvent vaccines. (And even if it didn't happen yet, with many people already vaccinated but Delta cases growing rapidly in many places, it's just a matter of time until virus evolution gets there. But there are booster shots being tested already.) 

Comment by Lukas_Gloor on If You Want to Find Truth You Need to Step Into Cringe · 2021-06-02T10:23:53.730Z · LW · GW

According to my intuitions about cringiness, it's more about how people say things than what they say.  E.g., discussions on inter-group differences in IQ are frequently really cringy when they happen on some culture war subreddit, but they can be fine (to my ears) when it's Sam Harris talking to a guest on his podcast. 

I guess you might reply that this effect is just: Sam Harris has a professional podcast and is already established, whereas redditors will seem like social outcasts when they discuss the same ideas? But I don't think that's what's going on. I feel like it's mostly the way a topic is addressed (framed, put into appropriate context, interpreted), and if I took the time I could point out various reasons why I think the reddit discussions are cringy. (Here's a list of things to get started.)

I'd say you can always say true and important things without sounding cringy! (According to my cringiness intuitions, that is.)

Comment by Lukas_Gloor on Deliberately Vague Language is Bullshit · 2021-05-14T10:24:06.970Z · LW · GW

Vague language (and low communication more generally) also gives you plausible deniability for bending the truth.

Related: It's a common feature of Machiavellianism to "keep one's cards hidden" (12:25 here), i.e., not disclosing motives behind one's actions and generally communicating little information. 

People without anything to hide can build trust by communicating a lot and clearly.

Comment by Lukas_Gloor on The case for hypocrisy · 2021-05-13T21:40:40.731Z · LW · GW

The OP, as well as the other hypocrisy-favorable posts linked by Abram here in the comments, seems to do a poor job IMO of describing why anti-hypocrisy norms could be important. Edit: Or, actually, it seems like they argue in favor of a slightly different concept, not what I'd call "hypocrisy."

I like the definition given in the OP:

1. a feigning to be what one is not or to believe what one does not : behavior that contradicts what one claims to believe or feel

The OP then describes a case where someone thinks "behavior x is bad," but engages in x anyway. Note that, according to the definition given, this isn't necessarily hypocrisy! It only constitutes hypocrisy if you implicitly or explicitly lead others to believe that you never (or only very infrequently) do the bad thing yourself. If you engage in moral advocacy in an honest, humble or even self-deprecating way, there's no hypocrisy. 

One might argue (e.g., Katja's argument) that it's inefficient to do moral advocacy without hypocrisy. That seems like dubious naive-consequentialist reasoning. Besides, I'm not sure it's empirically correct. (Again, I might be going off a subtly different definition of "hypocrisy.") I find arguments most convincing when the person making them seems honest and relatable. There are probably target audiences to whom this doesn't apply, but how important are those target audiences (e.g., they may also not be receptive to rational arguments)? I don't see what there is to lose by not falsely pretending to be a saint. The people who reject your ideas because you're not perfect were going to reject your ideas anyway! That was never their true rejection – they are probably just morally apathetic / checked out. Or trolls.

The way I see it, hypocrisy is an attempt to get social credit via self-deception or deceiving others. All else equal, that seems clearly bad.

I'd say that the worst types of people are almost always extreme hypocrites. And they really can't seem to help it. Whether it's deceit of others or extreme self-deception, seeing this stuff in others is a red flag. I feel like it muddies the waters if you start to argue that hypocrisy is often okay.

I don't disagree with the view in the OP, but I don't like the framing. It argues not in favor of hypocrisy as it's defined, but of something in the vicinity.

I feel like the framing of these "pro-hypocrisy" arguments should rather be "It's okay to not always live up to your ideals, but also you should be honest about it." Actual hypocrisy is bad, but it's also bad to punish people for admitting imperfections. Perversely, by punishing people for not being moral saints, one incentivizes the bad type of hypocrisy. 

tl;dr hypocrisy is bad, fight me. 

(As you may notice, I do have a strong moral distaste for hypocrisy.) 

Comment by Lukas_Gloor on Covid 5/13: Moving On · 2021-05-13T16:34:17.359Z · LW · GW

In the UK there's evidence that the Indian variant (".2") is spreading rapidly in the population, outcompeting the UK variant. It may have reached >50% in some areas, probably including London. This could mess somewhat with the indoor reopening plans for next week, though given that the government mostly seems concerned with keeping hospitals from being overwhelmed, and that's now easy to achieve with all the vaccinations, it could be that indoor stuff will be allowed despite relatively high and climbing infection levels. (The levels are still very low right now, but if the ".2" variant is as contagious as it seems it might be, this could change really quickly and lead to massive spikes.)

Comment by Lukas_Gloor on What weird beliefs do you have? · 2021-05-06T20:19:07.398Z · LW · GW

Anti-realism is not quite correct here, it's more that claims about external reality are meaningless as opposed to false. 


This is semantics, but I'd say what you're describing fits the label "anti-realism" perfectly well. I wrote a post on Why Realists and Anti-Realists disagree. (It also mentions existence anti-realism briefly at the end.)
 

Comment by Lukas_Gloor on Your Dog is Even Smarter Than You Think · 2021-05-02T10:53:14.867Z · LW · GW

This raises the natural question: what if you gave an ape the buttons, and taught it from childhood, and put parent-level effort into it, not "70s research”-level effort? Perhaps the answer would surprise us.

The bonobo Kanzi had something very similar ("lexigrams"). And his sister Panbanisha was born in the research center and grew up with the lexigrams. As far as I'm aware, the research never generated extreme attention, so probably the findings remained somewhat limited?

Bunny is quite obsessed over her bowel movements (how Freudian) and about her owners' poop cycle.

As a youtube comment on the video points out, maybe the dog is just trying to be polite by imitating the conversation topic of its family. People probably ask their dog all the time about whether the dog needs or wants to go potty. 


 

Comment by Lukas_Gloor on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2021-05-02T09:30:06.650Z · LW · GW

I feel like you can turn this point upside down. Even among primates that seem unusually docile, like orangutans, male-male competition can get violent and occasionally ends in death. Isn't that evidence that power-seeking is hard to weed out? And why wouldn't it be, in an evolved species that isn't eusocial or otherwise genetically weird?
 

Comment by Lukas_Gloor on Monastery and Throne · 2021-04-27T05:10:38.194Z · LW · GW

Now you're moving the goalposts. Of course you can find places that didn't need lockdowns. I thought your position was that lockdowns were almost never worth it anywhere. If your position is just "some locations didn't need lockdowns (e.g., the ones where governments decided not to impose them)" – that's extremely different. Whether lockdowns make sense has to be assessed case by case, because the virus (and new variants of concern) affected different locations differently.

In your other comment, you attribute a claim to me that I haven't made ("you have provided zero support for your own claim that lockdowns do more good than harm"). All I did was say that I'm already skeptical because you were making the opposite claim with extremely poor and flawed arguments; I didn't say I confidently disagreed with your conclusion. Pointing out the favorable mention of Ioannidis's 0.15% IFR estimate isn't "nitpicking of your evidence." It's damning that you rely on a source that does this – the estimate is off by a factor of three to seven. After more than a year of the pandemic, you simply cannot be off about the IFR by this much without it reflecting poorly on you. If someone (the person you were citing/recommending) writes an entire report on how bad lockdowns are but thinks the virus is at least three times less deadly than it actually is, this person seems incompetent, and I cannot trust their reasoning enough to buy into the conclusion.

I will drop out of this discussion now.

Comment by Lukas_Gloor on Covid 4/22: Crisis in India · 2021-04-23T12:50:58.456Z · LW · GW

Seconded. The situation in India looks worse than, but kind of comparable to, the rapid spikes in South Africa and the UK when new variants arose there. In both of those cases, the strong reaction induced by the threatening situation led to things stabilizing. It's true that things might be worse for India, but 95% seems really quite high. Maybe you have a detailed model of why the situation is much different and worse in India now? If so, I'd be curious about the reasoning. (JTBC, I also think it's likely that things will be quite bad, but I don't immediately see why >60% for a worst-case scenario seems obviously warranted. There's a chance that if I looked into this for 2h or heard some convincing arguments, I'd also update to >90%.)

Comment by Lukas_Gloor on Monastery and Throne · 2021-04-22T09:30:51.494Z · LW · GW

The report you're linking to contains this:  

>Estimates of the IFR have continued to fall over the year. The latest meta-study by Ioannidis (March 2021) estimates the average global IFR at 0.15%.

That's completely off, and so obviously and indefensibly so that it discredits the entire thing, IMO. Maybe there are economic arguments that suggest that alternatives to lockdown could be better, but it would be irresponsible to update on that based on arguments made by a person who cites Ioannidis's IFR estimates favorably. Ioannidis is a crackpot when it comes to Covid. It's ironic that you write "This image is a good example of how distorted pro-lockdown arguments are."

I looked into IFR estimates quite a lot while following Covid, and I won a large forecasting tournament (and got 3rd in the year-long version): https://forum.effectivealtruism.org/posts/xwG5MGWsMosBo6u4A/lukas_gloor-s-shortform?commentId=ZNgmZ7qvbQpy394kG

Also, I've always wanted to know how the anti-lockdown side justifies letting hospitals get so overwhelmed that people die of appendicitis – basic health care collapsing for at least 2 weeks. Do we really want that if it's avoidable? How would anyone feel as a doctor, nurse, caretaker, etc., if the government expects them to do triage under insane conditions when it's totally avoidable? The anti-lockdown side has to engage with that argument. If you say the IFR is low enough that hospitals wouldn't get overwhelmed without lockdowns, that's simply not true, and you're engaging in wishful or ideologically clouded thinking. I'm open to arguments that we should accept a breakdown of civilization for 2+ weeks (and probably several times) if [edit] the "more hidden" consequences are extremely catastrophic otherwise, but then one has to be honest about the costs of a no-lockdown policy.

Edit to add: It's a strawman that policymakers compare lockdown to "do nothing." And by now, even the people who initially got it wrong have understood that there are control systems, that many people will stop taking risks as they read about hospitals being overwhelmed. However, there's a 2-week lag from infections to the peak of hospital overwhelm and if the government isn't strict enough, you overshoot things really quickly. It can happen extremely fast. You cannot assume that people will always time their behavior the correct way to anticipate hospital overstrain that's 2 weeks ahead. That's what government is for. 

Comment by Lukas_Gloor on All is fair in love and war, on Zero-sum games in life · 2021-04-17T06:34:26.259Z · LW · GW

I don't think status is a zero-sum game. Some people may play it as such, unfortunately. But some ways to increase your social standing also confer benefits on others without anyone losing out. By being kind and considerate (as well as knowledgeable, competent, etc.), you can notice people's good qualities and confer status on others, flattening the status hierarchy and making it more multi-dimensional (making sure different types of talents get noticed).

It also depends on what kind of status you're after. If you care more about the approval of people with depth and good character, that's easier to achieve in ways that build others up than if you care primarily about the most shallow metrics of status.

Comment by Lukas_Gloor on Training the YouTube Algorithm · 2021-04-15T21:42:55.736Z · LW · GW

I did this and it worked really well. I spent maybe 3h on the training initially, until it was mostly just showing me music. Then I clicked away the occasional non-music video suggestion for a couple of days, until the music-only preference was completely locked in. I feel like I don't get other suggestions anymore (and the habit of clicking them away is still installed anyway).

When I want to watch other youtube videos, I use incognito mode (unfortunately that disables the adblocker). 

Comment by Lukas_Gloor on Anna and Oliver discuss Children and X-Risk · 2021-02-27T10:48:36.403Z · LW · GW

I'm also curious if Oliver or Anna think there's a difference between EA longtermist endeavors vs. the reference class you've drawn from ("scoring very highly on broadly accepted metrics of success"), and if so, how that difference manifests itself for having children. 

Comment by Lukas_Gloor on How Should We Respond to Cade Metz? · 2021-02-15T00:16:32.519Z · LW · GW

Good points. "How should we respond" is also a strange framing IMO because it unquestioningly assumes that there's a need to coordinate as a community (on Lesswrong of all places, which isn't even a Scott-themed reddit or the commenters on his blog). Personally, I think any coordination around this sort of thing is pretty weird, and people should just do what they think they should do (and maybe that includes someone writing a personal post on why they want to boycott the newspaper, in the hope of inspiring others, etc.).

Comment by Lukas_Gloor on Covid 2/11: As Expected · 2021-02-12T13:30:54.477Z · LW · GW

The fact that the leaderboard has someone with a billion points, because they have been participating for years, is kind-of irrelevant, and misleading.


There are many leaderboards, including ones that only consider questions that opened recently. Or tournaments with a distinct start and end date. 
 

(And this would do a far better job aligning incentives on questions than the current leaderboard system, since for a leaderboard system, proper scoring rules for points are not actually incentive compatible.)


This is true, but you can create leaderboards that minimize the incentive to use variance-increasing strategies (or variance-decreasing ones if you're in the lead). (Basically just include a lot of questions so that variance-increasing strategies will most likely backfire, and then have gradually increasing payouts for better rankings.) 
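
To illustrate why piling on questions blunts variance-increasing strategies, here's a toy simulation. It's my own sketch with made-up numbers and a plain log score, not Metaculus's actual scoring system; `deficit_per_q` (the lead the front-runner starts with per question) is a hypothetical parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_log_score(report, p_true=0.7):
    """Expected log score for reporting `report` on a Bernoulli(p_true) question."""
    return p_true * np.log(report) + (1 - p_true) * np.log(1 - report)

# Truthful reporting maximizes expected score on any single question:
print(expected_log_score(0.7), expected_log_score(0.99))  # -0.61 vs. -1.39

def p_overtake(n_questions, deficit_per_q=0.2, p_true=0.7, n_sim=100_000):
    """Chance that an all-in extreme reporter (0.99 on every question) ends up
    beating a truthful leader who starts deficit_per_q points per question ahead."""
    outcomes = rng.random((n_sim, n_questions)) < p_true
    scores = np.where(outcomes, np.log(0.99), np.log(0.01)).sum(axis=1)
    leader_total = (expected_log_score(0.7) + deficit_per_q) * n_questions
    return (scores > leader_total).mean()

for n in (1, 10, 50):
    print(n, p_overtake(n))  # ~0.7, then ~0.03, then ~0.0002
```

With one question, the long-shot strategy overtakes the leader 70% of the time; with 50 questions, it essentially never does – which is the intuition behind "just include a lot of questions."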

I agree that what you describe sounds ideal, and maybe it makes sense for Metaculists to think of the points in that way. For making it a reality, I worry that it would cost a lot. (And you'd need a solution to the problem that everyone who wants a few extra dollars could create an account and predict the community median on every question, just to get some fraction of the total prize pool.)

Comment by Lukas_Gloor on Covid 2/11: As Expected · 2021-02-12T08:15:35.856Z · LW · GW

Yes, but it doesn't take much time to just predict the community median when you don't have a clue about a question and don't want to take the time to get into it. However, as another commenter points out, this means that Metaculus rewards a combination of time put in + prediction skills, rather than just prediction skills.

Comment by Lukas_Gloor on Covid 2/11: As Expected · 2021-02-11T22:13:19.328Z · LW · GW

Metaculus points are not money, so positive points on a question don't mean you're a top predictor. However, they aren't meaningless either. It's about winning MORE points than the competition to win on the leaderboards. The incentive system is good for that (though there are some minor issues with variance-increasing strategies or questions with asymmetrical resolution timelines).

Comment by Lukas_Gloor on What if we all just stayed at home and didn’t get covid for two weeks? · 2021-01-22T16:35:39.071Z · LW · GW

Building infrastructure and setting up preparations for doing this thoroughly could be an interesting safeguard against future pandemics worse than Covid. But I think there's a big problem with continuing to run hospitals and care-taking facilities, and care-taking in general.

Comment by Lukas_Gloor on What to do if you can't form any habits whatsoever? · 2021-01-10T07:48:32.773Z · LW · GW

I'm similar and haven't found anything that works well. Reading how most EAs talk about their self-improvement "life hacks" always makes me think "fuck you, lol." I constantly alternate between periods where I'm trying lots of good routines at once and am somewhat productive, and periods where things have fallen apart and I'm unproductive. In my experience, most of the leverage comes from trying to reduce the difference between these two states by not punishing myself for falling off the wave, i.e., getting right back into the attempts after a bad day or five. And if I'm on the wave, I try to be extra cautious about avoiding things that could derail me.

I took time off from work late last year for personal reasons and used the opportunity to start some deeper-reaching attempts at mindset improvement based on CBT, visualizing my ideal day, and so on. I'm about to start schema therapy. Ideally I'd do the exercises daily but that's already challenging for obvious reasons. I haven't noticed any productivity improvements so far but I'm at least feeling better about myself.

Comment by Lukas_Gloor on Morality as "Coordination", vs "Do-Gooding" · 2020-12-30T22:27:58.158Z · LW · GW

I agree. I think of myself as a utilitarian in the same subjective sense that I think of myself as (kind of) identifying with voting Democrats (not that I'm a US citizen). I disagree with Republican values, but it wouldn't even occur to me to poison a Republican neighbor's tea so they can't go voting. Sure, there's a sense in which one could interpret "Democrat values" fanatically, so they might imply that I prefer worlds where the neighbor doesn't vote, where then we're tempted to wonder whether ends do justify the means in certain situations. But thinking like that seems like a category error if the sense in which I consider myself a Democrat is just one part of my larger political views, where I also think of things in terms of respecting the political process. So, it's the same with morality and my negative utilitarianism. Utilitarianism is my altruism-inspired life goal, the reason I get up in the morning, the thing I'd vote for and put efforts towards. But it's not what I think is the universal law for everyone. Contractualism is how I deal with the fact that other people have life goals different from mine. Nowadays, whenever I see discussions like "Is classical utilitarianism right or is it negative utilitarianism after all?" – I cringe. 

Comment by Lukas_Gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-27T11:34:01.737Z · LW · GW

So the emerging wisdom is that the SA variant is less contagious, or are you just using 20% as an example? The fact that SA is currently at the height of summer, and that they went from "things largely under control" to "more hospitalizations and deaths than the 1st wave in their winter" in a short amount of time, makes me suspect that the SA variant is at least as contagious as the UK variant. (I'm largely ignoring politicians bickering at each other over this, and of course if there's already been research on this question then I'll immediately quit speculating!) 

Comment by Lukas_Gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T18:06:25.268Z · LW · GW

It could be the time lag from when antibody-based plasma therapy (if that makes sense, I'm not even sure that's how it works) started to be used somewhat widely, plus the time it takes for a new variant to spread enough to get noticed. 

Comment by Lukas_Gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:56:59.052Z · LW · GW

Conditional on a 4th wave in the US happening in 2021, I wonder if it's >20% likely that it's going to be due to a variant that evolved on US soil. 

Comment by Lukas_Gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:53:43.639Z · LW · GW

Why are we seeing new variants emerge in several locations independently in a short time window? Is it that people are looking more closely now? Or does virus evolution have a kind of "molecular clock" based on law of large numbers? Or is the "clock" here mostly the time it takes a more infectious variant to become dominant enough to get noticed, and the count started whenever plasma therapy was used or whatever else happened with immunocompromised patients? Should we expect new more infectious variants to spring up all over the world in high-prevalence locations in the next couple of weeks anyway, regardless of whether the UK/SA/Nigeria variants made it there via plane? 

Comment by Lukas_Gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T16:56:21.877Z · LW · GW

To be clear, I don't mean to take a stance on how much more transmissible it is exactly, 33% or 65% or whatever. I think it's 85% likely that it's a difference that's significant enough to affect things, but it's less clear whether it's significant enough that previous containment strategies become vastly more costly or even unworkable. 

Comment by Lukas_Gloor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T16:38:35.166Z · LW · GW

I looked into things a bit and think it's 85% likely that the new variants in the UK and SA are significantly more transmissible, and that this will lead to more severe restrictions globally in the next few months, because there's no way they aren't already in lots of places. I also think there's a 40% chance the SA variant is significantly more deadly than previous variants, but I'm not sure if that means a 50% higher IFR or 150% higher (I have no idea what prior to use for this).

Update December 26th: The longer we hear no concerning news about the lethality of the SA variant, the more likely it is that it's indeed benign and that initial anecdotal reports of it being surprisingly aggressive in young-ish people without comorbidities were just rumours. Right now I'm at 20% for it being significantly more deadly, and it's falling continuously. 

Comment by Lukas_Gloor on Draft report on AI timelines · 2020-11-08T08:04:26.132Z · LW · GW

This is a separate point from yours, but one thing I'm skeptical about is the following: 

The Genome Anchor takes the information in the human genome and treats it as a kind of compression of brain architectures, right? That doesn't seem right to me. By itself, a genome is quite useless. If we had the DNA of a small dinosaur today, we probably couldn't just use ostriches as surrogate mothers. The way the genome encodes information is tightly linked to the rest of an organism's biology, particularly its cellular machinery and the hormonal environment in the womb. The genome is just one half of the encoding, and if we don't get the rest right, it all gets scrambled.

Edit: OK, here's an argument for why my point is flawed: Once you have the right type of womb, all the variation in a species' gene pool can be expressed phenotypically from just one womb prototype. This suggests that the vast majority of the information is indeed in the genome.

Comment by Lukas_Gloor on How should one deal with life threatening infections or air planes? · 2020-10-29T13:38:05.099Z · LW · GW

Effective altruism.

Comment by Lukas_Gloor on Does playing hard to get work? AB testing for romance · 2020-10-29T12:59:19.481Z · LW · GW

I just don’t feel comfortable if I act.


I think that's a great trait to have and I'd strongly recommend keeping it. If you can find enough things you like about yourself (and maybe have also worked on yourself to that end), you can acquire genuine confidence this way – confidence that feels far more robust than acting.

Maybe you've thought about this already, but I'd flag that some people (and more women than men) don't themselves compartmentalize so much between "just sex" and "romance". Humans have some degree of sexual dimorphism around attraction (e.g., "demisexuality" is rare among men but not that uncommon among women). So, the habit you mention and the way you phrase it might substantially decrease the pool of otherwise compatible partners. 

With the phrasing, I'd be worried that what many people might take away from your paragraph is not so much "This person cares about avoiding situations where they'd be incentivized to act inauthentically, therefore they prefer prostitutes over dating people with whom conversations don't feel meaningful", but rather "Something about intelligence, therefore hookers". 

The mismatch in psychologies is harder to address than the phrasing, and maybe that just means you don't think you're a good match to others who view the topic differently – it really depends on what feels right all things considered.

Just to be clear, I don't necessarily mean "view it differently" on moral grounds. For instance, I don't think extraverted people are immoral, but I'd feel weird and maybe too insecure with a partner who was too extroverted. Similarly, some women will feel weird and insecure if their partner has too much of a "men are bad/threatening" psychology, whether or not they think it's immoral. So finding other ways to meet the same needs could make sense if one worries about the pool of potential soulmates already being small enough, and if one places value on some of the normative intuitions, like importance of emotional connection during intimacy with a partner and not wanting to risk it being adversely affected. (The extraversion analogy isn't great because it sounds wrong to repress a core aspect of personality – the question with compartmentalization of romance vs. sex is if it's that or more/also influenced via habit formation and so on. I don't know much about the empirical issues.) 

Maybe you think what I write in the paragraphs above goes way too far in the direction of: 

Also implicitly you end up showing more regard for a stranger you don’t know than for yourself, because you basically end up fighting for someones affection instead of giving someone the choice to like you or not like you.

I'd say it depends. "Accommodations" come in degrees. Also, if you make them for any stranger, you're indeed not showing respect for yourself (as well as treating other people's personalities as interchangeable). However, if you find yourself particularly motivated to be good for partners with a certain type of character, that means that you already want to be the sort of person who appeals to them.

Comment by Lukas_Gloor on How should one deal with life threatening infections or air planes? · 2020-10-29T10:29:30.671Z · LW · GW

I'm assuming you still exercise and go outside and so on, and maybe arrange video calls with friendly people? Because the negative physiological effects from low amounts of exercise or social interactions can easily be a lot worse than the risks from Covid.

It sounds like you've built up a habit of mentally punishing yourself for taking "irrational" risks, and as a result, spend a lot of time worrying over risks in general, including very small but salient ones. I did the same thing when I learned about EA (I don't want to live forever, but I suddenly started to care a lot more about not dying because I do want to accomplish things in life and be rational in the pursuit of that).

I don't have great advice for how to deal with it; I just try to keep an eye on my habits and consciously get myself to change them if it ever feels like things are wandering too far into OCD territory. If you suspect that some of the motivation is also fear instead of just "rational" arguments, you can prepare for the eventuality of getting the virus to make that more palatable. (E.g., prepare food to eat while sick; a check-list for what to do, when to call the doctor, etc.)

If you do end up dying, that doesn't mean you played the game poorly. Even death is an acceptable outcome as long as you did your best to reach your goals.

I'd try to "avoid daily dilemmas" by thinking once about the precautions you want to take, and then adhere to them without constantly wondering if you can do even more. And you can reassess the situation at regular intervals.

Regarding the general rationality of this sort of thing: If slightly increasing the chance of living a million years is indeed super important to you, it can make sense to take more precautions than the typical person. (Of course, maybe the mental energy would be better spent on other ways to avoid risks or get benefits.) However, I would make sure that you're doing this because it is truly what you want, not something you think is implied by rational arguments. There are many options to choose from when it comes to purposeful life goals.

Comment by Lukas_Gloor on Critiquing "What failure looks like" · 2020-10-29T09:35:05.215Z · LW · GW

A raving fascist or communist is more predictable and will lap up raving content. The machines can change our mind about our objective function so we are easier to satisfy.


That's a good way to put it! 

This might be stretching the analogy, but I feel like there's a similar thing going on with the technological evolution of "gadgets" (digital watch, iPod, cell phone). People's expectations of what a gadget should do to make them content seem to grow so fast that something as simple and obviously beneficial as battery life never really improves. I get that not everyone is bothered by having to charge things all the time (and losing the charger all the time), but how come it's borderline impossible to buy things that don't need to be charged so often? It feels like there's some optimization pressure at work here, and it's not making life more convenient. :)

Comment by Lukas_Gloor on Critiquing "What failure looks like" · 2020-10-29T09:10:20.041Z · LW · GW

For people who share the intuition voiced in the OP, I'm curious if your intuitions change after thinking about the topic of recommender systems and filter bubbles in social media. Especially as portrayed in the documentary "The Social Dilemma" (summarized in this Sam Harris podcast). Does that constitute a historical precedent? 

Comment by Lukas_Gloor on No Causation without Reification · 2020-10-23T21:31:26.717Z · LW · GW

Hume made this point in An Enquiry Concerning Human Understanding. :) 

Edit: added a link. 

Comment by Lukas_Gloor on Draft report on AI timelines · 2020-10-20T07:59:18.367Z · LW · GW

I like this comment, and more generally I feel like there's more information to be gained from clarifying the analogies to evolution, and gaining clarity on when it's possible for researchers to tune hyperparameters with shortcuts, vs. cases where they'd have to "boil the oceans." 

Do you have a rough sense on how using your analogy would affect the timeline estimates? 

Comment by Lukas_Gloor on On AI and Compute · 2020-10-20T07:34:08.900Z · LW · GW

I tend to agree with Carey that the necessary compute to reach human-level AI lies somewhere around the 18 and 300-year milestones.

I'm sure there's a better discussion about which milestones to use somewhere else, but since I'm rereading older posts to catch up, and others may be doing the same, I'll make a brief comment here. 

I think this is going to be an important crux between people who estimate timelines differently. 

If you categorically disregard the evolutionary milestones, wouldn't you be saying that searching for the right architecture isn't the bottleneck, but training is? However, isn't it standardly the case that in ML, architecture search takes more compute than training? I guess the terminology is confusing here. In ML, the part that takes the most compute is often called "training," but it's not analogous to what happens in a single human's lifetime, because there are architecture tweaks, hyperparameter tuning, and so on. It feels like what ML researchers call "training" is analogous to hominid evolution, or something like that. Whereas the part that is analogous to a single human's lifetime is AlphaZero going from zero to superhuman capacity in 3 days of runtime. That second step took a lot less compute than the architecture search that came before!

Therefore, I would discount the 18y and 300y milestones quite a bit. That said, the 18y estimate was never a proper lower bound. The human brain may not be particularly optimal. 

So, I feel like all we can say with confidence is that brain evolution is a proper upper bound, and AGI might arrive way sooner depending on how much human foresight can cut it down by being smarter than evolution. I think what we need most is conceptual progress on how much architecture search in ML is "random" vs. how much human foresight can cut corners and speed things up.

I actually don't know what the "brain evolution" estimate refers to, exactly. If it counts compute wasted on lineages like birds, that seems needlessly inefficient. (Any smart simulator would realize that mammals are more likely to develop civilization, since they have fewer size constraints with flying.) But probably the "brain evolution" estimate just refers to how much compute it takes to run all the direct ancestors of a present-day human, back to the Cambrian period or something like that?

I'm sure others have done extensive analyses on these things, so I'm looking forward to reading all of that once I find it. 

Comment by Lukas_Gloor on Might humans not be the most intelligent animals? · 2020-09-16T09:47:24.607Z · LW · GW
If the reason for our technological dominance is due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2) and there doesn't seem like a hard separation between "being able to process culture inefficiently" and "able to process culture efficiently" other than the initial jump from not being able to do it at all, which we have already passed.

I keep hearing people say this (the part "and there doesn't seem to be a hard separation"), but I don't intuitively agree! I've spelled out my position here. I have the intuition that there's a basin of attraction for good reasoning ("making use of culture to improve how you reason") that can generate a discontinuity. You can observe this among humans. Many people, including many EAs, don't seem to "get it" when it comes to how to form internal world models and reason off of them in novel and informative ways. If someone doesn't do this, or does it in a fashion that doesn't sufficiently correspond to reality's structure, they predictably won't make original and groundbreaking intellectual contributions. By contrast, other people do "get it," and their internal models are self-correcting to some degree at least, so if you ran uploaded copies of their brains for millennia, the results would be staggeringly different.

Comment by Lukas_Gloor on SDM's Shortform · 2020-08-28T13:33:50.155Z · LW · GW
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn't say in that bracket was that 'maybe axiology' wasn't my only guess about what the objective, normative facts at the core of ethics could be.

I'm not sure. I have to read your most recent comments on the EA forum more closely. If I taboo "normative realism" and just describe my position, it's something like this:

  • I confidently believe that human expert reasoners won't converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it's true that if "life goals don't converge" then "population ethics also doesn't converge")
  • However, I think there would likely be convergence on subdomains/substatements of ethics, such as "preference utilitarianism is a good way to view some important aspects of 'ethics'"

I don't know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that's allowed if I'm a naturalist normative realist?)

Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn't occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.

Cool! I personally wouldn't call it "normatively correct rule that ethics has to follow," but I think it's something that sticks out saliently in the space of all normative considerations.

(This still strikes me as exactly what we'd expect to see halfway to reaching convergence - the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we've been working on for longer.)

Okay, but isn't it also what you'd expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions "off distribution." Another intuition is that it's the only domain in ethics where it's ambiguous what "others' interests" refers to. I don't think it's an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it's kind of odd that anyone thought there'd be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between "whether population ethics is underdetermined" and "whether every person should have the same type of life goal." I think "not every person should have the same type of life goal" is a plausible position even just intuitively. (And I have some not-yet-written-out arguments for why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn't all want to replicate, and I'm confident that I'm not somehow confused about what I'm doing.)

Your case for SFE was intended to defend a view of population ethics - that there is an asymmetry between suffering and happiness. If we've decided that 'population ethics' is to remain undetermined, that is we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can't I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we're going to leave population ethics undetermined?

Exactly! :) That's why I called my sequence a sequence on moral anti-realism. I don't think suffering-focused ethics is "universally correct." The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It's a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth's future light cone.

Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also "in tension," worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as "not more in tension than Democrats versus Republicans." This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to "what do we want to do with earth's future lightcone"). After you've chosen your life goals, that still leaves open the further question "How do you think about other people having different life goals from yours?" That's where preference utilitarianism comes in (if one takes a strong stance on how much to respect others' interests) or where we can refer to "norms of civil society" (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander's archipelago blogpost for inspiring this idea. I think he also had a blogpost on "axiology" that made a similar point, but by that point I might have already found my current position.]

In any case, I'm considering changing all my framings from "moral anti-realism" to "morality is underdetermined." It seems like people understand me much faster if I use the latter framing, and in my head it's the same message.

---

As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:

1. Morality could be underdetermined

2. Moral uncertainty and confidence in strong moral realism are in tension

3. There is no absolute wager for moral realism

(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – "what we on reflection care about" – doesn't suddenly lose its significance if there's less convergence than we initially thought. Just like I shouldn't like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn't care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)

4. Mistaken metaethics can lead to poorly grounded moral opinions

(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)

5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense

(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn't reaching a different conclusion on the same task. Instead, they're doing a different task. I'm interested in all three questions I dissolved ethics into, whereas people who play the game "pick your version of consequentialism and answer every broadly-morality-related question with that" are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)