Posts

Isnasene's Shortform 2019-12-21T17:12:32.834Z
Effective Altruism Book Review: Radical Abundance (Nanotechnology) 2018-10-14T23:57:36.099Z

Comments

Comment by Isnasene on Quick general thoughts on suffering and consciousness · 2021-10-31T05:36:53.546Z · LW · GW

Thanks for clarifying. To the extent that you aren't particularly sure about how consciousness comes about, it makes sense to reason about all sorts of possibilities related to capacity for experience and intensity of suffering. In general, I'm just kinda surprised that Eliezer's view is so unusual given that he is the Eliezer Yudkowsky of the rationalist community.

My impression is that the justification for the argument you mention is something along the lines of "the primary reason one would develop a coherent picture of their own mind is so they could convey a convincing story about themselves to others -- which only became a relevant need once language developed."

I was under the impression you were focused primarily on suffering because of the first two sections and the similarity of the above logic to the discussion of pain-signaling earlier. When I think about your generic argument about consciousness, however, I get confused. While I can imagine why one would benefit from an internal narrative around their goals, desires, etc, I'm not even sure how I'd go about squaring pressures for that capacity with the many basic sensory qualia that people have (e.g. sense of sight, sense of touch) -- especially in the context of language.

Comment by Isnasene on Quick general thoughts on suffering and consciousness · 2021-10-30T22:24:13.863Z · LW · GW

If one accepts Eliezer Yudkowsky's view on consciousness, the complexity of suffering in particular is largely irrelevant. The claim "qualia requires reflectivity" implies all qualia require reflectivity. This includes qualia like "what is the color red like?" and "how do smooth and rough surfaces feel different?" These experiences seem like they have vastly different evolutionary pressures associated with them that are largely unrelated to social accounting.

If you're asking whether suffering in particular is sufficiently complex that it exists in certain animals but not others by virtue of evolutionary pressure, you're operating in a frame where these arguments are not superseded by the much more generic claim that complex social modeling is necessary to feel anything.

If you think Eliezer is very likely to be right, these additional meditations on the nature of suffering are mostly minutiae.

[EDIT to note: I'm mostly pointing this out because it appears that there is one group that uses "complex social pressures" to claim animals do not suffer because animals feel nothing, and another group that uses "complex social pressures" to claim that animals do not specifically suffer because suffering specifically depends on these things. That these two groups of people just happen to start from a similar guiding principle and happen to reach a similar answer for very different reasons makes me extremely suspicious of the epistemics around the moral patienthood of animals.]

Comment by Isnasene on Petrov Day Retrospective: 2021 · 2021-10-23T00:34:45.388Z · LW · GW

Forgive me if I engage with only part of this; I believe that the OP already acknowledges most of the problem you've described.

No forgiveness needed! I agree that the OP addresses this portion -- I read the OP somewhat quickly the first time and didn't fully process that part of it. And, as I've said, I do appreciate the thought you've put into all this.

I think I differ from the text of the OP in that social shaming / the lack of a protest method in rituals is often an okay and sensible thing. It is only when this property is combined with a serious problem in the ritual itself that I get worried -- but I have a hunch that you'd agree with this.

I care about the entire LessWrong community. I'm not sure where the exact boundaries lie–it's more than posters/commenters and probably short of anyone who's ever read a LW post–but I'm especially interested in the group who I feel like I can trust to work with me when the stakes are real and high. The Petrov Day ritual to date was designed to show that this group exists and trust each other, and I think that's a powerful and valuable thing to do, if you can do it.

I agree that having/establishing a group of people you can work with/trust is a good thing, and I think that rituals about this can be beneficial. However, I have two main objections to this perspective:

#1. 
It is not obvious to me that identifying a group unlikely to press the button in a Petrov Day ritual is one capable of coordinating generally when stakes are real and high. As commenters have noted, social pressure incentives stack pressure against defecting. Moreover, if you are selecting for people who you know well enough to speculate on their behavior in these circumstances, you are probably also selecting for people more deeply connected to the community, for whom these pressures matter a lot.

#2. 
I don't think an existence proof for a 100-strong set of LWers who don't press the button in a Petrov Day ritual is particularly useful or surprising to me. If 50% of LWers would press the button and 50% wouldn't, it's mathematically obvious that such a group exists. The arguably impressive/surprising part of the Ritual is not "does this group exist?" -- it's "hey look! we have a selection process with strong enough discriminatory power to find one hundred people who will act in a certain way."

This could mean something important symbolically -- about how so many people in the community are trustworthy that we can assemble a group with even our imperfect judgement. But it could also mean the following things:

  •  We have 10,000 people to select from and the top-one-percentile of people ranked by unwillingness to press the button is very unwilling to press the button
  • We are just very good at predicting people's behavior in this circumstance and at least 10% of the community won't press the button. 
    • Note that this also strongly interacts with point #1, because then you can select both for people you're confident are trustworthy and for people who'd submit under strong social pressure (not that I think you'd do that deliberately)

So it's hard for me to find much symbolic importance in the thing that the current Petrov Day Ritual is establishing.

But, social pressure aside, the establishment of a high-stakes trust group is not obviously useful/relevant information to a typical community member. The capacity for distinguishing a high-trust group in a given community is only relevant to me if I can also distinguish a high-trust group that I can work with. In other words, knowing that such a group exists theoretically does not mean much to me if I happen to be in a branch of the community that I can't/shouldn't trust. My impression is that people who receive the code and choose not to press the button are basically anonymous, so this is the case here.[1]

If one knew for sure that this group delineated trustworthy people capable of coordinating effectively in high-stakes scenarios, it might be useful to have ritualized (optional) self-reveals after the fact. However, I would actually caution against this, as it could have a reverse/harmful effect if any participant ever does anything harmful -- since it could establish a coalitional Schelling Point of mutual protection, or the perception of one, creating real or perceived complicity in harm.

 

Naturally, an ideal Petrov Day design would be both something for the ideal community and perhaps also be something that strengthens the trust between an especially devoted community core.

Yep! In practice, I don't think Petrov Day needs to do everything though -- and it's probably easiest to create a really strong ritual that captures one theme and then explore secondary activities/processes that don't interfere with the core.

Strengthening trust of the community core is a good thing -- and I don't think every ritual has to be about the entire community or vice-versa. I'm more concerned about what the selection-process+ritual combo itself signals (both to the core and everyone else) about the kind of compliance behaviors expected by the core (the current process sort of confounds them). For contrast, if the selection process credibly demonstrated that it was just selecting Core Members, dropped the social-pressure incentives, and then demonstrated that the core can trust each other not to blow things up, then it would be a lot more meaningful in the sense you are gesturing at.

Perhaps there's also a minor frame difference here where you already see the process as basically something like "pick core members" based on your experience while that wasn't my default assumption.

[1] To be fair, I doubt that actually affiliating oneself with this group/parts-of-it would be overwhelmingly difficult -- given the obvious affiliations and the number of people who have stated that they've received codes.

Comment by Isnasene on Petrov Day Retrospective: 2021 · 2021-10-22T21:18:13.914Z · LW · GW

You seem to have put a lot of thought into this ritual and I appreciate the consideration you, Ben, and others are giving it. Anyway, here's some raw unfiltered (potentially overly-harsh) criticism/commentary on Petrov Day -- take what you need from it: 

In addition to Lethriloth's criticism of LW Petrov Day failing to match the incentives/dynamics associated with Petrov (an important consideration indeed, given how central incentives are in the LW canon), it is also important to consider that Community Rituals may serve ends wildly disparate from their stated purposes, whether intentionally or unintentionally.

Yes requires the possibility of no. If you are under the impression (as I am) that this Ritual deliberately curates an anonymous subset of the population likely to produce a desired outcome, that people who transgress on this outcome are socially shamed, and that the maximally obvious way to express distaste for a Ritual is to transgress upon it... then this Ritual as it stands systematically attacks the feasibility of "no."

To elaborate, it does the following things:

  • The Ritual misrepresents the true opinion of the community, by selecting those who would take it seriously and erasing those who wouldn't[1].
  • If the Ritual fails to sufficiently filter out people who don't want to take it seriously, it preemptively punishes their actions with social shaming.

From the lens of someone who thinks LW Petrov Day fails to meaningfully reflect Petrov's decision, this creates the impression that subsets of the rationalist community can concoct arbitrary rituals (as long as they have some plausible justification) and declare them to be community rituals, all the while excluding members who have strong reasons for disagreement. Moreover, it establishes that this subset can use social-shaming strategies to deter disagreeing members from protesting in obvious (but materially mild) ways[2].

I'd imagine that these considerations were present in the people you alluded to who preferred to self-exclude from The Ritual, but I could be wrong.

Given recent criticism of the rationalist community, I consider course-correcting away from these types of dynamics as pretty crucial.

[1] There is a second criticism here, which is that the existing Ritual prioritizes the impression/illusion of coordination in the community over the actual level of coordination in the community -- which is both epistemically disadvantageous and meta-level disadvantageous as one would not expect a community like LW to prefer symbolic coordination over actual coordination.

[2] There is an argument that protesting in this way corresponds to a Unilateralist Veto and, on a meta-level, the community should disincentivize this means of protesting, and that social shaming is an acceptable way to do this. But this is a pretty roundabout way of handling that, and I think the first-order social effects swamp the import of this argument.

Comment by Isnasene on Analysis of World Records in Speedrunning [LINKPOST] · 2021-08-04T17:25:51.279Z · LW · GW

This is cool! I like speedrunning! There's definitely a connection between speed-running and AI optimization/misalignment (see When Bots Teach Themselves to Cheat, for example). Some specific suggestions:

  • Speedrun times have a defined lower bound on the minimization problem (zero seconds). So over an infinite amount of time, the time vs speedrun time plot necessarily converges to a flat line. You can avoid this by converting to an unbounded maximization problem. For example, you might wanna try plotting Speed-Run-Time-on-Game-Release divided by Speed-Run-Time at time t vs time. Some benefits of this include
    • Intuitive meaning: This ratio tells you how many optimal speed-runs at time t could be accomplished over the course of a single speed-run at game release
    • Partially addresses diminishing returns: Say the game's first speed-run completes the game in 60 seconds and the second speed-run completes the game in 15 seconds (a 45-second improvement). No matter how much you work at the game, it's not possible to reduce the speed-run time by more than another 15 seconds -- far less than the 45-second improvement already achieved -- so diminishing returns are baked in.

      In contrast, if you look at the ratio, the first speed-run has a ratio of 1 (60 seconds/60 seconds), the second has a ratio of 4 (60 seconds/15 seconds), and a third one-second speed-run has a ratio of 60 (60 seconds/1 second). Between the second and third speed-run, we've gone from a value of 4 to a value of 60 (a 15x increase!). Diminishing returns are no longer inevitable!
    • Easier to visualize: By normalizing by the initial speed-run time, all games start out with the same value regardless of how long they objectively take. This will allow you to more easily identify similarities between the trends.
    • More comparable to tech progress: Since diminishing returns aren't inevitable by construction, this looks more like tech progress, where diminishing returns also aren't inevitable by construction. Note that diminishing returns can still show up in practice, however.
  • Instead of plotting absolute dates, you plot time relative to when the first speed-run was registered. That is, set the date of the first speed run to t=0. This should help you identify trends.
  • A lot of the games you review indicate that, in many cases, our best speed-run time so far isn't even 3x faster than the original speed-run. This implies that optimizing speed-run time (or the ratio I introduce above) is bounded and you can't get more than a factor of 3 or 4 in terms of improvement. But obviously tech capabilities have improved by several orders of magnitude. So structurally, I don't think speed-running can be particularly predictive of tech advances.
  • Given the above, I suggest that if you want to model speed-runs, you should use functions that expect asymptotes (e.g. logistic equations); see the rough sketch after this list. Combinations of logistic equations can probably capture the cascading L curves you notice in your write-up. May also be worth doing some basic analysis like counting the number of inflections in each speed-run (do this by plotting derivatives and counting the number of peaks).
    • If you do this, I strongly suggest doing a transformation like the one I suggested above since otherwise, you're probably gonna get diminishing returns right off the bat and logistic equations don't expect this. If you don't transform for whatever reason, try exponential decay.
  • Speed-running world records have times that, by definition, must monotonically decrease. So it's expected that most of the plots will look like continuous functions. As you're plotting things now, diminishing returns are built in, so you should also expect the derivatives to shrink toward zero over time.
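
To make the logistic-fit suggestion concrete, here's a minimal sketch in Python (the record times are made up, and I'm assuming numpy/scipy are available -- this is not your data or your pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical world-record history: days since the first recorded run,
# and the record time (in seconds) standing on each of those days.
days = np.array([0, 30, 90, 200, 400, 800, 1500], dtype=float)
record_seconds = np.array([60.0, 42.0, 30.0, 22.0, 18.0, 16.5, 15.8])

# The unbounded "improvement ratio" described above: how many of today's
# record runs fit inside the time of the original record run.
ratio = record_seconds[0] / record_seconds

def logistic(t, K, r, t0):
    """Generic logistic curve: asymptote K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Fit the logistic to the improvement ratio (a functional form that
# expects an asymptote, per the suggestion above).
popt, _ = curve_fit(logistic, days, ratio,
                    p0=[ratio.max(), 0.01, days.mean()], maxfev=10000)
print("fitted asymptotic improvement factor:", popt[0])

# Crude stand-in for counting inflections: sign changes in the second
# difference of the ratio series.
second_diff = np.diff(ratio, n=2)
inflections = np.count_nonzero(np.diff(np.sign(second_diff)) != 0)
print("approximate number of inflection points:", inflections)
```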

Have fun out there!

Comment by Isnasene on My Journey to the Dark Side · 2021-05-11T18:47:58.210Z · LW · GW

Thanks, I appreciate the concrete examples of untrustworthiness that don't rely on inferences made about reputation. I am specifically concerned about things like this, which seems like a weird and bad direction to take a conversation (https://sinceriously.fyi/net-negative/). It also seems hard to recount falsely without active deception or complete detachment from reality, and I doubt Ziz is completely detached from reality:

They asked if I’d rape their corpse. Part of me insisted this was not going as it was supposed to. But I decided inflicting discomfort in order to get reliable information was a valid tactic.

You say that you "never said that sentence or tried to imply it..." Do you have any sense of why Ziz interpreted you as saying that? I'd like to gauge the distance between what you said and how Ziz interpreted it, to assess the degree of untrustworthiness.

Comment by Isnasene on My Journey to the Dark Side · 2021-05-11T18:35:20.764Z · LW · GW

The article was the first impression I got about Ziz (I live in Germany and never have attended a CFAR workshop) and I would expect that I'm not the only person for which it's true. 

Ah, mea culpa. I saw your other comment about Pasek crashing with you and interpreted it to mean you were pretty close to the Ziz-related part of the community. I'm less hesitant about talking to you now, so I'll hop back in.

they are done because the person considers expression of their sexual or gender identity to be a sacred value. Sith robes are not expressions of their sexual or gender identity and thus taking the reputational hit for them shows valuing reputation less.

I really feel that you're making a category error by repeatedly merging the concepts "credibility with a small set of helpful ppl" and "general reputation." I don't see why Sith Robes or gender identity or aesthetics in general should cause me to trust someone less, especially when I have other information on them I consider more relevant. This is because, unlike most social conventions which serve as forms of control/submission to the mob/etc, the ability to be perceived as honest by those you want to work with allows you to more easily work with them.

Gervais sociopaths often have principles that include telling the truth.

I don't think her aesthetic was stoically motivated as much as motivated by the desire to treat one's own interests and values as logically prior to social convention -- a refusal to let one's own interests bow to the mob. This seems conceptually similar to me as treating something as a sacred value. It just has more decision theory behind it.

It's about the generator function. The question is about what generator function explains all three events

I think this is somewhat noncentral because (as mentioned) I disagree that a single generator produced all three events. What do you think the actual relevant generator is, and why do you think it also generates lie-behavior against parties Ziz might want to work with (e.g. publishing everyone-facing lies on the internet)?

While it likely played out worse than she expected beforehand, I don't think the idea that it was only likely to damage her reputation with the CFAR staff (whom she thinks defected) was a reasonable model of the situation.

Yeah, fair enough. I agree that this isn't a reasonable model, but my point still stands, I think. The issue is that I neglected a third group aside from people who plan on defecting against Ziz or have low opinions of her judgement. People who automatically flinch away from others who do unconstrained things would also likely trust her less. Still, that group would be unable to help do the unconstrained things she wants to do, so I don't think it means much to Ziz that she can't work with them.

What group of people do you think Ziz wanted to work with that she is no longer able to because of the protest?

Comment by Isnasene on My Journey to the Dark Side · 2021-05-11T15:43:58.874Z · LW · GW

Since you've quoted Ziz out of context, let me finish that quote for you. It is clear that the other half of her (whatever that means) did in fact believe those things, and it is clear that this was a recounting of a live conversation rather than a broad strategy. It is not that weird to not have fully processed the things that you partially believe, live, in the middle of a conversation, such that you are surprised by them.

The other half of me was like isn’t it obvious. They are disturbed at me because intense suffering is scary. Because being trans in a world where it would make things worse to transition was pain too intense for social reality to acknowledge, and therefore a threat.

I see you're equivocating between "honesty"/"credibility"  and "reputation" again:

Reading https://www.sfchronicle.com/bayarea/article/Mystery-in-Sonoma-County-after-kidnap-arrests-of-14844155.php is going to make any normal person consider the people to have no credibility and having an article like that with your legal name that people can google to find more about you in interactions like applying for a flat, is a heavy reputational cost. 

I see you've quietly dropped the two other reasons to ding credibility I mentioned to focus on the protest, which along with a misquote is your main reason for why Ziz is a liar:

False imprisonment of kids that are innocent bystanders isn't just "handles it inappropriately". None of the LGBTQ+ people I know personally have, to the extent of my knowledge, done something as bad, nor would I expect that to be in their range of possible actions.

It seems obvious to me that false imprisonment of kids is a noncentral description of what Ziz was doing (i.e. "she staged a protest and unbeknownst to her there were children somewhere" is my model). Given that this was scaled back to a misdemeanor, I imagine that you're focusing on this specifically for rhetorical effect.

While I'm dubious about the protest being a Smart Move, I don't think this has much bearing on Ziz's honesty and I certainly don't think the coincidental presence of children somewhere in the area has any bearing on it.

From the TDT perspective actually fulfilling what you threaten seems quite reasonable.

From a TDT perspective, actually treating the first thing you come up with after someone asks you a question (especially when it's couched in wiggle-terms like "probably") as a binding pre-commitment does not seem reasonable to me at all.

Given your manipulation of context above, and my notable lack of context wrt this whole situation, you have an asymmetric information advantage here that I suspect you may use to deceive me. As a result, I'm tapping out of the convo here.

If you are in good faith, I wish you well.

Comment by Isnasene on My Journey to the Dark Side · 2021-05-11T11:12:35.086Z · LW · GW

the completely unfounded belief that only good-aligned people can cooperate or use game theory and that nongood people will defect on each other too often to defeat her alliance. 

Can you elaborate on why you think this belief is completely unfounded? It seems to me that there are clear asymmetries in coordination capacities of good vs nongood. For example, being more open to the idea of a "Good Person" in power than a "Bad Person" seems like common sense. Similarly, groups of good people are intrinsically value-aligned while teams of bad people are not (each has a distinct selfish motivation) -- and I think value-alignedness increases effectiveness.

Comment by Isnasene on My Journey to the Dark Side · 2021-05-11T10:47:09.998Z · LW · GW

Assuming Ziz is being honest, she pulled the stunt at CFAR after she had already been defected against. This does not globally damage her credibility. It does damage her reputation among a) ppl who think they can't defect against her sneakily but plan to try, and b) ppl who think she is bad at judging when she's been defected against. I am in neither of those categories, so I have no reason to expect Ziz to defect by lying to me.

In contrast, if Ziz was being dishonest, she pulled that stunt for... inscrutable reasons that may or may not be in the web of lies she might have made. I think this is unlikely. As I've already said, her claims seem plausible and, if she was lying, she could do far worse than she did. If she wanted to defect really hard and wasn't constrained by truth, she could just raise issues that non-marginalized communities have a personal stake in (instead of something like transphobia).

Wearing Sith robes and naming themselves after a fanfic villain is similar in that it damages reputation among many people and is not a strategy to develop a reputation as someone to be trusted.

Do you think Lil Nas X is less honest because he became Satan in his hit music video Montero? I doubt that wearing Sith Robes / naming yourself after a villain (Ziz is a mythological bird?) is useful information about how honest someone is. This goes double when you know other things. Three points here:

  • Marginalized people have understandable reasons for inverting mainstream aesthetics. Good/evil aesthetics are defined by mainstream culture. If that culture has betrayed you, it can be therapeutic to reject it in turn by inverting its aesthetics. Many of my LGBTQ+ friends do this. Given that Ziz is a trans woman who has an ontology that treats most ppl as "nongood", it makes sense that she would do so too (either because of morality-related alienation or gender-related alienation).
  • You're conflating reputation/credibility among "many ppl" with reputation/credibility among ppl who would actually help you. Only the second group matters and optimizing credibility with the first is a waste of time. If you look at a marginalized person who has moral integrity (vegan) and writes extensively on TDT (solving coordination problems) and conclude that they're a liar because of clothing choices, this says a lot about your values.
    You either a) buy in to nongood mainstream clothing norms as reflecting a person's goodness/evilness or b) think Ziz is making a critical error by pushing away ppl who think this is what clothes mean.
    IMO people who take either of these positions are sufficiently invested in the status quo that they'd only impede Ziz's work. Credibility with them isn't worth much.
  • While I doubt it turned off helpful ppl, Ziz admitted (I think in a comment on sinceriously somewhere) that her aesthetic was a tactical mistake in hindsight because it attracted a bunch of edgy bad ppl. Your framing of it as a strategic choice is incorrect for this reason.

How do you explain those three decisions if you think that she's committed to upholding her credibility?

Well I think I've covered it above. The purpose of "Upholding Credibility" is so that the ppl you care about knowing the truth can actually receive the truth from you.  And none of the decisions above impede information-conveyance or reflect defections on the groups of ppl Ziz would be interested in helping (ethical ppl who can work with her, concerned trans women who might be at risk, etc).

...

I'll admit, I'm a little angry with your response so I might have a harsher tone here. Some of your arguments struck a nerve with me because it really feels like you're implying that a bunch of my (non-rationalist, completely unaffiliated) LGBTQ+ friends are liars. Here's what your arguments sound like to me:

  • "If someone claims Something Bad is happening and then handles it inappropriately, we should give significant weight to the possibility that they are lying because handling it inappropriately was defection" 
  • "When a marginalized person copes with marginalization by inverting mainstream aesthetics, we should give significant weight to the possibility that they are lying because not following the mainstream means you don't care about being taken honestly"
  • "If a marginalized person claims Something Bad is happening, handles it inappropriate, and copes with marginalization by inverting mainstream aesthetics, they're systematically a liar"
Comment by Isnasene on My Journey to the Dark Side · 2021-05-08T04:38:43.050Z · LW · GW

I'm hesitant about saying things here since, to the extent that my epistemics are right, this is a relatively adversarial environment. I think discussing things would reveal things that I know/how I found out about them without many positive effects (I'm also disconnected from the Bay Area Community). After all, if you were confident that Ziz was lying, nothing I know would likely change your mind. Similarly, if you felt like Ziz might be telling the truth, the gravity of the claims probably has more relevance to your actions than the extent to which my info would move the probability.

That being said, DM me and we can chat. I'm also pretty curious about your interactions with Ziz/how she tried to manipulate you. 

Comment by Isnasene on My Journey to the Dark Side · 2021-05-08T01:23:08.616Z · LW · GW

Since this post is back up, let's just have the convo here, alright? Don't wanna make things confusing.

Comment by Isnasene on My Journey to the Dark Side · 2021-05-08T01:22:22.207Z · LW · GW

Per the top post, Ziz never lies (for a reasonable definition of what a lie is). Other than that, I don't think she is lying, for four main reasons: 1) her decision theory implies that she isn't, 2) the content of her claims seems plausible to me, 3) her claims don't seem particularly strategically helpful, and 4) I have been able to independently verify some sub-components of her claims.

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does.

Here's my extended reasoning for the four justifications above:

  • Ziz's whole philosophy is based on TDT and lying is trivially a defection that globally damages credibility (your response is an example of how). I think Ziz has spoken on decision theory in good faith and, frankly, has an unusually nuanced understanding of it to the extent that she wouldn't lie.
    • Corollary #1: If Ziz did choose to lie, it would imply that she already inferred truthfulness would not actually establish her credibility in social reality (and my guess is her reasoning for this inference would be reasonable). This would mean that I shouldn't trust Ziz but I also shouldn't trust anyone else about Ziz.
    • Corollary #2: If you think Ziz is being deceptive about her own decision theory, then you can't infer anything about what she claims her goals are from what she says (since all of this is heavily based on decision theory). I think Ziz is being honest about her goals.
    • Corollary #3: "bruh but she might still be lying if she thinks she can get away with it!!!". To which I reply, "schelling points bruh."
  • Ziz is a trans woman, which makes her vulnerable to people acting against her in ways that most cis people never experience (and therefore default to disbelieving unless they're particularly aware of what goes on). For this reason, a trans woman with the reputation of a liar is in much greater danger than a cis person, because it allows transphobes to take unconstrained actions against them. I doubt Ziz would act in ways that increase this risk.
    • Corollary #1: if this stuff didn't happen to Ziz, I'd be surprised that she'd independently come up with stuff on her own that matches my priors this well. I'm not trans myself but I have plenty of trans friends and, from what I've learned from them, Ziz's account seems plausible
  • Many of the claims she has made are not obviously effective at furthering her own goals as one might expect if they were lies
  • I have personally verified a number of small details and claims Ziz has made for myself. This makes me inclined to believe she is being honest.
Comment by Isnasene on [deleted post] 2021-05-07T19:59:14.219Z

Since everything was deleted, I'm reposting my comment below. If my comment doesn't make sense, it's likely that the above document was edited. Below my original comment, I'll post ChristianKl's reply and my response to it.

--------ORIGINAL COMMENT--------------------------------------

So first off, thanks for sharing -- it's really interesting to hear other ppl's experiences with scrupulosity and Ziz's work. That being said... I have a fair amount of criticism wrt your discussion of Ziz.

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does

Ziz has made a number of specific claims about the rationality community that seem extremely bad to me including (off the top of my head): endemic transphobia in CFAR, sexual misconduct, and an attempted cover-up of sexual misconduct endemic (at least at a point) in MIRI. If these occurred, they are real concrete events independent of worldview.

That stuff matters. It mattered enough to me that I've been off this website and un-associated with the rationality community for upwards of a year because I heard about it.

The final Big Idea on Sinceriously is the one which is widely considered to be the most intensely radioactive and results in most of the hostility aimed at her and her followers. This is Ziz’s moral theory, which is, to put it lightly, very extreme.... To her, carnism is a literal holocaust, an ongoing and perpetual nightmare of torture, rape, and murder being conducted on a horrifyingly vast scale by a race of flesh eating monsters.

Just showing up here to preemptively lightly push back on the textual association here between "very extreme moral theory" and "carnism is a literal holocaust." There is a very broad spectrum of moral beliefs that notice that carnism is a literal holocaust and Ziz's philosophy just happens to also be rigorous enough to notice this.

Hm, maybe this set me off on defensiveness but, as I continued reading, I couldn't help reinterpreting parts of it as a hit piece on Ziz. Here are specific quotes I view as designed to be unjustifiably adversarial, each followed by my response (in parentheses) explaining why I perceive them as such:

  • "...is willing to go as far as holding protests at CFAR meetups and trying to create her own vegan torture basilisk to timelessly blackmail carnists into not eating meat."
    (If you buy the game-theory being right and buy that AGI would have correct game-theory, everyone working on AGI is trying to create a torture basilisk of some kind and Ziz's just happens to also be vegan. Seems like summarizing what Ziz is trying to do as "vegan torture basilisk" is just punishing her for explicitly thinking about doing the thing everyone's already doing.)
  • "What’s a few humans killing themselves when the stakes are literally all of sentient life and the future of all sentient life in the universe?"
    (The equivocation between "two people committed suicide because they read sinceriously" and "Ziz killed people" seems rhetorically adversarial here)
  • "This is not to say that you should go out and start using the specific formulation of utilitarianism and timeless decision theory which she does unless you’re also a radical vegan extremist"
    (Ouch lol. I'm a non-radical vegan and see below)
  • "Even being willing to write “my morals just happen to correspond with the most objectively correct version of morality” is a pretty gutsy move to make that seems to imply some degree of grandiosity and disconnection from reality"
    (Arbitrary demand for moral rigor. Note that Ziz's morality doesn't have to be the most objectively correct version for her to act as she does, it just has to beat out the competition -- which isn't hard to do since most ppl like factory farming)

--------CURRENT STATE OF CONVO WITH CHRISTIANKL AS OF POSTING-------------------------------------

ChristianKl responded to my comment asking:
"It seems that Ziz has a worldview according to which she's willing to lie when it furthers her goals. Why do you believe her enough at this point?"

My response:

idk if you can read this since the post was deleted but the short answer is that, per the top post, Ziz never lies (for a reasonable definition of what a lie is) and I'm inclined to agree:

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does.

Moreover, if it is true that Ziz's goals are promoting a vegan singularity, then the specific claims she made about transphobia/cover-ups/etc are extremely sub-optimal for furthering this goal. [Also: My impression is that Ziz lies less often than a normal person for timeless-decision-theory reasons so I think odds are that she isn't.]*

[edited to note: the bracketed parenthetical wasn't in my original response to ChristianKl, which I posted like two minutes ago]

Comment by Isnasene on [deleted post] 2021-05-07T19:54:34.447Z

oh and if you can read this: hive reposted it, so feel free to respond there -- I'm bringing the discussion over

Comment by Isnasene on [deleted post] 2021-05-07T19:52:48.739Z

idk if you can read this since the post was deleted but the short answer is that, per the top post, Ziz never lies (for a reasonable definition of what a lie is) and I'm inclined to agree:

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does.

Moreover, if it is true that Ziz's goals are promoting a vegan singularity, then the specific claims she made about transphobia/cover-ups/etc are extremely suboptimal for furthering this goal

Comment by Isnasene on My Journey to the Dark Side · 2021-05-07T02:52:49.001Z · LW · GW

So first off, thanks for sharing -- it's really interesting to hear other ppl's experiences with scrupulosity and Ziz's work. That being said... I have a fair amount of criticism wrt your discussion of Ziz.

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does

Ziz has made a number of specific claims about the rationality community that seem extremely bad to me including (off the top of my head): endemic transphobia in CFAR, sexual misconduct, and an attempted cover-up of sexual misconduct endemic (at least at a point) in MIRI. If these occurred, they are real concrete events independent of worldview.

That stuff matters. It mattered enough to me that I've been off this website and un-associated with the rationality community for upwards of a year because I heard about it.

The final Big Idea on Sinceriously is the one which is widely considered to be the most intensely radioactive and results in most of the hostility aimed at her and her followers. This is Ziz’s moral theory, which is, to put it lightly, very extreme.... To her, carnism is a literal holocaust, an ongoing and perpetual nightmare of torture, rape, and murder being conducted on a horrifyingly vast scale by a race of flesh eating monsters.

Just showing up here to preemptively lightly push back on the textual association here between "very extreme moral theory" and "carnism is a literal holocaust." There is a very broad spectrum of moral beliefs that notice that carnism is a literal holocaust and Ziz's philosophy just happens to also be rigorous enough to notice this.

Hm, maybe this set me off on defensiveness but, as I continued reading, I couldn't help reinterpreting parts of it as a hit piece on Ziz. Here are specific quotes I view as designed to be unjustifiably adversarial, each followed by my response (in parentheses) explaining why I perceive them as such:

  • "...is willing to go as far as holding protests at CFAR meetups and trying to create her own vegan torture basilisk to timelessly blackmail carnists into not eating meat."
    (If you buy the game-theory being right and buy that AGI would have correct game-theory, everyone working on AGI is trying to create a torture basilisk of some kind and Ziz's just happens to also be vegan. Seems like summarizing what Ziz is trying to do as "vegan torture basilisk" is just punishing her for explicitly thinking about doing the thing everyone's already doing.)
  • "What’s a few humans killing themselves when the stakes are literally all of sentient life and the future of all sentient life in the universe?"
    (The equivocation between "two people committed suicide because they read sinceriously" and "Ziz killed people" seems rhetorically adversarial here)
  • "This is not to say that you should go out and start using the specific formulation of utilitarianism and timeless decision theory which she does unless you’re also a radical vegan extremist"
    (Ouch lol. I'm a non-radical vegan and see below)
  • "Even being willing to write “my morals just happen to correspond with the most objectively correct version of morality” is a pretty gutsy move to make that seems to imply some degree of grandiosity and disconnection from reality"
    (Arbitrary demand for moral rigor. Note that Ziz's morality doesn't have to be the most objectively correct version for her to act as she does, it just has to beat out the competition -- which isn't hard to do since most ppl like factory farming)
Comment by Isnasene on Universal Eudaimonia · 2020-10-05T18:14:08.495Z · LW · GW

The trouble here is that deep disagreements aren't often symmetrically held with the same intensity. Consider the following situation:

Say we have Protag and Villain. Villain goes around torturing people and happens upon Protag's brother. Protag's brother is subsequently tortured and killed. Protag is unable to forgive Villain but Villain has nothing personal against Protag. Which of the following is the outcome?

  • Protag says "Villain must not go to Eudaemonia" so neither Protag nor Villain go to Eudaemonia
  • Protag says "Villain must not go to Eudaemonia" so Protag cannot go to Eudaemonia. Villain says "I don't care what happens to Protag; he can go if he wants" so Villain gets to go to Eudaemonia
  • Protag says "Villain must not go to Eudaemonia" but it doesn't matter because next month they talk to someone else they disagree with and both go to Eudaemonia anyway

The first case is sad but understandable here -- but also allows extremist purple-tribe members to veto non-extremist green-tribe members (where purple and green ideologies pertain to something silly like "how to play pool correctly"). The second case is perverse. The third case is just "violate people's preferences for retribution, but with extra steps."

Comment by Isnasene on Dutch-Booking CDT: Revised Argument · 2020-06-12T19:59:22.213Z · LW · GW

So, silly question that doesn't really address the point of this post (this may very well be just a point of clarity thing but it would be useful for me to have an answer due to earning-to-give related reasons off-topic for this post) --

Here you claim that CDT is a generalization of decision-theories that includes TDT (fair enough!):

Here, "CDT" refers -- very broadly -- to using counterfactuals to evaluate expected value of actions. It need not mean physical-causal counterfactuals. In particular, TDT counts as "a CDT" in this sense.

But here you describe CDT as two-boxing in Newcomb, which conflicts with my understanding that TDT one-boxes coupled with your claim that TDT counts as a CDT:

For example, in Newcomb, CDT two-boxes, and agrees with EDT about the consequences of two-boxing. The disagreement is only about the value of the other action.

So is this conflict a matter of using the colloquial definition of CDT in the second quote but a broader one in the first, having a more general framework for what two-boxing is than my own, or knowing something about TDT that I don't?
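
For what it's worth, here's the toy arithmetic behind my confusion, as a rough sketch with the standard illustrative Newcomb payoffs (the numbers and the two counterfactual readings are my own rendering, not anything from the post):

```python
# Standard illustrative Newcomb payoffs: the opaque box holds $1,000,000 iff
# the predictor predicted one-boxing; the transparent box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000
predictor_accuracy = 0.99

# Physical-causal counterfactuals (the colloquial "CDT"): the opaque box's
# contents are already fixed, so two-boxing dominates for any belief p_full
# about whether the box is full.
for p_full in (0.0, 0.5, 1.0):
    ev_one_box = p_full * BIG
    ev_two_box = p_full * BIG + SMALL
    assert ev_two_box > ev_one_box  # two-boxing wins under every fixed belief

# Logical/timeless counterfactuals (TDT as "a CDT" in the broad sense): my
# choice and the prediction are outputs of the same algorithm, so choosing
# to one-box makes the prediction "one-box" with the predictor's accuracy.
ev_one_box = predictor_accuracy * BIG
ev_two_box = (1 - predictor_accuracy) * BIG + SMALL
print(ev_one_box, ev_two_box)  # roughly 990,000 vs 11,000 -- one-boxing wins
```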

Comment by Isnasene on OpenAI announces GPT-3 · 2020-05-29T23:14:29.925Z · LW · GW

Thanks! This is great.

Comment by Isnasene on OpenAI announces GPT-3 · 2020-05-29T14:01:37.405Z · LW · GW
A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedeo with a paper bag over his head that read, "I am a shape-shifter. I can't change the world. I can only change myself."

-- GPT-3 generated news article humans found easiest to distinguish from the real deal.

... I haven't read the paper in detail but we may have done it; we may be on the verge of superhuman skill at absurdist comedy! That's not even completely a joke. Look at the sentence "I am a shape-shifter. I can't change the world. I can only change myself." It's successful (whether intended or not) wordplay. "I can't change the world. I can only change myself" is often used as a sort of moral truism (e.g. Man in the Mirror, Michael Jackson). In contrast, "I am a shape-shifter" is a literal claim about one's ability to change themselves.

The upshot is that GPT-3 can equivocate between the colloquial meaning of a phrase and the literal meaning of a phrase in a way that I think is clever. I haven't looked into whether the other GPTs did this (it makes sense that a statistical learner would pick up this kind of behavior) but dayum.

Comment by Isnasene on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-10T06:56:18.862Z · LW · GW
I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us.

This is probably the crux of our disagreement. If an AI is indeed powerful enough to wrest power from humanity, the catastrophic convergence conjecture implies that it by default will. And if the AI is indeed powerful enough to wrest power from humanity, I have difficulty envisioning things we could offer it in trade that it couldn't just unilaterally satisfy for itself in a cheaper and more efficient manner.

As an intuition pump for this, I think that the AI-human power differential will be more similar to the human-animal differential than the company-human differential. In the latter case, the company actually relies on humans for continued support (something an AI that can roll out human-level AI won't need to do at some point) and thus has to maintain a level of trust. In the former case, well... people don't really negotiate with animals at all.

Comment by Isnasene on Multiple Arguments, Multiple Comments · 2020-05-09T03:26:43.954Z · LW · GW

Yeah, I don't do it, mainly for selfish reasons, but I agree that there are a lot of benefits to separating arguments into multiple comments in terms of improving readability and structure. Frankly, I commend you for doing it (and I'm particularly amenable to it because I like bullet-points). With that said, here are some reasons -- which you shouldn't take too seriously -- for why I don't:

Selfish Reasons:

  • It's straightforwardly easier -- I tend to write my comments with a sense of flow. It feels more natural for me to type from start to finish and hit submit once than write and submit multiple things
  • I often use my comments to practice writing/structure and, the more your arguments are divided into different comments, the less structure you need. In some cases, reducing structure is a positive but it's not really what I'm going for.
  • When I see several comment notifications on the little bell in the corner of my screen, my immediate reaction is "oh no I messed up" followed by "oh no I have a lot of work to do now." When I realize it's all by one person, some of this is relieved but it does cause some stress -- more comments feels like more people even if it isn't.

Practical Reasons:

  • If multiple arguments rely on the same context, it allows me to say the context and then say the two arguments following it. If I'm commenting each argument separately, I have to say the context multiple times -- one for each argument relying on it
  • Arguments in general can often have multiple interactions -- so building on one argument might strengthen/weaken my position on a different argument. If I'm splitting each argument into its own comment, then I have to link around to different places to build this.
  • When I'm reading an argument, it's often because I'm trying to figure out which position on a certain thing is right, and I don't want to dig through comments that may serve other purposes (i.e. top-level replies may often include general commentary or explorations of post material that aren't literally arguments). In this context, having to dig through many different kinds of comments to find the arguments is a lot more work than just finding a chain [Epistemic Status: I haven't actually tried this]. This isn't an issue for second-level comments.
  • Similarly, when deciding what position to take, I like some broader unifying discussion of which arguments were right and which were wrong, leading to some conclusion about the position itself. If 3/4 of your arguments made good points and it's not a big deal that the fourth was wrong, this should be explored. Similarly, if 1/4 of your arguments made good points but that one is absolutely crucially significant compared to the others, this should be explored as well. If you do a conventional back-and-forth argument, this is a nice way to end the conversation, but it becomes more complex if you split your arguments into multiple comments. [Note that in some cases though, it's better to make your readers review each argument and think critically for themselves!]
Comment by Isnasene on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T02:27:27.247Z · LW · GW

Nice post! The moof scenario reminds me somewhat of Paul Christiano's slow take-off scenario which you might enjoy reading about. This is basically my stance as well.

AI boxing is actually very easy for Hardware Bound AI. You put the AI inside of an air-gapped firewall and make sure it doesn't have enough compute power to invent some novel form of transmission that isn't known to all of science. Since there is a considerable computational gap between useful AI and "all of science", you can do quite a bit with an AI in a box without worrying too much about it going rogue.

My major concern with AI boxing is the possibility that the AI might just convince people to let it out (ie remove the firewall, provide unbounded internet access, connect it to a Cloud). Maybe you can get around this by combining a limited AI output data stream with a very arduous gated process for letting the AI out in advance but I'm not very confident.

If the biggest threat from AI doesn't come from AI Foom, but rather from Chinese-owned AI with a hostile world-view.

The biggest threat from AI comes from AI-owned AI with a hostile worldview -- no matter how the AI gets created. If we can't answer the question "how do we make sure AIs do the things we want them to do when we can't tell them all the things they shouldn't do?", we might wind up with Something Very Smart scheming to take over the world while lacking at least one Important Human Value. Think Age of Em except the Ems aren't even human.

Advancing AI research is actually one of the best things you can do to ensure a "peaceful rise" of AI in the future. The sooner we discover the core algorithms behind intelligence, the more time we will have to prepare for the coming revolution. The worst-case scenario still is that some time in the mid 2030's a single research team comes up with a revolutionary new software that puts them miles ahead of anyone else. The more evenly distributed AI research is, the more mutually beneficial economic games will ensure the peaceful rise of AI.

Because I'm still worried about making sure AI is actually doing the things we want it to do, I'm worried that faster AI advancements will imperil this concern. Beyond that, I'm not really worried about economic dominance in the context of AI. Given a slow takeoff scenario, the economy will be booming like crazy wherever AI has been exercised to its technological capacities even before AGI emerges. In a world of abundant labor and so on, the need for mutually beneficial economic games with other human players, let alone countries, will be much less.

I'm a little worried about military dominance though -- since the country with the best military AI may leverage it to radically gain a geopolitical upper hand. Still, we were able to handle nuclear weapons, so we should probably be able to handle this too.

Comment by Isnasene on It's Not About The Nail · 2020-04-28T22:47:43.900Z · LW · GW

Admittedly the first time I read this I was confused because you went "When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world." This gave the sense that the issue was with the model of the world and not the world itself. This isn't what you meant but I made a list of reasons talking is a thing people do anyway:

  • When you become more vulnerable and the world is less predictable, the support systems you have for handling those things which were created in a more safe/predictable world will have a greater burden. Talking to people in that support system about the issue makes them aware of it and establishes precedent for you requesting more help than usual in the future. Pro-active support system preparation.
  • Similar to talking as a way re-affirming relationships (like you mentioned), talking can also be used directly to strengthen relationships. This might not solve the object-level problem but it gives you more slack to solve it. Pro-active support system building.
  • Even when talking doesn't seem to be providing a solution, it still often provides you information about the problem at hand. For instance, someone else's reaction to your problem can help you gauge its severity and influence your strategy. Often times you don't actually want to find the solution to the problem immediately -- you want to collect a lot of information so you can slowly process it until you reach a conclusion. Information collection.
    • Similarly this is really good if you actually want to solve the problem but don't trust the person you're talking to to actually give you good solutions.
  • Even when talking doesn't seem to be providing a solution, talking typically improves your reasoning ability anyway -- see rubber duck debugging for instance. Note that literally talking about your problems to a rubber duck is more trouble than it's worth in cases where "I'm talking about my problems to a rubber duck" is an emotionally harmful concept.
  • People are evolved to basically interact with far fewer people than we actually interact with today. In the modern world, talking to someone about a problem often has little impact. But back in the day, talking to one of the dozen or so people in your tribe could have massive utility. In this sense I think that talking to people about problems is kinda instinctual and has built in emotional benefits.
Comment by Isnasene on Is ethics a memetic trap ? · 2020-04-24T01:16:29.452Z · LW · GW
Applying these systems to the kind of choices that I make in everyday life I can see all of them basically saying something like:...

The tricky thing with these kinds of ethical examples is that a bunch of selfish (read: amoral) people would totally take care of their bodies, be nice to those they're in iterated games with, try to improve themselves in their professional lives, and seek long-term relationship value. The only unambiguously selfless thing on that list in my opinion is donating -- and that tends to kick the question of ethics down the road to the matter of who you are donating to. This differs in different ethical philosophies.

In any case, the takeaway from this is that people's definitions of what they ought to do are deeply entangled with the things that they would want to do. I think this is why many of the ethical systems you're describing make similar suggestions. But, once you start to think about actions you might not actually be comfortable doing -- many ethical systems make nontrivial claims.

Not every ethical system says you may lie if it makes people feel better. Not every ethical system says you shouldn't eat meat. Not every ethical system says you should invest in science. Not every ethical system says you should pray. Not every ethical system says you should seek out lucrative employment purely to donate the money.

These non-trivial claims matter. Because in some cases, they correspond to the highest leverage ethical actions a person could possibly take -- eclipsing the relevance of ordinary day-to-day actions entirely.

There are easy ways to being a better moral agent, but to do that, you should probably maximize the time you spend taking care of yourself, taking care of others, volunteering, or working towards important issues… rather than reading Kant.

I agree with this though. If you want to do ethical things... just go ahead and do them. If it wasn't something you cared about before you read about moral imperatives, it's unlikely to start being something you care about after.

Comment by Isnasene on TheRealClippy's Shortform · 2020-04-22T03:52:11.612Z · LW · GW

Nah. Based on my interaction with humans who work from home, most aren't really that invested in the whole "support the paperclip factories" thing -- as evidenced by their willingness to chill out now that they're away from offices and can do it without being yelled at (sorry humans! forgive me for revealing your secrets!). Nearly half of Americans live paycheck to paycheck, so (on the margin) Covid19 is absolutely catastrophic for the financial well-being (read: self-agency) of many people, which propagates into the long term via wage scarring. It's completely understandable that they're freaking out.

Also note that many of the people objecting to being forced to stay home are older. They might not be as at-risk as old/infirm people but they're still at serious risk anyway. I'd frankly do quite a bit to avoid getting coronavirus if I could and I'm young. If you're in dire enough straits to risk getting coronavirus for employment, you're probably doing it because you need to -- certainly not because of any abstract concerns about paperclip factories.

That being said, there are totally a bunch of people who are acting like our paperclip-making capabilities outweigh the importance of old and infirm humans. They aren't most humans but they exist. They're called Moloch's Army, and a bunch of the other humans really are working on figuring out how to talk about them in public. Beware though, the protestors you are thinking of might not be the droids you're looking for.

Comment by Isnasene on April Coronavirus Open Thread · 2020-04-19T03:51:55.850Z · LW · GW

I think the brief era of me looking at Kinsa weathermap data has ended for now. My best guess is that covid spread among Kinsa users has been almost completely mitigated by the lockdown and current estimates of r0 are being driven almost exclusively by other demographics. Otherwise, the data doesn't really line up:

  • As of now, Kinsa reports 0% ill for the United States (this is likely just a matter of misleading rounding: New York county has 0.73% ill)
  • New York's trend is a much more aggressive drop than what would be anticipated by Cuomo's official estimate of r0=0.9.
  • None of these trends really fall in line with state-by-state r0 estimates[1] either
    • Georgia has the worst r0 estimate of 1.5 but Fulton County GA (Atlanta) has been flat at 0% ill since April 7 according to Kinsa

[1] Linking to the Twitter link because there is some criticism of these estimates: "They use case counts, which are massively and non-uniformly censored. A big daily growth rate in positive cases is often just testing ramping up or old tests finally coming back."

Comment by Isnasene on Deminatalist Total Utilitarianism · 2020-04-17T18:19:41.676Z · LW · GW

On the practical side, figuring out the -u0 penalty for non-humans is extremely important for those adopting this sort of ethical system. Animals that produce lots of offspring that rarely survive to adulthood would rack up -u0 penalties extremely quickly while barely living long enough to offset those penalties with hedonic utility. This happens at a large enough scale that, if -u0 is non-negligible, wild animal reproduction might be the most dominant source of disutility by many orders of magnitude.
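
To make the worry concrete, here's a toy tally of a species' annual contribution under this kind of system (my own illustrative framing and symbols, not the formula from the post):

$$U \approx N\left[-u_0 + p\,\bar{u}\,T_{\text{adult}} + (1-p)\,\bar{u}\,T_{\text{juvenile}}\right]$$

where $N$ is the number of offspring born, $p$ the fraction surviving to adulthood, $\bar{u}$ the average hedonic utility per unit time, and the $T$s the relevant lifespans. For an r-selected species with $p$ near zero and $T_{\text{juvenile}}$ measured in days, the $-N u_0$ term dominates unless $u_0$ is far smaller than the hedonic value of a few days of life.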

When I try to think about how to define -u0 for non-humans, I get really confused -- more-so than I do when I reason about how animals suffer. The panpsychist approach would probably be something like "define death/life or non-existence/existence as a spectrum and make species-specific u0s proportional to where species fall on that spectrum." Metrics of sapience/self-awareness/cognition/other-things-in-that-cluster might be serviceable for this though.

Comment by Isnasene on The Unilateralist’s “Curse” Is Mostly Good · 2020-04-17T04:53:09.402Z · LW · GW

Yeah, my impression is that the Unilateralist's Curse being something bad mostly relies on the assumption that everyone is taking actions based on the common good. From the paper,

Suppose that each agent decides whether or not to undertake X on the basis of her own independent judgement of the value of X, where the value of X is assumed to be independent of who undertakes X, and is supposed to be determined by the contribution of X to the common good...

That is to say -- if each agent is not deciding to undertake X on the basis of the common good, perhaps because of fundamental value differences or subconscious incentives, there is no longer an implication that the unilateral action will be chosen more often than it ought to be.

I believe the examples of Galileo and the Pentagon Papers are both cases where the "common good" assumption fails. In the context of Galileo, it's easy to justify this -- I'm an anti-realist and the Church does not share my ethical stances, so they differ in terms of the common good. In the context of the Pentagon Papers, one has to grapple with the fact that most of the people choosing not to leak them were involved in the not-very-common-good-at-all actions that those papers revealed.

The stronger argument for the Unilateralist's Curse for effective altruism in particular is that, for most of us, our similar perceptions of the common good are what attracted us in the first place (whereas, in many examples of the Unilateralist's Curse, values are very inhomogeneous). Also, because cooperation is game-theoretically great, there's a sort of institutional pressure for those involved in effective altruism to assume others are considering the common good in good faith.

Comment by Isnasene on Choosing the Zero Point · 2020-04-09T17:53:33.913Z · LW · GW

Thanks for confirming. For what it's worth, I can envision your experience being a somewhat frequent one (and I think it's probably actually more common among rationalists than among the average Jo). It's somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world and give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions almost just because I point out that they can. There's no specific ethical sub-agent and specific selfish sub-agent, just a whole vaguely selfish person with accurate framing and a willingness to allocate resources when it's easy.

Maybe these people have not internalized the implications of a low zero-point world in the same way we have but it generally pushes me away from a sub-agent framing with respect to the average person.

I'll also agree with your implication that my experience is relatively uncommon. I do far more internal double cruxes than the norm and it's definitely led to some unusual psychology -- I'm planning on doing a post on it one of these days.

Comment by Isnasene on Choosing the Zero Point · 2020-04-09T17:32:12.370Z · LW · GW
That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.)

Ah, understandable. I felt a similar way back when I was doing materials engineering -- and I admit I put a lot of work into figuring out how to connect my research with doing good before I moved on from that. I think that when you're working on something you're passionate about, you're much more likely to try to connect it to making a big positive impact and to convince yourself that your coworkers are making a big positive impact.

That being said, I think it's important to distinguish impressiveness from ethical significance and to recognize that impressiveness itself is a personally-selected free variable. If I described myself as a very skilled computational researcher (more impressive), I'd feel very good about my ethical performance relative to my reference class. But if I described myself as a financially blessed rationalist (less impressive), I'd feel rather bad.

There are two opposite pieces of advice here, and I don't know how to tell people which is true for them- if anything, I think they might gravitate to the wrong piece of advice, since they're already biased in that direction.

In any case, I agree with you at the object level with respect to academia. Because academic research is often a passion project, and we prefer our passions to be ethically significant, and academic culture is particularly conducive to imposter syndrome, overestimating the ethical contributions of our corresponding academic reference class is pretty likely. Now that I'm EtG in finance, the environmental consequences are different.

Actually, how about this -- instead of benchmarking against a world where you're a random member of your reference class, you just benchmark against the world where you don't exist at all? It might be more lax than benchmarking against a member of your reference class in cases where your reference class is doing good things, but it also protects you from unnecessary ethical anguish caused by social distortions like imposter syndrome. Also, since we really want to believe that our existences are valuable anyway, it probably won't incentivize any psychological shenanigans we aren't already incentivized to do.

Comment by Isnasene on Choosing the Zero Point · 2020-04-08T23:34:04.924Z · LW · GW
I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class"

If you move your zero-point to reflect world-trajectory based on a random person in your reference class, it creates incentives to view the average person in your reference class as less altruistic than they truly are and to unconsciously normalize bad behavior in that class.

Comment by Isnasene on Choosing the Zero Point · 2020-04-08T23:29:43.044Z · LW · GW
It's also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives a positive reinforcement to helpful actions, rather than punishing oneself from any departure from helpful actions.

I just want to point out that, while two utility functions that differ only in zero point produce the same outcomes, a single utility function with a dynamically moving zero-point does not. If I just pushed the world into the positive yesterday, why do I have to do it again today? The human brain is more clever than that and, to successfully get away with it, you'd have to be using some really nonstandard utilitarianism.

Comment by Isnasene on Choosing the Zero Point · 2020-04-08T23:24:36.373Z · LW · GW

Huh... I think the crux of our differences here is that I don't view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior -- I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness over a) performing a trick like donating, which isn't really that unpleasant in itself, or b) directly resolving the emotional pain in a way that doesn't modify the ethical framework/ultimate actions really perturbs me.

Can you confirm that the above interpretation is appropriate? I think it's less clearly true than just "positive reinforcement vs punishment" (which I agree with) and I want to be careful interpreting it in this way. If I do, it will significantly update my world-model/strategy.

Comment by Isnasene on Choosing the Zero Point · 2020-04-07T23:39:41.655Z · LW · GW
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)

I share your problem with purity ethics... I almost agree with this? Frankly, I have some issue with juxtaposing the claim "a utilitarian with a different zero-point/bare standard of decency has the same utility function, so feel free to move yours!" with something kind of like the claim "it's alright to not be very utilitarian!" The claims kind of invalidate each other. Don't get me wrong, there's definitely some sort of ethical pareto-frontier where you balance the strength of each claim individually but, unless that's qualified, I'm not thrilled.

For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their meat/egg consumption by two-thirds. The "insufficiently horrified" framing makes it sound like neither of the two people in the latter case really count, while at least one person in the former does count.

There are two things going on here -- the actual action of meat consumption and the internal characterization of horror. Actions that involve consuming less meat might point to short-term ethical improvements, but people who are horrified by consuming meat point to much longer-term ethical improvements. If I had a choice between two people who cut meat by two-thirds and the same people doing the same thing while also kinda being horrified by what they're doing, I'd choose the latter.

Do you agree (without getting into which outcome is easier for activism to achieve) that the latter outcome is preferable to the former? And separately, does it aesthetically feel better or worse?

For similar reasons, I'd prefer one vegan over two people who'd cut meat by 2/3. Being vegan points to a level of experienced horror, and that points to them being a long-term ethical ally. Cutting meat by 2/3 points towards people who are kinda uncomfortable with animal suffering (though more likely motivated by health concerns, tbh) but who probably aren't going to take any significantly helpful actions about it.

And in reverse, I'd prefer one meat-eater on the margin who does it out of physical necessity but is horrified by it to a vegan who does it because that's how they grew up. The long-term implication of the horror is sometimes better than the direct consequence of the action.

Comment by Isnasene on Choosing the Zero Point · 2020-04-07T23:12:06.172Z · LW · GW
Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening.

Load-bearing horror != constant anguish. There are ways to have an intuitively low zero-point for the world that don't lead to constant anguish. Other than that, I agree with you -- constant anguish is bad. The extent of my ethics-related anguish is probably more along the lines of 2-3 hour blocks of periodic frustration that happen every couple weeks.

That could be true for you, but it seems counter to the way most people work. Constant anguish tends not to motivate, it instead leads to psychological collapse, or to frantic measures when patience would achieve more, or to protected beliefs that resist challenge in any small part.

Yeah, this is my experience with constant anguish as well (though the root cause of that was more school-related than anything else). I agree with your characterization, and (as a mildly self-interested person) I also don't really think it's reasonable to demand that people be in constant anguish at all -- regardless of the utilitarian consequences.

To play Devil's Advocate though, I (and many others) are not in the class of people whose psychological wellbeing or decision-making skills actually contribute much to ethical improvement at all; we're in the class of people who donate money. Unless the anguish of someone in this class is strong enough to impede wealth accumulation toward donating (which it basically can't once you have enough money that your stock market returns compete with your income), there's not really a reason to limit it.

Comment by Isnasene on Choosing the Zero Point · 2020-04-07T15:56:45.828Z · LW · GW

As an animal-welfare lacto-vegetarian who's seen a fair number of arguments along these lines, they don't really do it for me. In my experience, it's not really possible to separate human peace of mind from the actions you take (the former reflects an ethical framework, the latter reflect strategies, and together they form an aesthetic feedback loop). To be explicit:

  • I don't think my moral zero-point was ever up for grabs. Moreover, it wasn't "the world I interact with every day"; it was driven by an internal sense of what makes existing okay and what doesn't, extrapolated over the universe. Raising/lowering my zero-point is therefore internally connected with my heuristic for whether more beings should exist or not, and in this sense the zero-point was only a proxy for my psychological anguish pointing at this concept. If I artificially inflate/depreciate my zero-point while maintaining awareness that this has no effect on whether or not the average being existing is good or bad, it won't actually change how I feel psychologically.
  • A vast amount of my anguish around having a very low zero-point was social angst. A low zero-point (especially when due to animal welfare) not only meant that the world was bad; it meant that barely anyone cared (and in my immediate bubble, literally no one cares). This stuff occurred to me when I was very young and can result in what I now know to be institutional betrayal trauma. Had I been an ordinary kiddo who didn't make real-time psychological corrections when my brain started acting funny, this would've happened to me.
    • Also, while I get what you're saying, having a different value of something psychologically linked to a normative claim about "when it is good to exist" or "the bare standard of human decency" will gaslight people traumatized by mismatches between those claims and people's actual actions. If you keep this zero-point alteration tool solely for the psychological benefits, it's not a big deal. But if you talk to people about ethics and think your moral statements might reflect a modified zero-point, then it can be an issue. In light of this, I'd recommend prefacing your ethical statements with something like "if I seem insufficiently horrified, it is only because I am deliberately modifying my definition of the bare standard of human decency/zero-point for reasons of mental well-being". Otherwise, you'll mess a whole bunch of people up.
  • You've pointed out that changing your zero-point gives you a number of psychological benefits. However, I think most of these psychological benefits come from the fact that people are more satisficing than utilitarian, which causes zero-point shifts to also act as nonlinear transformations of your utility function (see the toy sketch after this list). If you're accustomed to being internally satisfied by the world having utility over threshold X and you change your zero-point for the world without changing that threshold, you'll predictably have more acceptance, relief and hope -- but this is because you've performed a de-facto nonlinear transformation of your utility function. Sometimes this, conditioned on being an irrational human, is a good thing to do to be more effective. Sometimes it makes you vulnerable to unbounded amounts of moral hazard. If you're arguing in favor of zero-point moving, you need to address the concerns implied by the latter possibility.
  • For evidence that these claims generalize beyond me, just look at your quote from Rob. He's talking about a "bare standard of human decency" but note that this standard is actually a set of strategies! As you pointed out, strategies are invariant if you change your utility function's zero point, so the bare standard of human decency should be invariant too! For a non-utilitarian, this means you have four options with respect to your zero-point, and each of them has its own drawbacks:
    • Not changing your zero-point and biting the bullet psychologically
    • Changing your zero-point but decoupling it from your sense of the "bare standard of human decency" which is held constant. This eliminates the psychological benefits
    • Changing your zero-point and allowing your "bare standard of human decency" to drift. This modifies your utility function.
    • Changing your zero-point and allowing your "bare standard of decency" to drift but decoupling your "bare standard of decency" from the actions you actually make. This will either eliminate the psychological benefits or break your sense of ethics
Comment by Isnasene on April Coronavirus Open Thread · 2020-04-06T13:54:02.165Z · LW · GW

Thanks for pointing this out. Having recently looked at Ohio County KY, I think this is correct. %ill there maxed out at above 1% of the typical range but has since dropped below 0.4% of the typical range and started rising again (which is notable in contrast with seasonal trends) [Edit to point out that this is true for many counties in the Kentucky/Tennessee area]. This basically demonstrates that having a reported %ill now that is lower than previously in the Kinsa database is insufficient to show r0<1. Probably best to stick with the prior of containment failure.

Comment by Isnasene on Life as metaphor for everything else. · 2020-04-05T20:01:50.815Z · LW · GW
"I only care about animal rights because animals are alive"
1. Imagine seeing someone take a sledgehammer to a beautiful statue. How do you feel?
2. Someone swats a mosquito. How do you feel?

In this context, I think the word rights is doing a lot of work that your question is not capturing. While seeing someone destroy a beautiful statue would feel worse than seeing someone swat a mosquito, this in no way indicates that I care about "statue rights." I acknowledge that the word rights is kind of fuzzy but here's my interpretation:

I feel bad about someone destroying a beautiful statue simply because a) I find the statue beautiful and view its existence as advancing my values with respect to beauty and b) I express empathy for others who care about the statue. It doesn't have a right to exist; I would just prefer it to exist, and I ascribe a right to living beings who have similar preferences to have those preferences remain unviolated.

I feel bad about a mosquito getting swatted insofar as the mosquito has a right to exist -- because its own preferences and individual experiences merit consideration on their own grounds.

Also, do you bury or eat the dead? (Animals, not humans. What about pets?)

If you bury the dead for the sake of the deceased, then you grant the dead rights -- and I think many people do this. But if you bury the dead for your own sake, then you do not -- you are just claiming that you have the right to bury the dead or that the living have the right to ensure the burial of their dead bodies.

If you bury pets but not other animals, it is not the pet that has the right to be buried; it is that pet owners have the right for their pets to be buried.

Comment by Isnasene on April Coronavirus Open Thread · 2020-04-04T09:09:03.907Z · LW · GW

I've been playing with the Kinsa Health weathermap data to get a sense of how effective US lockdowns have been at reducing US fever. The main thing I am interested in is the question of whether lockdown has reduced coronavirus's r0 below 1 (stopping the spread) or not (reducing spread-rate but not stopping it). I've seen evidence that Spain's complete lockdown has not worked so my expectation is that this is probably the case here. Also, Kinsa's data has two important caveats:

  • People who own smart thermometers are more likely to be health conscious than the overall population. Kinsa may therefore overstate the effect of the lockdown by not effectively sampling the health-apathetic people more likely to get the virus.
  • Kinsa data cannot separate coronavirus fever symptoms from flu fever symptoms. At the early stages of coronavirus spread, seasonal flu illness dominates coronavirus illness, and seasonal flu r0 is between 1-2. This means that a lockdown can easily eliminate symptoms caused by seasonal flu illness by reducing flu r0 below one without reducing coronavirus's r0 below one.
    • I'm addressing this by comparing the largest amounts of observed atypical illness over the last month in different locations with their current total illness to get a conservative estimate of how much coronavirus %ill has changed (the arithmetic is sketched in code at the end of this comment).

With this in mind, my overall conclusion is that the Kinsa data does not disconfirm the possibility that we've reduced r0 below 1. Within the population of people who use smart thermometers, we've probably stopped the spread, but it may/may not have stopped in the overall population. Here are my specific observations:

  • The overall US %ill weakly suggests we may have reduced r0 below 1. It maxed out at around 5.1% ill compared to a range of 3.7-4.7 %ill. This indicates that 0.4-1.4% of overall illness was due to coronavirus, and currently total illness is only 0.88%. This means that, for many values in that range, our lockdowns are actually cutting into the percent of people getting coronavirus and therefore that the virus is not growing.
  • New York County NY %ill weakly suggests that we may have reduced r0 below 1. It maxed out at 6.4 %ill compared to a typical range of 2.75-4.32, indicating that 2.1-3.65% of people had coronavirus. Currently, total illness is 2.56%. Again, for most values in that range, it looks like we're reducing the absolute amount of coronavirus.
  • Cook County IL (Chicago) %ill is very weakly positive on reducing r0 below 1. It maxed out at 5.4 %ill with a range of 2.8-4.9, indicating that 0.5-2.6% of people had coronavirus. Currently the total is 0.92%, which suggests we've likely cut into coronavirus illness. The range of typical values is so large, though, that it's hard to reach a conclusion.
  • Essex County NJ (Newark) %ill doesn't say much about r0. It maxed out at 6.1 compared to a typical range of 2.9-4.5, which implies a range of coronavirus %ill of 1.6-3.2. The current value is 2.63%, which is closer to the higher end of the range, so there's no evidence that we've reduced the amount of coronavirus. Still, %ill is continuing to trend down, so this may change in the future.
  • I also considered looking at Santa Clara County CA, Los Angeles County CA, and Orleans Parish LA (New Orleans) but their %ill never exceeded the atypical value by a large enough amount for me to perform comparison.
  • On Mar28, the overall US %ill changed from a steep linear drop of ~-0.3%ill/day to a weaker linear drop of ~-0.1%ill/day. Also on Mar28, both Newark's and New York's fast linear drop was broken with a slight increase in illness, and it looks like we're on our second leg down there now. Similarly, on Mar27, Chicago's fast linear drop was broken with a brief plateau and second leg down. No idea why this happened.
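
For concreteness, here's the arithmetic behind the region-by-region ranges above as a quick sketch (the numbers are copied from the bullets; the layout and variable names are mine, not Kinsa's):

```python
# Implied covid %ill at the peak = peak %ill minus the typical seasonal range,
# then compare that range against the current total %ill.

regions = {
    # region: (peak %ill, typical low, typical high, current %ill)
    "US overall":         (5.1, 3.7, 4.7, 0.88),
    "New York County NY": (6.4, 2.75, 4.32, 2.56),
    "Cook County IL":     (5.4, 2.8, 4.9, 0.92),
    "Essex County NJ":    (6.1, 2.9, 4.5, 2.63),
}

for name, (peak, lo, hi, current) in regions.items():
    covid_lo, covid_hi = peak - hi, peak - lo  # %ill attributable to covid at the peak
    print(f"{name}: implied covid %ill {covid_lo:.2f}-{covid_hi:.2f}, "
          f"current total %ill {current:.2f}")
```

Wherever the current total sits near or below the low end of the implied covid range, the lockdown is plausibly cutting into absolute coronavirus illness rather than just flu.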
Comment by Isnasene on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-04T00:07:03.983Z · LW · GW

Fair enough. When I was thinking about "broad covid risk", I was referring more to geographical breadth -- something more along the lines of "is this gonna be a big uncontained pandemic" than "is coronavirus a bad thing to get." I grant that the latter could have been a valid consideration (after all, it was with H1N1) and that claiming that it makes "no implication" about broader covid risk was a misstatement on my part.

That being said, I wouldn't really consider it an alarm bell (and when I read it, it wasn't one for me). The top answer, from Connor Flexman, states:

Tl;dr long-term fatigue and mortality from other pneumonias make this look very roughly 2x as bad to me as the mortality-alone estimates.
It’s less precise than looking at CoVs specifically, but we can look at long-term effects just from pneumonia.

For me personally:

  • A 2x increase in how bad Covid19 was in February was not cause for much alarm in general. I just wasn't that worried about a pandemic.
  • The answer is based on long-term effects of pneumonia, not covid itself (which isn't measurable). If I read something that said "hey, you have a surprisingly high likelihood of getting pneumonia this year", I would be alarmed. This wasn't really that post.
  • I was already kind of expecting that Covid could cause pneumonia based on typical coverage of the virus -- I wasn't surprised by the post in the way I'd expect to be if it was an alarm bell

I'll give the post some points for pointing out a useful, valuable and often-neglected consideration but I dunno. At that time I saw "you are in danger of getting coronavirus" posts as different from "coronavirus can cause bad things to happen" posts. And the former would've been alarm bells and the latter wouldn't've been.

Comment by Isnasene on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-03T21:17:52.239Z · LW · GW

While I agree with the specific claims this post is making (i.e. "Less Wrong provided information about coronavirus risk similar to or just-lagging the stock market"), I think it misses the thing that matters. We're a rationality forum, not a superintelligent stock-market-beating cohort[1]! Compared to the typical human's response to coronavirus, we've done pretty well at recognizing the dangers posed by the exponential spread of pandemics and acting accordingly. Compared to the very smart people who make money by predicting the economic effects of a virus, we've been expectedly mediocre -- after all, none of us (including the stock market) really had any special information about the virus's trajectory.

Maybe it is disappointing if we lagged the stock market instead of being perfectly on pace with it, but a week of lag is a pretty small amount of time in the grand scheme of things. And I'd expect different auditing methodologies/interpretations to produce about that much variance. In any case, I don't really think that it's a big deal.

[1]That is, unless you count Bitcoin, which Eliezer Yudkowsky doesn't.

Comment by Isnasene on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-03T20:25:36.446Z · LW · GW

The question in this post is "was Less Wrong a good alarm bell" and in my opinion only one of those links constitute alarm bells -- the one on EAForums. Acknowledging/discussing the existence of the coronavirus is vastly different from acknowledging/discussing the risk of the coronavirus.

  • "Will ncov survivors suffer lasting disability at a high rate?" is a medical question that makes no implication about broader covid risk.
  • "Some quick notes on hand-hygene" does not mention the coronavirus in the main post (but to be fair does have a coronavirus tag). It does make an offhand reference implying the coronavirus could be a "maybe pandemic" but this isn't a concrete estimation of actual risk
  • "Concerning the recent 2019 novel coronavirus outbreak" is a fantastic post that makes concrete claims like it now seems reasonable to assign a non-negligible probability (>2%) to the proposition that the current outbreak will result in a global disaster (>50 million deaths resulting from the pathogen within 1 year). Per one of the comments, this was consistent with Metaculus.

Overall, I'd say that LessWrong was about on par with "having lunch conversations with Chinese-American coworkers" in terms of serving as an actual alarm bell. Moreover, in the case that we admit a weaker standard for what an alarm bell is, it's worth noting that we still don't really beat the stock market -- because it did respond to the coronavirus back in January. It just didn't respond strongly enough to convey an actual broad concrete risk.

I also would be somewhat hesitant about saying that the markets crashed on February 20th. The market continued crashing for quite a while, and this is when Wei Dai wrote some comments about his investment strategy, which, if you had followed it at that point would have still made you a good amount of money.

As someone who pretty regularly follows Less Wrong, I missed Wei Dai's investment strategy which makes me lean in the direction that most casual readers wouldn't have benefitted from it. The linked comment itself also doesn't have very strong valence, stating " The upshot is that maybe it's not too late to short the markets." Low valence open-thread comments don't really sound like alarm bells to me. Wei Dai has also acknowledged that this was a missed opportunity on EAforums.

Moreover, there was also an extremely short actionable window. On February 28th, the stock market saw a swift >5% bear market rally before the second leg of the crash, which temporarily undid half the losses. Unless your confidence in "maybe it's not too late to short the markets" was strong enough to weather through this, you would've probably lost money. This almost happened to me -- I sold the Thursday morning after Wei Dai's comment and bought back in Monday, netting a very meek ~3% gain.

Comment by Isnasene on March Coronavirus Open Thread · 2020-03-30T22:58:00.101Z · LW · GW

[Epistemic Status: It's easy to be fooled by randomness in the coronavirus data but the data and narrative below make sense to me. Overall, I'm about 70% confident in the actual claim. ]

Iran's recent worldometer data serves as a case study demonstrating the relationship between sufficient testing and case-fatality rate. After a 16-day-long plateau (Mar 06-22) in daily new cases which may have seemed reassuring, we've seen five days (Mar 24-28) of roughly linear rise. We could anticipate this by noticing that over a similar time frame (Mar 07-19), we were seeing a linear rise in case fatality rate before it became constant. This indicates the following narrative (not sure if it's actually true; a toy numerical sketch of the mechanism follows the list):

  • Coronavirus spreads uncontrolled in Iran without increased testing capabilities. This causes reported daily new cases to stay constant despite increasing infections -- the 16-day-long plateau in daily new cases.
  • Because cases are increasing, the number of severe cases is also increasing -- and severe cases are more likely to get tested than less severe cases. This causes fatality rate to rise as the severity of the cases that are actually tested increases -- the 12 day linear rise in case fatality
  • Recently, testing capabilities were ramped up, allowing testing of more people and the observation of less severe cases. As a result, the number of daily cases started increasing again with the testing rate. Simultaneously, the fatality rate plateau'd as the (complex) trend of severe cases being tested in greater proportion than less severe cases was cancelled out by the trend in testing. Hence the last five days of daily new case rise and the past eight days of constant fatality rate.
    • Note that this narrative suggests that testing is being continuously ramped up while remaining the bottleneck. Two pieces of evidence for this:
      • The daily cases start increasing linearly from the plateau. If testing were increased dramatically, one would expect an immediate discontinuous increase in the number of daily cases at the point where more tests are done.
      • Iran's death rate is still much higher (17% compared to an IFR which should be less than 5%) so testing is unlikely to be sufficient to capture the true infection rate
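
Here's the promised toy numerical sketch of the mechanism (entirely my own construction, with made-up illustrative numbers): true infections grow steadily while daily tests are capped and triaged toward severe cases. While testing is the bottleneck, reported daily cases plateau and observed case fatality climbs; once testing ramps up in step with the epidemic, reported cases rise again and the case-fatality rate flattens.

```python
SEVERE_FRACTION = 0.15              # assumed share of infections that are severe
CFR_SEVERE, CFR_MILD = 0.30, 0.01   # assumed fatality rates among tested cases

new_infections, tests = 1_000.0, 1_200.0
for day in range(1, 31):
    new_infections *= 1.08                     # uncontrolled spread
    if day > 20:
        tests *= 1.08                          # testing ramp begins on day 21
    severe = SEVERE_FRACTION * new_infections
    confirmed = min(new_infections, tests)     # reported daily cases
    confirmed_severe = min(severe, confirmed)  # severe cases get tested first
    confirmed_mild = confirmed - confirmed_severe
    observed_cfr = (CFR_SEVERE * confirmed_severe + CFR_MILD * confirmed_mild) / confirmed
    print(f"day {day:2d}: reported {confirmed:6.0f}, observed CFR {observed_cfr:.1%}")
```

Running this, reported cases flatline at the testing cap for a couple of weeks while the observed CFR climbs, then cases resume rising with a roughly constant CFR once testing grows at the same rate as the epidemic -- qualitatively the Iran pattern above.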
Comment by Isnasene on Adding Up To Normality · 2020-03-27T00:08:48.025Z · LW · GW
I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.

Ah, yeah I agree with this observation -- and it could be good to just assume things add up to normality as a general defense against people rapidly taking naive actions. Scarcity bias is a thing after all and if you get into a mindset where now is the time to act, it's really hard to prevent yourself from acting irrationally.

Comment by Isnasene on Adding Up To Normality · 2020-03-26T18:04:30.786Z · LW · GW
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.

Yeah, but my point is not about catastrophic risk -- it's about the risk/reward trade-off in general. You can have risk>reward in scenarios that aren't catastrophic. Catastrophic risk is just a good general example of where things don't add up to normality (catastrophic risks by nature correspond to not-normal scenarios and also coincide with high risk). Don't promise yourself to steer the plane mostly as normal, promise yourself to pursue the path that reduces risk over all outcomes you're uncertain about.

I don't think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly worth the value of information thus gained.

Good point, it really depends on the details of the example but this is just because of the different risk-reward trade-offs, not because you ought to always treat things as adding up to normality. I'll counter that while you shouldn't leave the job (high risk, hard to reverse), you should see if you could use your PTO as soon as possible so you can figure things out without potentially causing further negative impact. It all depends on the risk-reward trade-off:

  • If stopping activism corresponds to something like leaving a job, which is hard to reverse, doing so involves taking on a lot of risk if you're uncertain and waiting for a bit can reduce that risk.
  • If stopping activism corresponds to something like shifting your organizations priorities, and your organization's path can be reversed, then stopping work (after satisfying all existing contracts of course) is pretty low risk and you should stop
  • If stopping activism corresponds to donating large amounts of money (in earning-to-give contexts), your strategy can easily be reversed and you should stop now.

This is true even if you only have "small" amounts of impact.

Caveat:

People engage in policies for many reasons at once. So if you think the goal of your policy is X, but it's actually X, Y and Z, then dramatic actions justified on uncertainty about X alone will probably be harmful due to Y and Z effects even if it's the appropriate decision with respect to X. Because it's easy to notice why a thing might go wrong (like X) and hard to notice why things are going right (like Y and Z), adding-up-to-normality serves as a way to generally protect Y and Z.

Comment by Isnasene on Adding Up To Normality · 2020-03-25T15:39:29.150Z · LW · GW

I think the strongest version of this idea of adding up to normality is "new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations." Therefore, when one's actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen -- regardless of any new information. But this strict version of 'adding up to normality' does not apply in situations where one's actions are contingent on unobservables. In cases where new evidence/knowledge may cause someone to dramatically revise the implications of previous observations, things don't add up to normality. Whether this is the case or not for you as an individual depends on your gears-level understanding of your observations.

So in retrospect, the main thing I'd recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift

I somewhat disagree with this. I think, in these kinds of situations, the recommendation should be more along the lines of "promise yourself to make the best risk/reward trade-off you can given your state of uncertainty." If you're flying in a plane that has a good track record of flying, definitely don't touch anything because it's more risky to break something that has evidence of working than it is rewarding to fix things that might not actually work. But if you're flying in the world's first plane and realize you don't understand lift, land it as soon as possible.

Some Reasons Things Add Up to Normality

  • If you think the thing you don't understand might be a Chesterton's Fence, there's a good chance it will add up to normality
  • If you think the thing you don't understand can be predicted robustly by inductive reasoning and you only care about being able to accurately predict the thing itself, there's a good chance it will add up to normality

Some Examples where Things Don't Add Up

Example #1 (Moral Revisionism)

You're an eco-rights activist who has tirelessly worked to make the world a better place by protecting wildlife because you believe animals have the right to live good lives on this planet too. Things are going just fine until your friend claims that R-selection implies most animals live short horrible lives and you realize you have no idea whether animals actually live good lives in the wild. Should you immediately panic in fear that you're making things worse?

Yes. Whether or not the claim in question is accurate, your general assumption that protecting wildlife implies improved animal welfare was not well-founded enough to address significant moral risk. You should really stop doing wildlife stuff until you get this figured out or you could actually cause bad things to happen.

Example #2 (Prediction Revisionism)

You've built an AGI and, with all your newfound free time and wealth, you have a lengthy chat with a mathematician. Things are going along just fine until they point out that your understanding of the safety measures used to ensure alignment is wrong, and that the safety measures you thought were responsible shouldn't actually produce alignment. Should you immediately panic in fear that the AGI will destroy us all?

Yes. The previous observations are not sufficient to make reliable predictions. But note that a random bystander who is uninvolved with AGI development would be justified in not panicking -- their gears-level understanding hinges on believing that the people who created the AGI are competent enough to address safety, not on believing that the specific details designed to make the AGI safe actually work.

Comment by Isnasene on Good News: the Containment Measures are Working · 2020-03-21T23:02:55.102Z · LW · GW

I shared this post with some of my friends and they pointed out that, as of 3/21/2020, the Italy and Spain curves no longer look as optimistic:

  • On March 16, cases in Italy appeared to be leveling off. Immediately following that, they broke trend and began rising again. March 16 had ~3200 daily cases; March 20 had ~6000.
  • Spain appeared to be leveling off up through March 17th (~1900 daily cases). But on March 18th, it spiked to ~3000. As of March 20th, things may be leveling off again but I wouldn't draw any conclusions
  • Iran's daily cases have stayed flat for a pretty long period of time now -- at around 1000 per day. This seems like it should be good news, though I'm not sure how good: since March 8, Iran's death rate (closed cases) has been steadily rising from 8% to 17.5%.
Comment by Isnasene on Assorted thoughts on the coronavirus · 2020-03-19T00:35:53.542Z · LW · GW
To me that nudges things somewhat, but isn't a game changer. I don't think it makes it 10x less bad or anything.

Fair enough. As a leaning-utilitarian, I personally share your intuition that it isn't 10x as bad (if I had to choose between coronavirus and ending the negative consequences of lifestyle factors for one year, I don't have a strong intuition in favor of coronavirus). Psychologically speaking, from the perspective of average deontological Joe, I think that it (in some sense) is/feels 10x as bad.

Is that really a possibility? I imagine that governments would impose a strict quarantine before letting it get that bad.

10% is unlikely but possible -- not because of the coronavirus itself alone but because of the potential for systemic failure of our healthcare system (based on this comment). I think it's likely that governments may impose a strict quarantine before it gets that bad or (alternatively) bite the bullet and let coronavirus victims die to triage young people with more salient medical needs.

In the situation where you don't have savings or a job, here is what I'm imagining. The majority would have family or a friend they could stay with until they get back on their feet, which doesn't seem that bad.

I partially agree with this. Frankly, as a well-off person myself, I'm not exactly sure what people would do in that situation. Conditioned on having friends or (non-abusive) family with the appropriate economic runway to be supportive, I agree that it wouldn't be that bad. However, these (in my sphere) are often significant contributing factors to being low-income in the first place. For low-income families, things also get messier due to the need to support people being built in.

Homeless shelters do provide basic needs, so if you want to be really hardcore with the "happiness is all in your head" stuff, you should still in theory be ok. But I don't know much about what it's truly like; maybe there's more to it than that.

I agree that this kind of stoicism helps (I resonate a lot with stoicism as a philosophy myself). But I view this as more of a mental skill that is built up rather than something that people start doing immediately when thrust into lower-standard-of-living situations. Hedonic adaptation takes time, and the time it takes before setting in can also be unpleasant. I'd also like to push back a little on the idea of hedonic adaptation with respect to losing money, because there is a correlation between measures of happiness and income which only starts breaking down around $50k.