Posts

Isnasene's Shortform 2019-12-21T17:12:32.834Z · score: 3 (1 votes)
Effective Altruism Book Review: Radical Abundance (Nanotechnology) 2018-10-14T23:57:36.099Z · score: 48 (12 votes)

Comments

Comment by isnasene on Universal Eudaimonia · 2020-10-05T18:14:08.495Z · score: 10 (8 votes) · LW · GW

The trouble here is that deep disagreements aren't often symmetrically held with the same intensity. Consider the following situation:

Say we have Protag and Villain. Villain goes around torturing people and happens upon Protag's brother. Protag's brother is subsequently tortured and killed. Protag is unable to forgive Villain but Villain has nothing personal against Protag. Which of the following is the outcome?

  • Protag says "Villain must not go to Eudaimonia" so neither Protag nor Villain go to Eudaimonia
  • Protag says "Villain must not go to Eudaimonia" so Protag cannot go to Eudaimonia. Villain says "I don't care what happens to Protag; he can go if he wants" so Villain gets to go to Eudaimonia
  • Protag says "Villain must not go to Eudaimonia" but it doesn't matter because next month they talk to someone else they disagree with and both go to Eudaimonia anyway

The first case is sad but understandable here -- but also allows extremist purple-tribe members to veto non-extremist green-tribe members (where purple and green ideologies pertain to something silly like "how to play pool correctly"). The second case is perverse. The third case is just "violate people's preferences for retribution, but with extra steps."

Comment by isnasene on Dutch-Booking CDT: Revised Argument · 2020-06-12T19:59:22.213Z · score: 2 (2 votes) · LW · GW

So, a silly question that doesn't really address the point of this post (this may very well just be a point-of-clarity thing, but it would be useful for me to have an answer, for earning-to-give-related reasons that are off-topic for this post) --

Here you claim that CDT is a generalization of decision-theories that includes TDT (fair enough!):

Here, "CDT" refers -- very broadly -- to using counterfactuals to evaluate expected value of actions. It need not mean physical-causal counterfactuals. In particular, TDT counts as "a CDT" in this sense.

But here you describe CDT as two-boxing in Newcomb, which conflicts with my understanding that TDT one-boxes, given your claim above that TDT counts as a CDT:

For example, in Newcomb, CDT two-boxes, and agrees with EDT about the consequences of two-boxing. The disagreement is only about the value of the other action.

So is this conflict a matter of using the colloquial definition of CDT in the second quote but the broader one in the first, of having a more general framework for what two-boxing is than my own, or of knowing something about TDT that I don't?
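For what it's worth, here is a minimal toy sketch of how I understand the two senses of "counterfactual" to diverge in Newcomb (my own illustrative numbers and framing, not anything from the post): a physical-causal counterfactual holds the predictor's box contents fixed, while a TDT-style logical counterfactual lets the prediction covary with the choice.

```python
# Toy Newcomb payoff calculation (illustrative numbers, not from the post).
BOX_A = 1_000_000   # opaque box: filled iff the predictor expects one-boxing
BOX_B = 1_000       # transparent box: always contains this amount
PREDICTOR_ACCURACY = 0.99

def ev_physical_causal(action, p_box_filled):
    """Physical-causal counterfactual: box contents held fixed, independent of the action."""
    take_b = BOX_B if action == "two-box" else 0
    return p_box_filled * (BOX_A + take_b) + (1 - p_box_filled) * take_b

def ev_logical(action):
    """TDT-style counterfactual: the prediction covaries with the (logical) choice."""
    if action == "one-box":
        return PREDICTOR_ACCURACY * BOX_A
    return (1 - PREDICTOR_ACCURACY) * BOX_A + BOX_B

# Under fixed box contents, two-boxing dominates by exactly BOX_B for every belief
# about the opaque box; under the logical counterfactual, one-boxing wins by far.
for p in (0.0, 0.5, 1.0):
    assert ev_physical_causal("two-box", p) - ev_physical_causal("one-box", p) == BOX_B
assert ev_logical("one-box") > ev_logical("two-box")
```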

Comment by isnasene on OpenAI announces GPT-3 · 2020-05-29T23:14:29.925Z · score: 1 (1 votes) · LW · GW

Thanks! This is great.

Comment by isnasene on OpenAI announces GPT-3 · 2020-05-29T14:01:37.405Z · score: 16 (10 votes) · LW · GW
A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read, "I am a shape-shifter. I can't change the world. I can only change myself."

-- GPT-3 generated news article humans found easiest to distinguish from the real deal.

... I haven't read the paper in detail but we may have done it; we may be on the verge of superhuman skill at absurdist comedy! That's not even completely a joke. Look at the sentence "I am a shape-shifter. I can't change the world. I can only change myself." It's successful wordplay (whether intended or not). "I can't change the world. I can only change myself" is often used as a sort of moral truism (e.g. Michael Jackson's "Man in the Mirror"). In contrast, "I am a shape-shifter" is a literal claim about one's ability to change oneself.

The upshot is that GPT-3 can equivocate between the colloquial meaning of a phrase and the literal meaning of a phrase in a way that I think is clever. I haven't looked into whether the other GPTs did this (it makes sense that a statistical learner would pick up this kind of behavior) but dayum.

Comment by isnasene on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-10T06:56:18.862Z · score: 3 (2 votes) · LW · GW
I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us.

This is probably the crux of our disagreement. If an AI is indeed powerful enough to wrest power from humanity, the catastrophic convergence conjecture implies that it by default will. And if the AI is indeed powerful enough to wrest power from humanity, I have difficulty envisioning things we could offer it in trade that it couldn't just unilaterally satisfy for itself in a cheaper and more efficient manner.

As an intuition pump for this, I think that the AI-human power differential will be more similar to the human-animal differential than the company-human differential. In the latter case, the company actually relies on humans for continued support (something an AI that can roll out human-level AI eventually won't need to do) and thus has to maintain a level of trust. In the former case, well... people don't really negotiate with animals at all.

Comment by isnasene on Multiple Arguments, Multiple Comments · 2020-05-09T03:26:43.954Z · score: 1 (1 votes) · LW · GW

Yeah, I don't do it, mainly for selfish reasons, but I agree that there are a lot of benefits to separating arguments into multiple comments in terms of improving readability and structure. Frankly, I commend you for doing it (and I'm particularly amenable to it because I like bullet-points). With that said, here are some reasons you shouldn't take too seriously for why I don't:

Selfish Reasons:

  • It's straightforwardly easier -- I tend to write my comments with a sense of flow. It feels more natural for me to type from start to finish and hit submit once than to write and submit multiple things
  • I often use my comments to practice writing/structure and, the more your arguments are divided into different comments, the less structure you need. In some cases, reducing structure is a positive but it's not really what I'm going for.
  • When I see several comment notifications on the little bell in the corner of my screen, my immediate reaction is "oh no I messed up" followed by "oh no I have a lot of work to do now." When I realize it's all by one person, some of this is relieved but it does cause some stress -- more comments feel like more people even if they aren't

Practical Reasons:

  • If multiple arguments rely on the same context, it allows me to state the context and then make the two arguments following it. If I'm commenting each argument separately, I have to state the context multiple times -- once for each argument relying on it
  • Arguments in general can often have multiple interactions -- so building on one argument might strengthen/weaken my position on a different argument. If I'm splitting each argument into its own comment, then I have to link around to different places to build this
  • When I'm reading an argument, it's often because I'm trying to figure out which position on a certain thing is right and I don't want to dig through comments that may serve other purposes (i.e. top-level replies may often include general commentary or explorations of post material that aren't literally arguments). In this context, having to dig through many different kinds of comments to find the arguments is a lot more work than just finding a chain [Epistemic Status: I haven't actually tried this]. This isn't an issue for second-level comments.
  • Similarly, when deciding what position to take, I like some broader unifying discussion of which arguments were right and which were wrong, which leads to some conclusion about the position itself. If 3/4 of your arguments made good points and it's not a big deal that the fourth was wrong, this should be explored. Similarly, if 1/4 of your arguments made good points but that one is absolutely crucially significant compared to the others, this should be explored as well. If you do a conventional back-and-forth argument, this is a nice way to end the conversation but it becomes more complex if you split your arguments into multiple comments. [Note that in some cases though, it's better to make your readers review each argument and think critically for themselves!]
Comment by isnasene on AI Boxing for Hardware-bound agents (aka the China alignment problem) · 2020-05-09T02:27:27.247Z · score: 1 (1 votes) · LW · GW

Nice post! The moof scenario reminds me somewhat of Paul Christiano's slow take-off scenario which you might enjoy reading about. This is basically my stance as well.

AI boxing is actually very easy for Hardware Bound AI. You put the AI inside of an air-gapped firewall and make sure it doesn't have enough compute power to invent some novel form of transmission that isn't known to all of science. Since there is a considerable computational gap between useful AI and "all of science", you can do quite a bit with an AI in a box without worrying too much about it going rogue.

My major concern with AI boxing is the possibility that the AI might just convince people to let it out (i.e. remove the firewall, provide unbounded internet access, or connect it to the cloud). Maybe you can get around this by combining a limited AI output data stream with a very arduous gated process, agreed on in advance, for letting the AI out -- but I'm not very confident.

If the biggest threat from AI doesn't come from AI Foom, but rather from Chinese-owned AI with a hostile world-view.

The biggest threat from AI comes from AI-owned AI with a hostile worldview -- no matter how the AI gets created. If we can't answer the question "how do we make sure AIs do the things we want them to do when we can't tell them all the things they shouldn't do?", we might wind up with Something Very Smart scheming to take over the world while lacking at least one Important Human Value. Think Age of Em except the Ems aren't even human.

Advancing AI research is actually one of the best things you can do to ensure a "peaceful rise" of AI in the future. The sooner we discover the core algorithms behind intelligence, the more time we will have to prepare for the coming revolution. The worst-case scenario still is that some time in the mid 2030's a single research team comes up with a revolutionary new software that puts them miles ahead of anyone else. The more evenly distributed AI research is, the more mutually beneficial economic games will ensure the peaceful rise of AI.

Because I'm still worried about making sure AI actually does the things we want it to do, I worry that faster AI advancement leaves less time to address that problem. Beyond that, I'm not really worried about economic dominance in the context of AI. Given a slow takeoff scenario, the economy will be booming like crazy wherever AI has been exercised to its technological capacities even before AGI emerges. In a world of abundant labor and so on, the need for mutually beneficial economic games with other human players, let alone countries, will be much smaller.

I'm a little worried about military dominance though -- since the country with the best military AI may leverage it to radically gain a geopolitical upper hand. Still, we were able to handle nuclear weapons so we should probably be able to handle this too.

Comment by isnasene on It's Not About The Nail · 2020-04-28T22:47:43.900Z · score: 1 (1 votes) · LW · GW

Admittedly the first time I read this I was confused because you went "When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world." This gave the sense that the issue was with the model of the world and not the world itself. This isn't what you meant but I made a list of reasons talking is a thing people do anyway:

  • When you become more vulnerable and the world is less predictable, the support systems you have for handling those things -- systems which were created in a safer/more predictable world -- will carry a greater burden. Talking to people in that support system about the issue makes them aware of it and establishes precedent for you requesting more help than usual in the future. Pro-active support system preparation.
  • Similar to talking as a way re-affirming relationships (like you mentioned), talking can also be used directly to strengthen relationships. This might not solve the object-level problem but it gives you more slack to solve it. Pro-active support system building.
  • Even when talking doesn't seem to be providing a solution, it still often provides you information about the problem at hand. For instance, someone else's reaction to your problem can help you gauge its severity and influence your strategy. Often times you don't actually want to find the solution to the problem immediately -- you want to collect a lot of information so you can slowly process it until you reach a conclusion. Information collection.
    • Similarly this is really good if you actually want to solve the problem but don't trust the person you're talking to to actually give you good solutions.
  • Even when talking doesn't seem to be providing a solution, talking typically improves your reasoning ability anyway -- see rubber duck debugging for instance. Note that literally talking about your problems to a rubber duck is more trouble than it's worth in cases where "I'm talking about my problems to a rubber duck" is an emotionally harmful concept
  • People evolved to interact with far fewer people than we actually interact with today. In the modern world, talking to someone about a problem often has little impact. But back in the day, talking to one of the dozen or so people in your tribe could have massive utility. In this sense I think that talking to people about problems is kinda instinctual and has built-in emotional benefits.
Comment by isnasene on Is ethics a memetic trap ? · 2020-04-24T01:16:29.452Z · score: 4 (3 votes) · LW · GW
Applying these systems to the kind of choices that I make in everyday life I can see all of them basically saying something like:...

The tricky thing with these kinds of ethical examples is that a bunch of selfish (read: amoral) people would totally take care of their bodies, be nice to people they're in iterated games with, try to improve themselves in their professional lives, and seek long-term relationship value. The only unambiguously selfless thing on that list in my opinion is donating -- and that tends to kick the question of ethics down the road to the matter of who you are donating to, which differs across ethical philosophies.

In any case, the takeaway from this is that people's definitions of what they ought to do are deeply entangled with the things that they would want to do. I think this is why many of the ethical systems you're describing make similar suggestions. But once you start to think about actions you might not actually be comfortable doing, many ethical systems make nontrivial claims.

Not every ethical system says you may lie if it makes people feel better. Not every ethical system says you shouldn't eat meat. Not every ethical system says you should invest in science. Not every ethical system says you should pray. Not every ethical system says you should seek out lucrative employment purely to donate the money.

These non-trivial claims matter. Because in some cases, they correspond to the highest leverage ethical actions a person could possibly take -- eclipsing the relevance of ordinary day-to-day actions entirely.

There are easy ways to being a better moral agent, but to do that, you should probably maximize the time you spend taking care of yourself, taking care of others, volunteering, or working towards important issues… rather than reading Kant.

I agree with this though. If you want to do ethical things... just go ahead and do them. If it wasn't something you cared about before you read about moral imperatives, it's unlikely to start being something you care about after.

Comment by isnasene on TheRealClippy's Shortform · 2020-04-22T03:52:11.612Z · score: 5 (3 votes) · LW · GW

Nah. Based on my interaction with humans who work from home, most aren't really that invested in the whole "support the paperclip factories" thing -- as evidenced by their willingness to chill out now that they're away from offices and can do it without being yelled at (sorry humans! forgive me for revealing your secrets!). Nearly half of Americans live paycheck to paycheck, so (on the margin) Covid19 is absolutely catastrophic for the financial well-being (read: self-agency) of many people, and that damage propagates into the long term via wage scarring. It's completely understandable that they're freaking out.

Also note that many of the people objecting to being forced to stay home are older. They might not be as at-risk as old/infirm people but they're still at serious risk anyway. I'd frankly do quite a bit to avoid getting coronavirus if I could and I'm young. If you're in dire enough straits to risk getting coronavirus for employment, you're probably doing it because you need to -- certainly not because of any abstract concerns about paperclip factories.

That being said, there are totally a bunch of people who are acting like our paperclip-making capabilities outweigh the importance of old and infirm humans. They aren't most humans but they exist. They're called Moloch's Army and a bunch of the other humans really are working on figuring out how to talk about them in public. Beware though: the protestors you are thinking of might not be the droids you're looking for.

Comment by isnasene on April Coronavirus Open Thread · 2020-04-19T03:51:55.850Z · score: 3 (2 votes) · LW · GW

I think the brief era of me looking at Kinsa weathermap data has ended for now. My best guess is that covid spread among Kinsa users has been almost completely mitigated by the lockdown and that current estimates of r0 are being driven almost exclusively by other demographics. Otherwise, the data doesn't really line up:

  • As of now, Kinsa reports 0% ill for the United States (this is likely just a matter of misleading rounding: New York county has 0.73% ill)
  • New York's trend is a much more aggressive drop than would be anticipated from Cuomo's official estimate of r0=0.9 (a rough back-of-the-envelope check is sketched after the footnote below).
  • None of these trends really fall in line with state-by-state r0 estimates[1] either
    • Georgia has the worst r0 estimate of 1.5 but Fulton County GA (Atlanta) has been flat at 0% ill since April 7 according to Kinsa

[1] Linking to the Twitter link because there is some criticism of these estimates: "They use case counts, which are massively and non-uniformly censored. A big daily growth rate in positive cases is often just testing ramping up or old tests finally coming back."
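For concreteness, here is a minimal back-of-the-envelope sketch of why the drop looks too steep for r0=0.9. The generation interval of ~5 days is my own assumption for illustration, not a figure from Kinsa or the state estimates.

```python
# Rough decay implied by r0 = 0.9 under a simple branching approximation:
# each generation of infections is r0 times the previous one.
r0 = 0.9
generation_interval_days = 5.0   # assumed value, for illustration only
daily_factor = r0 ** (1.0 / generation_interval_days)

# How long until prevalence halves at this rate?
level, days = 1.0, 0
while level > 0.5:
    level *= daily_factor
    days += 1

print(f"daily factor ~{daily_factor:.3f}, halving time ~{days} days")
# ~33 days to halve at r0 = 0.9 -- so a %ill curve that halves over a week or
# two implies an effective r0 well below 0.9, at least among Kinsa users.
```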

Comment by isnasene on Deminatalist Total Utilitarianism · 2020-04-17T18:19:41.676Z · score: 2 (2 votes) · LW · GW

On the practical side, figuring out the -u0 penalty for non-humans is extremely important for those adopting this sort of ethical system. Animals that produce lots of offspring that rarely survive to adulthood would rack up -u0 penalties extremely quickly while barely living long enough to offset those penalties with hedonic utility. This happens at a large enough scale that, if -u0 is non-negligible, wild animal reproduction might be the most dominant source of disutility by many orders of magnitude.

When I try to think about how to define -u0 for non-humans, I get really confused -- more so than I do when I reason about how animals suffer. The panpsychist approach would probably be something like "define death/life or non-existence/existence as a spectrum and make species-specific u0s proportional to where species fall on that spectrum." Metrics of sapience/self-awareness/cognition/other-things-in-that-cluster might be serviceable for this though.
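To make the scale worry concrete, here is a toy version of the accounting (my own simplification for illustration, not the post's actual definition): treat each individual of species s as contributing hedonic utility at rate h_s over lifespan T_s, with a per-birth penalty u_0(s).

```latex
U \;\approx\; \sum_{s} N_s \bigl( h_s T_s - u_0(s) \bigr)
% For an r-selected species, the number of births N_s is enormous and T_s is
% tiny, so unless u_0(s) is set near zero the -N_s u_0(s) term dominates.
```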

Comment by isnasene on The Unilateralist’s “Curse” Is Mostly Good · 2020-04-17T04:53:09.402Z · score: 6 (4 votes) · LW · GW

Yeah, my impression is that the Unilateralist's Curse being a bad thing mostly relies on the assumption that everyone is taking actions based on the common good. From the paper,

Suppose that each agent decides whether or not to undertake X on the basis of her own independent judgement of the value of X, where the value of X is assumed to be independent of who undertakes X, and is supposed to be determined by the contribution of X to the common good...

That is to say -- if each agent is not deciding to undertake X on the basis of the common good, perhaps because of fundamental value differences or subconscious incentives, there is no longer an implication that the unilateral action will be chosen more often than it ought to be.

I believe the examples of Galileo and the Pentagon Papers are both cases where the "common good" assumption fails. In the context of Galileo, it's easy to justify this -- I'm an anti-realist and the Church does not share my ethical stances so they differ in terms of the common good. In the context of the Pentagon papers, one has to grapple with the fact that most of the people choosing not to leak them were involved in the not-very-common-good-at-all actions that those papers revealed.

The stronger argument for the Unilateralist's Curse for effective altruism in particular is that, for most of us, our similar perceptions of the common good are what attracted us in the first place (whereas, in many examples of the Unilateralist's Curse, values are very inhomogeneous). Also, because cooperation is game-theoretically great, there's a sort of institutional pressure for those involved in effective altruism to assume others are considering the common good in good faith.

Comment by isnasene on Choosing the Zero Point · 2020-04-09T17:53:33.913Z · score: 2 (2 votes) · LW · GW

Thanks for confirming. For what it's worth, I can envision your experience being a somewhat frequent one (and I think it's probably actually more common among rationalists than among the average Joe). It's somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world, give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions almost just because I point out that they can. There's no specific ethical sub-agent and specific selfish sub-agent, just a whole vaguely selfish person with accurate framing and a willingness to allocate resources when it's easy.

Maybe these people have not internalized the implications of a low zero-point world in the same way we have but it generally pushes me away from a sub-agent framing with respect to the average person.

I'll also agree with your implication that my experience is relatively uncommon. I do far more internal double cruxes than the norm and it's definitely led to some unusual psychology -- I'm planning on doing a post on it one of these days.

Comment by isnasene on Choosing the Zero Point · 2020-04-09T17:32:12.370Z · score: 1 (1 votes) · LW · GW
That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.)

Ah, understandable. I felt a similar way back when I was doing materials engineering -- and I admit I put a lot of work into figuring out how to connect my research with doing good before I moved on from that. I think that when you're working on something you're passionate about, you're much more likely to try to connect it to making a big positive impact and to convince yourself that your coworkers are making a big positive impact.

That being said, I think it's important to distinguish impressiveness from ethical significance and to recognize that impressiveness itself is a personally-selected free variable. If I described myself as a very skilled computational researcher (more impressive), I'd feel very good about my ethical performance relative to my reference class. But if I described myself as a financially blessed rationalist (less impressive), I'd feel rather bad.

There are two opposite pieces of advice here, and I don't know how to tell people which is true for them- if anything, I think they might gravitate to the wrong piece of advice, since they're already biased in that direction.

In any case, I agree with you at the object level with respect to academia. Because academic research is often a passion project, and we prefer our passions to be ethically significant, and academic culture is particularly conducive to imposter syndrome, overestimating the ethical contributions of our corresponding academic reference class is pretty likely. Now that I'm EtG in finance, the environmental consequences are different.

Actually, how about this -- instead of benchmarking against a world where you're a random member of your reference class, you just benchmark against the world where you don't exist at all? It might be more lax than benchmarking against a member of your reference-class in cases where your reference class is doing good things but it also protects you from unnecessary ethical anguish caused by social distortions like imposter syndrome. Also, since we really want to believe that our existences are valuable anyway, it probably won't incentivize any psychological shenanigans we aren't already incentivized to do.

Comment by isnasene on Choosing the Zero Point · 2020-04-08T23:34:04.924Z · score: 2 (2 votes) · LW · GW
I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class"

If you move your zero-point to reflect world-trajectory based on a random person in your reference class, it creates incentives to view the average person in your reference class as less altruistic than they truly are and to unconsciously normalize bad behavior in that class.

Comment by isnasene on Choosing the Zero Point · 2020-04-08T23:29:43.044Z · score: 1 (1 votes) · LW · GW
It's also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives a positive reinforcement to helpful actions, rather than punishing oneself from any departure from helpful actions.

I just want to point out that, while two utility functions that differ only in zero point produce the same outcomes, a single utility function with a dynamically moving zero-point does not. If I just pushed the world into the positive yesterday, why do I have to do it again today? The human brain is more clever than that and, to successfully get away with it, you'd have to be using some really nonstandard utilitarianism.
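A minimal way to formalize the distinction (my own sketch, not the post's framing): a fixed zero point is a constant offset and cannot change any comparison between actions, whereas a zero point re-anchored to the status quo each period changes what registers as positive.

```latex
% Fixed zero point c: for any actions a, a',
U(a) - c \;\ge\; U(a') - c \quad\Longleftrightarrow\quad U(a) \;\ge\; U(a')
% Dynamically re-anchored zero point: period t is scored against yesterday's world,
\mathrm{felt}_t \;=\; U(w_t) - U(w_{t-1})
% so yesterday's improvement is absorbed into today's baseline and must be re-earned.
```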

Comment by isnasene on Choosing the Zero Point · 2020-04-08T23:24:36.373Z · score: 3 (3 votes) · LW · GW

Huh... I think the crux of our differences here is that I don't view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior -- I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness over a) performing a trick like donating, which isn't really that unpleasant in itself, or b) directly resolving the emotional pain in a way that doesn't modify the ethical framework/ultimate actions, really perturbs me.

Can you confirm that the above interpretation is appropriate? I think it's less clearly true than just "positive reinforcement vs punishment" (which I agree with) and I want to be careful interpreting it in this way. If I do, it will significantly update my world-model/strategy.

Comment by isnasene on Choosing the Zero Point · 2020-04-07T23:39:41.655Z · score: 4 (3 votes) · LW · GW
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)

I share your problem with purity ethics... I almost agree with this? Frankly, I have some issue with using the claim "a utilitarian with a different zero-point/bare-standard of decency has the same utility function so feel free to move yours!" and juxtaposing it with something kind of like the claim "it's alright to not be very utilitarian!" The claims kind of invalidate each other. Don't get me wrong, there's definitely some sort of ethical Pareto frontier where you balance the strength of each claim individually but, unless that's qualified, I'm not thrilled.

For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their meat/egg consumption by two-thirds. The "insufficiently horrified" framing makes it sound like neither of the two people in the latter case really count, while at least one person in the former does count.

There are two things going on here -- the actual action of meat consumption and the internal characterization of horror. Actions that involve consuming less meat might point to short-term ethical improvements but people who are horrified by consuming meat point to much longer-term ethical improvements. If I had a choice between two people who cut meat by two-thirds and the same people doing the same thing while also being kinda horrified by what they're doing, I'd choose the latter.

Do you agree (without getting into which outcome is easier for activism to achieve) that the latter outcome is preferable to the former? And separately, does it aesthetically feel better or worse?

For similar reasons, I'd prefer one vegan over two people who'd cut meat by 2/3. Being vegan points to a level of experienced horror and that points to them being a long-term ethical ally. Cutting meat by 2/3 points towards people who are kinda uncomfortable with animal suffering (though more likely motivated by health concerns, tbh) but who probably aren't going to take any significantly helpful actions about it.

And in reverse, I'd prefer one meat-eater on the margin who does it out of physical necessity but is horrified by it to a vegan who does it because that's how they grew up. The long-term implication of the horror is sometimes better than the direct consequence of the action.

Comment by isnasene on Choosing the Zero Point · 2020-04-07T23:12:06.172Z · score: 5 (3 votes) · LW · GW
Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening.

Load-bearing horror != constant anguish. There are ways to hold an intuitively low zero-point assessment of the world that don't lead to constant anguish. Other than that, I agree with you -- constant anguish is bad. The extent of my ethics-related anguish is probably more along the lines of 2-3 hour blocks of periodic frustration that happen every couple weeks.

That could be true for you, but it seems counter to the way most people work. Constant anguish tends not to motivate, it instead leads to psychological collapse, or to frantic measures when patience would achieve more, or to protected beliefs that resist challenge in any small part.

Yeah, this is my experience with constant anguish as well (though the root cause of that was more school-related than anything else). I agree with your characterization and, as a mildly self-interested person, I also don't really think it's reasonable to demand that people be in constant anguish at all -- regardless of the utilitarian consequences.

To play Devil's Advocate though, I (like many others) am not in the class of people whose psychological wellbeing or decision-making skills actually contribute much to ethical improvement at all; we're in the class of people who donate money. Unless the anguish of someone in this class is strong enough to impede wealth accumulation toward donating (which it basically can't once you have enough money that your stock market returns compete with your income), there's not really a reason to limit it.

Comment by isnasene on Choosing the Zero Point · 2020-04-07T15:56:45.828Z · score: 27 (14 votes) · LW · GW

As an animal-welfare lacto-vegetarian who's seen a fair number of arguments along these lines, I can say they don't really do it for me. In my experience, it's not really possible to separate human peace of mind from the actions you take (the former reflects an ethical framework, the latter reflects strategies, and together they form an aesthetic feedback loop). To be explicit:

  • I don't think my moral zero-point was ever up for grabs. Moreover, it wasn't "the world I interact with every day." It was driven by an internal sense of what makes existing okay and what doesn't, extrapolated over the universe. Raising/lowering my zero-point is therefore internally connected with my heuristic for whether more beings should exist or not and, in this sense, the zero-point was only a proxy for my psychological anguish pointing at this concept. If I artificially inflate/deflate my zero-point while maintaining awareness that this has no effect on whether or not the average being existing is good or bad, it won't actually change how I feel psychologically.
  • A vast amount of my anguish around having a very low zero-point was social angst. A low zero-point (especially when due to animal welfare) not only meant that the world was bad; it meant that barely anyone cared (and in my immediate bubble, literally no one cares). This stuff occurred to me when I was very young and can result in what I now know to be institutional betrayal trauma. Had I been an ordinary kiddo that didn't make real-time psychological corrections when my brain started acting funny, this would've happened to me.
    • Also, while I get what you're saying, having a different value of something psychologically linked to a normative claim about "when it is good to exist" or "the bare standard of human decency" will gaslight people traumatized by mismatches between those claims and people's actual actions. If you keep this zero-point alteration tool solely for the psychological benefits, it's not a big deal. But if you talk to people about ethics and think your moral statements might be reflective of a modified zero-point, then it can be an issue. In light of this, I'd recommend preambling your ethical statements with something like "if I seem insufficiently horrified, it is only because I am deliberately modifying my definition of the bare standard of human decency/zero-point for reasons of mental well-being". Otherwise, you'll mess a whole bunch of people up.
  • You've pointed out that changing your zero-point gives you a number of psychological benefits. However, I think most of these psychological benefits come from the fact that people are more satisficing than utilitarian and this causes zero-point shifts to also cause nonlinear transformations of your utility function. If you're accustomed to being internally satisfied by the world having utility over threshold X and you change your zero-point for the world without changing that threshold, you'll predictably have more acceptance, relief and hope but this is because you've performed a de facto nonlinear transformation of your utility function. Sometimes this, conditioned on being an irrational human, is a good thing to do to be more effective. Sometimes it makes you vulnerable to unbounded amounts of moral hazard. If you're arguing in favor of zero-point moving, you need to address the concerns implied by the latter possibility.
  • For evidence that these claims generalize beyond me, just look at your quote from Rob. He's talking about a "bare standard of human decency" but note that this standard is actually a set of strategies! As you pointed out, strategies are invariant if you change your utility function's zero point, so the bare standard of human decency should be invariant too! As a non-utilitarian, this means you have four options with respect to your zero-point and each of them has its own drawbacks:
    • Not changing your zero-point and bite the bullet psychologically
    • Changing your zero-point but decoupling it from your sense of the "bare standard of human decency" which is held constant. This eliminates the psychological benefits
    • Changing your zero-point and allowing your "bare standard of human decency" to drift. This modifies your utility function.
    • Changing your zero-point and allowing your "bare standard of decency" to drift but decoupling your "bare standard of decency" from the actions you actually make. This will either eliminate the psychological benefits or break your sense of ethics
Comment by isnasene on April Coronavirus Open Thread · 2020-04-06T13:54:02.165Z · score: 3 (2 votes) · LW · GW

Thanks for pointing this out. Having recently looked at Ohio County KY, I think this is correct. %ill there maxed out at more than 1% above the typical range but has since dropped to below 0.4% of the typical range and started rising again (which is notable in contrast with seasonal trends) [Edit to point out that this is true for many counties in the Kentucky/Tennessee area]. This basically demonstrates that a currently reported %ill that is lower than previously reported in the Kinsa database is insufficient to show r0<1. Probably best to stick with the prior of containment failure.

Comment by isnasene on Life as metaphor for everything else. · 2020-04-05T20:01:50.815Z · score: 4 (3 votes) · LW · GW
"I only care about animal rights because animals are alive"
1. Imagine seeing someone take a sledgehammer to a beautiful statue. How do you feel?
2. Someone swats a mosquito. How do you feel?

In this context, I think the word rights is doing a lot of work that your question is not capturing. While seeing someone destroy a beautiful statue would feel worse than seeing someone swat a mosquito, this in no way indicates that I care about "statue rights." I acknowledge that the word rights is kind of fuzzy but here's my interpretation:

I feel bad about someone destroying a beautiful statue simply because a) I find the statue beautiful and view its existence as advancing my values with respect to beauty and b) I express empathy for others who care about the statue. It doesn't have a right to exist; I would just prefer that it does, and I ascribe a right to living beings who have similar preferences to have those preferences remain unviolated.

I feel bad about a mosquito getting swatted insofar as the mosquito has a right to exist -- because its own preferences and individual experiences merit consideration on their own grounds.

Also, do you bury or eat the dead? (Animals, not humans. What about pets?)

If you bury the dead for the sake of the deceased, then you grant the dead rights -- and I think many people do this. But if you bury the dead for your own sake, then you do not -- you are just claiming that you have the right to bury the dead or that the living have the right to ensure the burial of their dead bodies.

If you bury pets but not other animals, it is not the pet that has the right to be buried; it is that pet owners have the right for their pets to be buried.

Comment by isnasene on April Coronavirus Open Thread · 2020-04-04T09:09:03.907Z · score: 11 (4 votes) · LW · GW

I've been playing with the Kinsa Health weathermap data to get a sense of how effective US lockdowns have been at reducing US fever. The main thing I am interested in is the question of whether lockdown has reduced coronavirus's r0 below 1 (stopping the spread) or not (reducing spread-rate but not stopping it). I've seen evidence that Spain's complete lockdown has not worked so my expectation is that this is probably the case here. Also, Kinsa's data has two important caveats:

  • People who own smart thermometers are more likely to be health conscious than the overall population. Kinsa may therefore overstate the effect of the lockdown by not effectively sampling the health-apathetic people who are more likely to get the virus.
  • Kinsa data cannot separate coronavirus fever symptoms from flu fever symptoms. At the early stages of coronavirus spread, seasonal flu illness dominates coronavirus illness and seasonal flu r0 is between 1 and 2. This means that a lockdown can easily eliminate symptoms caused by seasonal flu illness by reducing flu r0 below 1 without reducing coronavirus's r0 below 1.
    • I'm addressing this by comparing the largest amounts of observed atypical illness over the last month in different locations with their current total illness to get a conservative estimate of how much coronavirus %ill has changed.

With this in mind, my overall conclusion is that the Kinsa data does not disconfirm the possibility that we've reduced r0 below 1. Within the population of people who use smart thermometers, we've probably stopped the spread but it may/may not have stopped in the overall population. Here are my specific observations (a minimal sketch of the subtraction I'm doing is included after the list):

  • The overall US %ill weakly suggests we may have reduced r0 below 1. It maxed out at around 5.1 %ill compared to a typical range of 3.7-4.7 %ill. This indicates that 0.4-1.4% of overall illness was due to coronavirus and currently total illness is only 0.88%. This means that, for many values in that range, our lockdowns are actually cutting into the percent of people getting coronavirus and therefore that the virus is not growing.
  • New York county NY %ill weakly suggests that we may have reduced r0 below 1. It maxed out at 6.4 %ill compared to a typical range of 2.75-4.32, indicating that 2.1-3.65% of people had coronavirus. Currently, total illness is 2.56%. Again, for most values in that range, it looks like we're reducing the absolute amount of coronavirus.
  • Cook county IL (Chicago) %ill is very weakly positive on reducing r0 below 1. It maxed out at 5.4 %ill with a typical range of 2.8-4.9, indicating that 0.5-2.6% of people had coronavirus. Currently the total is 0.92%, which suggests we've likely cut into coronavirus illness. The range of typical values is so large though that it's hard to reach a conclusion
  • Essex county NJ (Newark) %ill doesn't say much about r0. It maxed out at 6.1 compared to a typical range of 2.9-4.5, which implies a range of coronavirus %ill of 1.6-3.2. The current value is 2.63%, which is closer to the higher end of the range, so there's no evidence that we've reduced the amount of coronavirus. Still, %ill is continuing to trend down so this may change in the future.
  • I also considered looking at Santa Clara County CA, Los Angeles County CA, and Orleans Parish LA (New Orleans) but their %ill never exceeded the atypical value by a large enough amount for me to perform comparison.
  • On Mar28, the overall US %ill changed from a steep linear drop of ~-0.3 %ill/day to a weaker linear drop of ~-0.1 %ill/day. Also, on Mar28, both Newark's and New York's fast linear drop is broken with a slight increase in illness and it looks like we're on our second leg down there now. Similarly, on Mar27, Chicago's fast linear drop is broken with a brief plateau and second leg down. No idea why this happened.
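A minimal sketch of the subtraction described above, using the overall-US numbers quoted in the first bullet (this is just the arithmetic I'm doing, not Kinsa's methodology):

```python
# Conservative "excess illness" subtraction (illustrative sketch).
# All figures are percentages of the population reporting fever via Kinsa.
peak_total_ill = 5.1          # observed peak %ill
typical_range = (3.7, 4.7)    # typical/expected seasonal %ill around the peak
current_total_ill = 0.88      # observed %ill now

# Implied covid share at the peak: whatever exceeded the seasonal range.
covid_at_peak_low = peak_total_ill - typical_range[1]   # ~0.4
covid_at_peak_high = peak_total_ill - typical_range[0]  # ~1.4

# If current *total* illness is already below much of that implied covid range,
# covid-attributable fever must have shrunk in absolute terms, suggesting r0 < 1
# (at least within the Kinsa-user population).
print(f"implied covid %ill at peak: {covid_at_peak_low:.1f}-{covid_at_peak_high:.1f}")
print(f"current total %ill: {current_total_ill}")
```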
Comment by isnasene on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-04T00:07:03.983Z · score: 1 (1 votes) · LW · GW

Fair enough. When I was thinking about "broad covid risk", I was referring more to geographical breadth -- something more along the lines of "is this gonna be a big uncontained pandemic" than "is coronavirus a bad thing to get." I grant that the latter could have been a valid consideration (after all, it was with H1N1) and that claiming that it makes "no implication" about broader covid risk was a mis-statement on my part.

That being said, I wouldn't really consider it an alarm bell (and when I read it, it wasn't one for me). The top answer, Connor Flexman, states:

Tl;dr long-term fatigue and mortality from other pneumonias make this look very roughly 2x as bad to me as the mortality-alone estimates.
It’s less precise than looking at CoVs specifically, but we can look at long-term effects just from pneumonia.

For me personally:

  • A 2x increase in how bad Covid19 was in February was not cause for much alarm in general. I just wasn't that worried about a pandemic
  • The answer is based on long-term effects of pneumonia, not covid itself (whose long-term effects weren't measurable). If I read something that said "hey you have a surprisingly high likelihood of getting pneumonia this year", I would be alarmed. This wasn't really that post
  • I was already kind of expecting that Covid could cause pneumonia based on typical coverage of the virus -- I wasn't surprised by the post in the way I'd expect to be if it was an alarm bell

I'll give the post some points for pointing out a useful, valuable and often-neglected consideration but I dunno. At that time I saw "you are in danger of getting coronavirus" posts as different from "coronavirus can cause bad things to happen" posts. And the former would've been alarm bells and the latter wouldn't've been.

Comment by isnasene on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-03T21:17:52.239Z · score: 17 (6 votes) · LW · GW

While I agree with the specific claims this post is making (i.e. "Less Wrong provided information about coronavirus risk similar to or just lagging the stock market"), I think it misses the thing that matters. We're a rationality forum, not a superintelligent stock-market-beating cohort[1]! Compared to the typical human's response to coronavirus, we've done pretty well at recognizing the dangers posed by the exponential spread of pandemics and acting accordingly. Compared to the very smart people who make money by predicting the economic effects of a virus, we've been predictably mediocre -- after all, none of us (including the stock market) really had any special information about the virus's trajectory.

Maybe it is disappointing if we lagged the stock market instead of being perfectly on pace with it but a week of lag is a pretty small amount of time in the grand scheme of things. And I'd expect different auditing methodologies/interpretations to have about that amount in variance. In any case, I don't really think that it's a big deal.

[1]That is, unless you count Bitcoin, which Eliezer Yudkowsky doesn't.

Comment by isnasene on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-03T20:25:36.446Z · score: 4 (6 votes) · LW · GW

The question in this post is "was Less Wrong a good alarm bell" and in my opinion only one of those links constitutes an alarm bell -- the one on EAForums. Acknowledging/discussing the existence of the coronavirus is vastly different from acknowledging/discussing the risk of the coronavirus.

  • "Will ncov survivors suffer lasting disability at a high rate?" is a medical question that makes no implication about broader covid risk.
  • "Some quick notes on hand-hygene" does not mention the coronavirus in the main post (but to be fair does have a coronavirus tag). It does make an offhand reference implying the coronavirus could be a "maybe pandemic" but this isn't a concrete estimation of actual risk
  • "Concerning the recent 2019 novel coronavirus outbreak" is a fantastic post that makes concrete claims like it now seems reasonable to assign a non-negligible probability (>2%) to the proposition that the current outbreak will result in a global disaster (>50 million deaths resulting from the pathogen within 1 year). Per one of the comments, this was consistent with Metaculus.

Overall, I'd say that LessWrong was about on par with "having lunch conversations with Chinese-American coworkers" in terms of serving as an actual alarm bell. Moreover, in the case that we admit a weaker standard for what an alarm bell is, it's worth noting that we still don't really beat the stock market -- because it did respond to the coronavirus back in January. It just didn't respond strongly enough to convey an actual broad concrete risk.

I also would be somewhat hesitant about saying that the markets crashed on February 20th. The market continued crashing for quite a while, and this is when Wei Dai wrote some comments about his investment strategy, which, if you had followed it at that point would have still made you a good amount of money.

As someone who pretty regularly follows Less Wrong, I missed Wei Dai's investment strategy, which makes me lean in the direction that most casual readers wouldn't have benefitted from it. The linked comment itself also doesn't have very strong valence, stating "The upshot is that maybe it's not too late to short the markets." Low-valence open-thread comments don't really sound like alarm bells to me. Wei Dai has also acknowledged that this was a missed opportunity on EAforums.

Moreover, there was also an extremely short actionable window. On February 28th, the stock market saw a swift >5% bear market rally before the second leg of the crash which temporarily undid half the losses. Unless your confidence in "maybe its not too late to short the markets" was strong enough to weather through this, you would've probably lost money. This almost happened to me -- I sold the Thursday morning after Wei Dai's comment and bought back in Monday, netting a very meek ~3% gain.

Comment by isnasene on March Coronavirus Open Thread · 2020-03-30T22:58:00.101Z · score: 10 (2 votes) · LW · GW

[Epistemic Status: It's easy to be fooled by randomness in the coronavirus data but the data and narrative below make sense to me. Overall, I'm about 70% confident in the actual claim.]

Iran's recent worldometer data serves as a case study demonstrating the relationship between sufficient testing and case-fatality rate. After a 16-day-long plateau (Mar 06-22) in daily new cases which may have seemed reassuring, we've seen five days (Mar 24-28) of roughly linear rise. We could anticipate this by noticing that over a similar time frame (Mar 07-19), we were seeing a linear rise in case fatality rate before it became constant. This indicates the following narrative (not sure if it's actually true):

  • Coronavirus spreads uncontrolled in Iran without increased testing capabilities. This causes new daily cases to stay constant despite increased infection -- the 16 day long plateau in daily new cases
  • Because cases are increasing, the number of severe cases is also increasing -- and severe cases are more likely to get tested than less severe cases. This causes fatality rate to rise as the severity of the cases that are actually tested increases -- the 12-day linear rise in case fatality
  • Recently, testing capabilities were ramped up, allowing testing of more people and the observation of less severe cases. As a result, the number of daily cases started increasing again with the testing rate. Simultaneously, the fatality rate plateaued as the (complex) trend in severe cases being tested in greater proportion to less severe cases was cancelled out by the trend in testing. Hence the last five days of daily new case rise and the past eight days of constant fatality rate.
    • Note that this narrative suggests that testing is being continuously ramped up while remaining the bottle-neck. Two pieces of evidence for this:
      • The daily cases start increasing linearly from the plateau. If testing had been increased dramatically, one would expect an immediate discontinuous increase in the number of daily cases at the point where more tests are done.
      • Iran's death rate is still much higher (17% compared to an IFR which should be less than 5%) so testing is unlikely to be sufficient to capture the true infection rate
Comment by isnasene on Adding Up To Normality · 2020-03-27T00:08:48.025Z · score: 3 (2 votes) · LW · GW
I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.

Ah, yeah I agree with this observation -- and it could be good to just assume things add up to normality as a general defense against people rapidly taking naive actions. Scarcity bias is a thing after all and if you get into a mindset where now is the time to act, it's really hard to prevent yourself from acting irrationally.

Comment by isnasene on Adding Up To Normality · 2020-03-26T18:04:30.786Z · score: 3 (2 votes) · LW · GW
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.

Yeah, but my point is not about catastrophic risk -- it's about the risk/reward trade-off in general. You can have risk>reward in scenarios that aren't catastrophic. Catastrophic risk is just a good general example of where things don't add up to normality (catastrophic risks by nature correspond to not-normal scenarios and also coincide with high risk). Don't promise yourself to steer the plane mostly as normal, promise yourself to pursue the path that reduces risk over all outcomes you're uncertain about.

I don't think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly worth the value of information thus gained.

Good point, it really depends on the details of the example but this is just because of the different risk-reward trade-offs, not because you ought to always treat things as adding up to normality. I'll counter that while you shouldn't leave the job (high risk, hard to reverse), you should see if you could use your PTO as soon as possible so you can figure things out without potentially causing further negative impact. It all depends on the risk-reward trade-off:

  • If stopping activism corresponds to something like leaving a job, which is hard to reverse, doing so involves taking on a lot of risk if you're uncertain and waiting for a bit can reduce that risk.
  • If stopping activism corresponds to something like shifting your organization's priorities, and your organization's path can be reversed, then stopping work (after satisfying all existing contracts of course) is pretty low risk and you should stop
  • If stopping activism corresponds to donating large amounts of money (in earning-to-give contexts), your strategy can easily be reversed and you should stop now.

This is true even if you only have "small" amounts of impact.

Caveat:

People engage in policies for many reasons at once. So if you think the goal of your policy is X, but it's actually X, Y and Z, then dramatic actions justified on uncertainty about X alone will probably be harmful due to Y and Z effects even if it's the appropriate decision with respect to X. Because it's easy to notice why a thing might go wrong (like X) and hard to notice why things are going right (like Y and Z), adding-up-to-normality serves as a way to generally protect Y and Z.

Comment by isnasene on Adding Up To Normality · 2020-03-25T15:39:29.150Z · score: 8 (5 votes) · LW · GW

I think the strongest version of this idea of adding up to normality is "new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations." Therefore, when one's actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen -- regardless of any new information. But this strict version of 'adding up to normality' does not apply in situations where one's actions are contingent on unobservables. In cases where new evidence/knowledge may cause someone to dramatically revise the implications of previous observations, things don't add up to normality. Whether this is the case or not for you as an individual depends on your gears-level understanding of your observations.

So in retrospect, the main thing I'd recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift

I somewhat disagree with this. I think, in these kinds of situations, the recommendation should be more along the lines of "promise yourself to make the best risk/reward trade-off you can given your state of uncertainty." If you're flying in a plane that has a good track record of flying, definitely don't touch anything, because it's more risky to break something that has evidence of working than it is rewarding to fix things that might not actually work. But if you're flying in the world's first plane and realize you don't understand lift, land it as soon as possible.

Some Reasons Things Add Up to Normality

  • If you think the thing you don't understand might be a Chesterton's Fence, there's a good chance it will add up to normality
  • If you think the thing you don't understand can be predicted robustly by inductive reasoning and you only care about being able to accurately predict the thing itself, there's a good chance it will add up to normality

Some Examples where Things Don't Add Up

Example #1 (Moral Revisionism)

You're an eco-rights activist who has tirelessly worked to make the world a better place by protecting wildlife because you believe animals have the right to live good lives on this planet too. Things are going just fine until your friend claims that R-selection implies most animals live short horrible lives and you realize you have no idea whether animals actually live good lives in the wild. Should you immediately panic in fear that you're making things worse?

Yes. Whether or not the claim in question is accurate, your general assumption that protecting wildlife implies improved animal welfare was not well-founded enough to address significant moral risk. You should really stop doing wildlife stuff until you get this figured out or you could actually cause bad things to happen.

Example #2 (Prediction Revisionism)

You've built an AGI and, with all your newfound free time and wealth, you have a lengthy chat with a mathematician. Things are going along just fine until they point out that your understanding of the safety measures used to ensure alignment is wrong, and that the safety measures you thought were responsible shouldn't actually produce alignment. Should you immediately panic in fear that the AGI will destroy us all?

Yes. The previous observations are not sufficient to make reliable predictions. But note that a random bystander who is uninvolved with AGI development would be justified in not panicking -- their gears-level understanding hinges on believing that the people who created the AGI are competent enough to address safety, not on believing that the specific details designed to make the AGI safe actually work.

Comment by isnasene on Good News: the Containment Measures are Working · 2020-03-21T23:02:55.102Z · score: 12 (3 votes) · LW · GW

I shared this post with some of my friends and they pointed out that, as of 3/21/2020, the Italy and Spain curves no longer look as optimistic:

  • On March 16, cases in Italy appeared to be leveling off. Immediately following that, they broke trend and began rising again. March 16 had ~3200 daily cases. March 20 has ~6000.
  • Spain appeared to be leveling off up through March 17th (~1900 daily cases). But on March 18th, it spiked to ~3000. As of March 20th, things may be leveling off again but I wouldn't draw any conclusions
  • Iran's daily cases have stayed flat for a pretty long period of time now -- at around 1000 per day. This seems like it should be good news, tho I'm not sure how good: Since March 8, Iran's death rate (closed cases) has been steadily rising from 8% to 17.5%
Comment by isnasene on Assorted thoughts on the coronavirus · 2020-03-19T00:35:53.542Z · score: 1 (1 votes) · LW · GW
To me that nudges things somewhat, but isn't a game changer. I don't think it makes it 10x less bad or anything.

Fair enough. As a leaning-utilitarian, I personally share your intuition that it isn't 10x as bad (if I had to choose between coronavirus and ending the negative consequences of lifestyle factors for one year, I don't have a strong intuition in favor of coronavirus). Psychologically speaking, from the perspective of average deontological Joe, I think that it (in some sense) is/feels 10x as bad.

Is that really a possibility? I imagine that governments would impose a strict quarantine before letting it get that bad.

10% is unlikely but possible -- not because of the coronavirus itself alone but because of the potential for systemic failure of our healthcare system (based on this comment). I think it's likely that governments may impose a strict quarantine before it gets that bad or (alternatively) bite the bullet and let coronavirus victims die to triage young people with more salient medical needs.

In the situation where you don't have savings or a job, here is what I'm imagining. The majority would have family or a friend they could stay with until they get back on their feet, which doesn't seem that bad.

I partially agree with this. Frankly, as a well-off person myself, I'm not exactly sure what people would do in that situation. Conditioned on having friends or (non-abusive) family with the appropriate economic runway to be supportive, I agree that it wouldn't be that bad. However, these (in my sphere) are often significant contributing factors to being low-income in the first place. For low-income families, things also get messier due to the need-to-support-people being built in.

Homeless shelters do provide basic needs, so if you want to be really hardcore with the "happiness is all in your head" stuff, you should still in theory be ok. But I don't know much about what it's truly like; maybe there's more to it than that.

I agree that this kind of stoicism helps (I resonate a lot with stoicism as a philosophy myself). But I view this as more of a mental skill that is built up rather than something that people start doing immediately when thrust into lower-standard-of-living situations. Hedonic adaptation takes time, and the time it takes before setting in can also be unpleasant. I'd also like to push back a little on the idea of hedonic adaptation with respect to losing money, because there is a correlation between measures of happiness and income which only starts breaking down around $50k.

Comment by isnasene on Assorted thoughts on the coronavirus · 2020-03-18T15:58:18.409Z · score: 13 (5 votes) · LW · GW

This is anecdotal but last week I read the article by Mr Money Mustache which you linked. As part of it, he posts this picture with the caption "I went out on the town at the peak of the scare. The reality is different from the news headlines."

Then I went to Venkatesh Rao's twitter and was immediately confronted with this picture. Stores empty. People are in danger. This is an exceptional case given Venkatesh's location and the timing. Nevertheless, the simple fact that Mr Money Mustache describes the picture as being at the peak of the scare has seriously lowered my faith in him. As if it was a scare. As if it wasn't going to get worse.

"Alas, it is hard to overreact. We did ordinary cheap preparing. We had a month’s worth of food, all our medicines and stuff like that. Initially I thought that would be the plan."

After reading Mr. Money Mustache's take on the coronavirus, I started having a few doubts about how bad it actually is. I didn't realize that 2M people in America die each year of things related to "lifestyle factors".

No. Never compare the effects of things like death from "lifestyle factors" -- things that happen because people willingly trade-off having a long-time for having a good-time, things subject to hyperbolic discounting, things that (on an individual level) are really very hard to track the effects of -- with an imminent risk that 1-10% of everyone dies within the next two years. Personally, covid poses little threat to me but we don't know the end-game here: we're fighting between potentially lengthy economic shutdowns and the possibility of containment failure and global health system collapse. And if low-income people are forced back to work due to money-needs before containment succeeds, the economy crashes and our healthcare system fails.

Is losing money really going to be that bad?

Once you have enough money, losing 50-90% of your wealth really isn't that bad at all -- which is why I like the idea of earning-to-give once I'm confident in my runway. Indeed, if you're the kind of person who reads Mr Money Mustache, you're probably going to be fine in general.

For my low-income friends though, yes. Yes it is going to be that bad. Sometimes people don't have jobs. Sometimes people don't have savings. A large portion of people live paycheck to paycheck. Many people are going to die because of the virus. Many people are going to die because our healthcare systems will at least partially fail. Many people are going to die because that is what the economics imply.

Comment by isnasene on Even if your Voice Shakes · 2020-03-16T03:29:48.107Z · score: 5 (3 votes) · LW · GW

Donated.

Comment by isnasene on DanielFilan's Shortform Feed · 2020-02-05T00:16:16.565Z · score: 1 (1 votes) · LW · GW
I think you're overstating the stigma against not having kids. I Googled "is there stigma around not having kids" and the top two US-based articles both say something similar:

Agreed. Per my latest reply to DanielFilan:

However, I've actually been overstating my case here. The childfree rate in the US is currently around 15%, which is much larger than I expected. The childfree rate for women with above a bachelor's degree is 25%. In absolute terms, these are not small numbers and I've gotta admit that this indicates a pretty high population density at the margin.

I massively underestimated the rate of childfree-ness and, broadly speaking, I'm in agreement with Daniel now.

Comment by isnasene on DanielFilan's Shortform Feed · 2020-02-02T17:17:06.593Z · score: 1 (1 votes) · LW · GW
I continue to think that you aren't thinking on the margin, or making some related error (perhaps in understanding what I'm saying). Electing for no kids isn't going to become more costly, so if you make having kids more costly, then you'll get fewer of them than you otherwise would, as the people who were just leaning towards having kids (due to idiosyncratically low desire to have kids/high cost to have kids) start to lean away from the plan.

Yeah, I was thinking in broad strokes there. I agree that there is a margin at which point people switch from choosing to have kids to choosing not to have kids and that moving that margin to a place where having kids is less net-positive will cause some people to choose to have fewer kids.

My point was that the people on the margin are not people who will typically say "well, we were going to have two kids but now we're only going to have one because home-schooling"; they're people who will typically say "we're on the fence about having kids at all." Whereas most marginal effects relating to having kids (ie the cost of college) pertain to the former group, the bulk of marginal effects on reproduction pertaining to schooling stigmas pertain to the latter group.

Both the margin and the population density at the margin matter in terms of determining the effect. What I'm saying is that the population density at the margin relevant to schooling-stigmas is notably small.

However, I've actually been overstating my case here. The childfree rate in the US is currently around 15% which is much larger than I expected. The childfree rate for women with above a bachelor's degree is 25%. In absolute terms, these are not small numbers and I've gotta admit that this indicates a pretty high population density at the margin.

(I assume you meant pressure in favour of home-schooling?) Please note that I never said it had a high effect relative to other things: merely that the effect existed and was large and negative enough to make it worthwhile for homeschooling advocates to change course.

Per the above stats, I've updated to agree with this claim.

Comment by isnasene on DanielFilan's Shortform Feed · 2020-01-29T01:53:57.268Z · score: 1 (1 votes) · LW · GW
Developed countries already have below-replacement fertility (according to this NPR article, the CDC claims that the US has been in this state since 1971), so apparently you can have pressures that outweigh pressures to have children.
...
Rich people have fewer kids than poor people and it doesn't seem strange to me to imagine that that's partly due to the fact that each child comes at higher expected cost.

I think the crux of our perspective difference is that we model the decrease in reproduction differently. I tend to view poor people and developing countries having higher reproduction rates as a consequence of less economic slack. That is to say, people who are poorer have more kids because those kids are decent long-term investments overall (ie old-age support, help-around-the-house). In contrast, wealthy people can make way more money by doing things that don't involve kids.

This can be interpreted in two ways:

  • Wealthier people see children as higher cost and elect not to have children because of the costs

or

  • Wealthier people are not under as much economic pressure so have fewer children because they can afford to get away with it

At the margin, both of these things are going on at the same time. Still, I attribute falling birthrates as mostly due to the latter rather than the former. So I don't quite buy the claim that falling birth-rates have been dramatically influenced by greater pressures.

Of course, Wei Dai indicates that parental investment definitely has an effect so maybe my attribution isn't accurate. I'd be pretty interested in seeing some studies/data trying to connect falling birthrates to the cultural demands around raising children.

...

Also, my understanding of the pressures re:homeschooling is something like this:

  • The social stigma against not having kids is satisficing. Having one kid (below replacement level) hurts you dramatically less than having zero kids
  • The capacity to home-school is roughly all-or-nothing. Home-schooling one kid immediately scales to home-schooling all your kids.
  • I doubt the stigma for schooling would punish a parent who sends two kids to school more than a parent who sends one kid to school

This means that, for a given family, you essentially choose between having kids and home-schooling all of them (the expected cost of home-schooling doesn't scale with the number of children) or having no kids (maximum social penalty). Electing for "no kids" seems like a really undesirable trade-off for most people.

There are other negative effects but they're more indirect. This leads me to believe that, compared to other pressures against having kids, schooling stigmas will have an unusually low marginal effect.

Presumably this is not true in a world where many people believe that schools are basically like prisons for children, which is a sentiment that I do see and seems more memetically fit than "homeschooling works for some families but not others".

Interesting -- my bubble doesn't really have a "schools are like prisons" group. In any case, I agree that this is a terrible meme. To be fair though, a lot of schools do look like prisons. But this definitely shouldn't be solved by home-schooling; it should be solved by making schools that don't look like prisons.

Comment by isnasene on DanielFilan's Shortform Feed · 2020-01-26T23:29:15.524Z · score: 9 (2 votes) · LW · GW
A bunch of my friends are very skeptical of the schooling system and promote homeschooling or unschooling as an alternative. I see where they're coming from, but I worry about the reproductive consequences of stigmatising schooling in favour of those two alternatives.

While I agree that a world where home/un-schooling is a norm would result in greater time-costs and a lower child-rate, I don't think that promoting home/un-schooling as an alternative will result in a world where home/un-schooling is normative. Because of this, I don't think that promoting home/un-schooling as an alternative to the system carries any particularly broad risks.

Here's my reasoning:

  • I expect the associated stigmas and pressures for having kids to always dwarf the associated stigmas and pressures against having kids if they are not home/un-schooled. Having kids is an extremely strong norm both because of the underpinning evolutionary psychology and because a lot of life-style patterns after thirty are culturally centered around people who have kids.
  • Despite its faults, public school does the job pretty well for most people. This applies to the extent that the opportunity cost of home/un-schooling instead of building familial wealth probably outweighs the benefits for most people. Thus, I don't believe that promoting home/un-schooling is scalable to everyone.
  • Lots of rich people who have the capacity to home/un-school and who dislike the school system decide not to do that. Instead they (roughly speaking) coordinate towards expensive private schools outside the public system. I doubt that this has caused a significant number of people to avoid having children for fear of not sending them to a fancy boarding school.
  • Even if the school system gets sufficiently stigmatised, I actually expect that the incentives will naturally align around institutional schooling outside the system for most children. Comparative advantages exist and local communities will exploit them.
  • Home/un-schooling often already involves institutional aspects. Explicitly, home/un-schooled kids would ideally have outlets for peer-to-peer interactions during the school-day and these are often satisfied through community coordination

I grant that maybe increased popularity of home/un-schooling could reduce the reproduction rate by an extremely minor amount on the margin. But I don't think that amount is anywhere near the size of, say, the marginal reduction in reproduction from people who claim they don't want to have kids because of global warming.

And as someone who got screwed by the school system, I really wish that when I asked my parents about home/un-schooling, there was some broader social movement that would incentivize them to actually listen.

Comment by isnasene on Material Goods as an Abundant Resource · 2020-01-26T02:33:19.243Z · score: 22 (9 votes) · LW · GW

Great series! I broadly agree with it and the approach. However, this post has given me a vagueish "no matter how many things are abundant, the economic rat-race is inescapable" vibe which I disagree with.

Towards the end, a grocer explains the new status quo eloquently:
"... not very many people will buy beans and chuck roast, when they can eat wild rice and smoked pheasant breast. So, you know what I've been thinking? I think what we'll have to have, instead of a supermarket, is a sort of super-delicatessen. Just one item each of every fancy food from all over the world, thousands and thousands, all different"

I see the idea here but I disagree with it. I'm a human for goodness sake! I eat food to stay alive and to stay healthy and for the pure pleasure of eating it! Neither my time nor my money is a worthy trade-off for special unique food if it's not going to do any of those things significantly better. I grant that there might be a niche market for this kind of thing but, the way I see it, being free of the need for material goods will free people from the rat-race: It will let them completely abandon their existing financial strategies insofar as those strategies were previously necessary to keep them alive.

This is what the FIRE community does. They save up enough money so that they only participate in the economy as much as it actually improves their lives.

Why? Because material goods are not the only economic constraints. If a medieval book-maker has an unlimited pile of parchment, then he’ll be limited by the constraint on transcriptionists. As material goods constraints are relaxed, other constraints become taut.

Broadly speaking, I agree with the description here of economic supply chains as a sequence of steps (ie potential bottlenecks). But, in general, I perceive these sequences of steps as finite. For example, the book-maker has unlimited parchment and is then limited by transcriptionists, so the book-maker automates transcription and is limited by writers, so the book-maker automates writing (or it turns out the number of writers wasn't a real bottleneck) -- so what then? Bookstores are shuttering. I have the internet, and the last time I handed money to anyone in the book-making supply chain was because I wanted something to read on the plane.

Again, maybe there's a niche market for more unique books or more elegantly bound collectible books but that's a market I can opt out of. It's superfluous to me having a good life.

Here’s one good you can’t just throw on a duplicator: a college degree.
A college degree is more than just words on paper. It’s a badge, a mark of achievement. You can duplicate the badge, but that won’t duplicate the achievement.

I didn't get my college degree to signal social status. I got it because I wanted to get a nice job. I wanted to get a nice job so I could get money. I wanted to get money so that I could use it towards the aim of having a fulfilling life. Give me all the material goods and I would've probably just learned botany instead.

So, to me, college degrees (and other intangible badges of achievement) haven't become the things they are because of abundance, they've become the things they are because social status will be instrumental to gaining important life-enhancing things for as long as those things are not abundant.

Social status might be vaguely zero-sum but, beyond a couple friends, it's not critical for living a good life. Given the tools to live a good life, I imagine many people just opting out of the economy. I'm not going to work for eight hours a day to zero-sum compete for more social status alone.

But given that things have in fact become way more abundant, why haven't we seen more of this opting out happening? Two answers:

1.

We have. Besides the FIRE community, we see it in retirees. I've personally seen it in a number of middle-aged adults who realize that trying to find another job in this tech'd up world just isn't worth the hassle when they have enough to get by on.

2.

With all this talk of zero-sum games, the last piece of the post-scarcity puzzle should come as no surprise: political rent-seeking.
Once we accept that economics does not disappear in the absence of material scarcity, that there will always be something scarce, we immediately need to worry about people creating artificial scarcity to claim more wealth.

Yep. I'd generalize rent-seeking beyond just politics and into the realm of moral-maze rent-seeking, but yep. I'd actually view the college-corporate complex as a subtrope of this. Colleges as a whole (for reasons of inadequate equilibria) collectively own the keys to long-term social stability (excluding people who want to go into trades, and who are confident that those trades won't go away). They do this and charge a heckuva lot of money for it despite not actually providing much intrinsic value beyond fitting well into the existing incentive structure.

Remove material goods as a taut economic constraint, and what do you get? The same old rat race. Material goods no longer scarce? Sell intangible value. Sell status signals. There will always be a taut constraint somewhere.

Status symbol competition doesn't scare me in a post-material-scarcity world; I can do just fine without it. What terrifies me is the possibility of rent-seekers (or complex incentive structures) systematically inducing artificial scarcity into material that I care about despite it not literally being scarce.

Comment by isnasene on Matt Goldenberg's Short Form Feed · 2020-01-25T01:29:04.396Z · score: 4 (5 votes) · LW · GW
And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that they feel like should be their tribe, but they really don't feel a close connection to most people in it, and feel alienated as a result.

As someone who has considered making the Pilgrimage To The Bay for precisely that reason, and as someone who decided against it partly due to that particular concern, I thank you for giving me a data-point on it.

Being a rationalist in the real world can be hard. The set of people who actually worry about saving the world, understanding their own minds and connecting with others is pretty small. In my bubble at least, picking a random hobby, incidentally becoming friends with someone at it, incidentally getting slammed, and incidentally having an impromptu conversation has been the best-performing strategy so far in terms of success per opportunity-cost. As a result, looking from the outside at a rationalist community that cares about all these things looks like a fantastical life-changing ideal.

But, from the outside view, all the people I've seen who've aggressively targeted those ideals have gotten crushed. So I've adopted a strategy of Not Doing That.

(pssst: this doesn't just apply to the rationalist community! it applies to any community oriented around values disproportionately held by individuals who have been disenfranchised by broader society in any way! there are a lot of implications here and they're all mildly depressing!)

Comment by isnasene on Predictors exist: CDT going bonkers... forever · 2020-01-20T17:40:19.659Z · score: 1 (1 votes) · LW · GW

Can you clarify what you mean by "successfully formalised"? I'm not sure if I can answer that question but I can say the following:

Stanford's encyclopedia has a discussion of ratifiability dating back to the 1960s and (by the 1980s) it has been applied to both EDT and CDT (which I'd expect, given that constraints on having an accurate world model should be independent of decision theory). This gives me confidence that it's not just a random Less Wrong thing.

Abram Demski from MIRI has a whole sequence on when CDT=EDT which leverages ratifiability as a sub-assumption. This gives me confidence that ratifiability is actually onto something (the Less Wrong stamp of approval is important!).

Whether any of this means that it's been "successfully formalised", I can't really say. From the outside-view POV, I literally did not know about the conventional version of CDT until yesterday. Thus, I do not really view myself as someone currently capable of verifying the extent to which a decision theory has been successfully formalised. Still, I consider this version of CDT old enough historically and well-enough-discussed on Less Wrong by Known Smart People that I have high confidence in it.


Comment by isnasene on Predictors exist: CDT going bonkers... forever · 2020-01-20T07:12:22.845Z · score: 1 (1 votes) · LW · GW

Having done some research, it turns out the thing I was actually pointing to was ratifiability and the stance that any reasonable separation of world-modeling and decision-selection should put ratifiability in the former rather than the latter. This specific claim isn't new: From "Regret and Instability in causal decision theory":

Second, while I agree that deliberative equilibrium is central to rational decision making, I disagree with Arntzenius that CDT needs to be ammended in any way to make it appropriately deliberational. In cases like Murder Lesion a deliberational perspective is forced on us by what CDT says. It says this: A rational agent should base her decisions on her best information about the outcomes her acts are likely to causally promote, and she should ignore information about what her acts merely indicate. In other words, as I have argued, the theory asks agents to conform to Full Information, which requires them to reason themselves into a state of equilibrium before they act. The deliberational perspective is thus already a part of CDT

However, it's clear to me now that you were discussing an older, more conventional, version of CDT[1] which does not have that property. With respect to that version, the thought-experiment goes through but, with respect to the version I believe to be sensible, it doesn't[2].

[1] I'm actually kind of surprised that the conventional version of CDT is that dumb -- and I had to check a bunch of papers to verify that this was actually happening. Maybe if my memory had complied at the time, it would've flagged your distinguishing between CDT and EDT here from past LessWrong articles I've read like CDT=EDT. But this wasn't meant to be so I didn't notice you were talking about something different.

[2] I am now confident it does not apply to the thing I'm referring to -- the linked paper brings up "Death in Damascus" specifically as a place where ratifiable CDT does not fail.

Comment by Isnasene on [deleted post] 2020-01-20T06:14:34.366Z

When I first looked at these plots, I thought "ahhh, the top one has two valleys and the bottom one has two peaks. So, accounting for one reflecting error and the other reflecting accuracy, they capture the same behavior." But this isn't really what's happening.

Comparing these plots is a little tricky. For instance, the double-descent graph shows two curves -- "train error" (which can be interpreted as lack of confidence in model performance) and "test error" (which can be interpreted as lack of actual performance/lack of wisdom). Analogizing the double-descent curve to Dunning-Kruger might be easier if one just plots "test error" on the y-axis and "train error" on the x-axis. Or, better yet, 1 - error for both axes (a toy numerical sketch of this reading follows the list below).

But actually trying to dig into the plots in this way is confusing. In the underfitted regime, there's a pretty high level of knowledge (ie test error near the minimum value) with pretty low confidence (ie train error far from zero). In the overfitted regime, we then get double-descent into a higher level of knowledge (ie test error at the minimum) but now with extremely high confidence. Maybe we can tentatively interpret these minima as the "valley of despair" and "slope of enlightenment" but

  • In both cases, our train error is lower than our test error -- implying a disproportionate amount of confidence all the time. This is not consistent with the Dunning-Kruger effect
    • The "slope of enlightenment" especially has way more unjustified confidence (ie train error near zero) despite still having some objectively pretty high test error (around 0.3). This is not consistent with the Dunning-Kruger effect
  • We see the same test error associated with both a high train error (in the underfit regime) and with a low train error (in the overfit regime). The Dunning-Kruger effect doesn't capture the potential for different levels of confidence at the same level of wisdom
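Here's a minimal numerical sketch of that test-error-vs-train-error reading, using random Fourier features fit by minimum-norm least squares (a standard toy setting where double descent tends to appear; all specific choices -- the sine target, the noise level, the feature scale and counts -- are my own illustrative assumptions, not anything from the original plots):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression task so the interpolation threshold
# (number of features == number of training points) is easy to cross.
n_train, n_test = 20, 500
x_tr = rng.uniform(-1, 1, n_train)
x_te = rng.uniform(-1, 1, n_test)
target = lambda x: np.sin(2 * np.pi * x)
y_tr = target(x_tr) + 0.3 * rng.normal(size=n_train)
y_te = target(x_te)

for p in [2, 5, 10, 15, 20, 25, 40, 100, 400]:
    # Random Fourier-style features, shared between train and test.
    w = rng.normal(scale=5.0, size=p)
    b = rng.uniform(0, 2 * np.pi, p)
    phi_tr = np.cos(np.outer(x_tr, w) + b)
    phi_te = np.cos(np.outer(x_te, w) + b)
    # lstsq returns the minimum-norm solution once the system is
    # underdetermined, which is what drives the second descent.
    coef, *_ = np.linalg.lstsq(phi_tr, y_tr, rcond=None)
    train_err = np.mean((phi_tr @ coef - y_tr) ** 2)
    test_err = np.mean((phi_te @ coef - y_te) ** 2)
    # Per the axis-swap suggestion: "confidence" ~ 1 - train_err,
    # "wisdom" ~ 1 - test_err.
    print(f"p={p:4d}  train_err={train_err:.3f}  test_err={test_err:.3f}")
```

If the sweep behaves like the usual double-descent picture, train error hits roughly zero at the interpolation threshold well before test error recovers, which mirrors the "unjustified confidence" point in the bullets above.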

To me, the above deviations from Dunning-Kruger make sense. My mechanistic understanding of the effect is that it appears in fields of knowledge that are vast, but whose vastness can only be explored by those with enough introductory knowledge. So what happens is

  • You start out learning something new and you're not confident
  • You master the introductory material and feel confident that you get things
  • You now realize that your introductory understanding gives you a glimpse into the vast frontier of the subject
  • Exposure to this vast frontier reduces your confidence
  • But as you explore it, both your understanding and confidence rise again

And this process can't really be captured in a set-up with a fixed train and test set. Maybe it could show up in reinforcement learning though since exploration is possible.


Comment by isnasene on Mary Chernyshenko's Shortform · 2020-01-18T22:18:24.208Z · score: 7 (4 votes) · LW · GW

This reminds me a little bit of the posts on anti-memes. There's a way in which people are constantly updating their worldviews based on personal experience that

  • is useless in discussion because people tend not to update on other people's personal experience over their own,
  • is personally risky in adversarial contexts because personal information facilitates manipulation
  • is socially costly because the personal experience that people tend to update on is usually the kind of emotionally intense stuff that is viewed as inappropriate in ordinary conversation

And this means that there are a lot of ideas and worldviews produced by The Statistics which are never discussed or directly addressed in polite society. Instead, these emerge indirectly through particular beliefs which rely on arguments that obfuscate the reality.

Not only is this hard to avoid on a civilizational level; it's hard to avoid on a personal level: rational agents will reach inaccurate conclusions in adversarial (ie unlucky) environments.

Comment by isnasene on Underappreciated points about utility functions (of both sorts) · 2020-01-18T03:39:05.769Z · score: 4 (3 votes) · LW · GW

Thanks for the reply. I re-read your post and your post on Savage's proof and you're right on all counts. For some reason, it didn't actually click for me that P7 was introduced to address unbounded utility functions and boundedness was a consequence of taking the axioms to their logical conclusion.

Comment by isnasene on Underappreciated points about utility functions (of both sorts) · 2020-01-17T01:44:05.942Z · score: 1 (1 votes) · LW · GW

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]. As a result, I went on a tangent about infinity to begin exploring whether my modified notion of a utility function would break in ways that regular ones wouldn't.

Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?

I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need (a rough sketch of what these might look like follows the list below):

  • an axiom describing what it means for one infinite wager to be "strictly better" than another.
  • an axiom describing what kinds of infinite wagers it is rational to be indifferent towards
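As a purely hypothetical sketch (my own illustration, not something proposed in the post I'm replying to), the first axiom might take the shape of a pointwise dominance condition over countable lotteries, and the second an indifference condition:

```latex
% Hypothetical sketch (assumes amsmath). Lotteries L = (p_i, o_i)_{i \in \mathbb{N}}
% and L' = (p_i, o_i')_{i \in \mathbb{N}} share the same probabilities p_i.
\begin{align*}
\textbf{(Dominance)} \quad
  & o_i \succeq o_i' \ \forall i, \ \text{and } o_j \succ o_j'
    \text{ for some } j \text{ with } p_j > 0
    \ \Longrightarrow\ L \succ L' \\
\textbf{(Indifference)} \quad
  & o_i \sim o_i' \ \forall i
    \ \Longrightarrow\ L \sim L'
\end{align*}
```

Whether axioms of roughly this shape can coexist with the VNM axioms -- and whether they quietly force boundedness back in -- is exactly the kind of question the rest of this comment is gesturing at.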

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function. If it didn't, that'd be interesting. In any case, whatever happens will tell us more about either the structure our preferences should follow or the structure that our rationality-axioms should follow (if we cannot find a system).

Of course, maybe my modification of the idea of a utility function turns out to show such a decisioning-system exists by construction. In this case, modifying the idea of a utility function would help tell me that my preferences should follow the structure of that modification as well.

Does that address the question?

[1] From your post:

We should say instead, preferences are not up for grabs -- utility functions merely encode these, remember. But if we're stating idealized preferences (including a moral theory), then these idealized preferences had better be consistent -- and not literally just consistent, but obeying rationality axioms to avoid stupid stuff. Which, as already discussed above, means they'll correspond to a bounded utility function.
Comment by isnasene on Go F*** Someone · 2020-01-16T07:03:41.797Z · score: 17 (9 votes) · LW · GW

I had fun reading this post. But as someone who has a number of meaningful relationships but doesn't really bother dating, I was also confused about what to make of it.

Also, given that this is Rationalism-Land, its worth keeping in mind that many people who don't date got there because they have an unusually low prior on the idea that they will find someone they can emotionally connect with. This prior is also often caused by painful experience that advice like "date more!" will tacitly remind them of.

Anyway, things that I agree with you on:

  • Dating is hard
  • Self-improvement is relatively easy compared to being emotionally vulnerable
  • I hate the saying "you do you." I emotionally interpret it as "here's a shovel; bury yourself with it"

Things I disagree with you on:

  • We aren't more lonely because of aggressively optimizing relationships for status rather than connection; we're more lonely because the opportunity cost of going on dates is unusually high. Many reasons for this:
    • It's easier than ever to unilaterally do cool things (ie learn guitar from the internet, buy arts and crafts off Amazon). And, as you noted, there's a cottage industry for making this as awesome as possible
    • It's easier than ever to defect from your local community and hang out with online people who "get" you
    • This causes a feedback loop that reduces the number of people looking to date, which increases the effort it takes to date, which reduces the number of people looking to date. Everyone else is defecting, so I'm gonna defect too
  • I think the general conflation of "self-improvement" with "bragging about stuff on social media" is odd in the context you're discussing. People who aren't interested in the human connection of dates generally don't get much out of social media. At least in my bubble, people who are into self-improvement tend to do things like delete facebook.
  • If you're struggling to build financial capital, the goal is to keep doing that until you're financially secure. The goal very much isn't to refocus your efforts on going on hundreds of dates to learn how to make others happy.

Comment by isnasene on Predictors exist: CDT going bonkers... forever · 2020-01-16T01:05:33.484Z · score: 1 (1 votes) · LW · GW

[Comment edited for clarity]

Since when does CDT include backtracking on noticing other people's predictive inconsistency?

I agree that CDT does not include backtracking on noticing other people's predictive inconsistency. My assumption is that decision-theories (including CDT) take a world-map and output an action. I'm claiming that this post is conflating an error in constructing an accurate world-map with an error in the decision theory.

CDT cannot notice that Omega's prediction aligns with its hypothetical decision because Omega's prediction is causally "before" CDT's decision, so any causal decision graph cannot condition on it. This is why post-TDT decision theories are also called "acausal."

Here is a more explicit version of what I'm talking about. CDT makes a decision to act based on the expected value of its action. To produce such an action, we need to estimate an expected value. In the original post, there are two parts to this:

Part 1 (Building a World Model):

  • I believe that the predictor modeled my reasoning process and has made a prediction based on that model. This prediction happens before I actually instantiate my reasoning process
  • I believe this model to be accurate/quasi-accurate
  • I start unaware of what my causal reasoning process is so I have no idea what the predictor will do. In any case, the causal reasoning process must continue because I'm thinking.
  • As I think, I get more information about my causal reasoning process. Because I know that the predictor is modeling my reasoning process, this lets me update my prediction of the predictor's prediction.
  • Because the above step was part of my causal reasoning process and information about my causal reasoning process affects my model of the predictor's model of me, I must update on the above step as well
  • [The Dubious Step] Because I am modeling myself as CDT, I will make a statement intended to inverse the predictor. Because I believe the predictor is modeling me, this requires me to inverse myself. That is to say, every update my causal reasoning process makes to my probabilities is inversing the previous update
    • Note that this only works if I believe my reasoning process (but not necessarily the ultimate action) gives me information about the predictor's prediction.
  • The above leads to infinite regress

Part 2 (CDT)

  • Ask the world model what the odds are that the predictor said "one" or "zero"
  • Find the one with higher likelihood and inverse it

I believe Part 1 fails and that this isn't the fault of CDT. For instance, imagine the above problem with zero stakes such that decision theory is irrelevant. If you ask any agent to give the inverse of its probabilities that Omega will say "one" or "zero" with the added information that Omega will perfectly predict those inverses and align with them, that agent won't be able to give you probabilities. Hence, the failure occurs in building a world model rather than in implementing a decision theory.
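Here's a minimal sketch of the Part 1 failure (my own toy construction, purely illustrative): the agent tries to settle on a probability that the predictor says "one", but every update inverts the previous one, so the world-model step never produces a number for Part 2 to act on.

```python
# Toy model of Part 1 failing. The agent believes the predictor will match
# whatever the agent is currently leaning toward, and the agent wants to
# pick the inverse of the predictor -- so each update flips the last one.
def build_world_model(initial_p_one: float, max_steps: int = 1000):
    p_one = initial_p_one  # current credence that the predictor says "one"
    for _ in range(max_steps):
        # Lean toward inversing the predictor, given the current credence...
        my_leaning = "zero" if p_one >= 0.5 else "one"
        # ...but the predictor is believed to model this very leaning and match it.
        new_p_one = 1.0 if my_leaning == "one" else 0.0
        if new_p_one == p_one:   # this would count as convergence
            return p_one
        p_one = new_p_one
    return float("nan")          # no stable credence: nothing to hand to Part 2

print(build_world_model(0.9))    # nan -- the world model fails before CDT ever runs
```

The NaN here is the same "NaN" described in the original version of this comment below: the failure sits in estimating the predictor, not in the expected-value step CDT performs afterwards.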



-------------------------------- Original version

Since when does CDT include backtracking on noticing other people's predictive inconsistency?

Ever since the process of updating a causal model of the world based on new information was considered an epistemic question outside the scope of decision theory.

To see how this is true, imagine the exact same situation as described in the post with zero stakes. Then ask any agent with any decision theory about the inverse of the prediction it expects the predictor to make. The answer will always be "I don't know", independent of decision theory. Ask that same agent if it can assign probabilities to the answers and it will say "I don't know; every time I try to come up with one, the answer reverses."

All I'm trying to do is compute the probability that the predictor will guess "one" or "zero" and failing. The output of failing here isn't "well, I guess I'll default to fifty-fifty so I should pick at random"[1], it's NaN.

Here's a causal explanation:

  • I believe the predictor modeled my reasoning process and has made a prediction based on that model.
  • I believe this model to be accurate/quasi-accurate
  • I start unaware of what my causal reasoning process is so I have no idea what the predictor will do. But my prediction of the predictor depends on my causal reasoning process
  • Because my causal reasoning process is contingent on my prediction and my prediction is contingent on my causal reasoning process, I end up in an infinite loop where my causal reasoning process cannot converge on an actual answer. Every time it tries, it just keeps updating.
  • I quit the game because my prediction is incomputable
Comment by isnasene on Predictors exist: CDT going bonkers... forever · 2020-01-15T00:56:45.243Z · score: 3 (2 votes) · LW · GW

Decision theories map world models into actions. If you ever make a claim like "This decision-theory agent can never learn X and is therefore flawed", you're either misphrasing something or you're wrong. The capacity to learn a good world-model is outside the scope of what decision theory is[1]. In this case, I think you're wrong.

For example, suppose the CDT agent estimates the prediction will be "zero" with probability p, and "one" with probability 1-p. Then if p≥1/2, they can say "one", and have a probability p≥1/2 of winning, in their own view. If p<1/2, they can say "zero", and have a subjective probability 1−p>1/2 of winning.

This is not what a CDT agent would do. Here is what a CDT agent would do:

1. The CDT agent makes an initial estimate that the prediction will be "zero" with probability 0.9 and "one" with probability 0.1.

2. The CDT agent considers making the decision to say "one" but notices that Omega's prediction aligns with its actions.

3. Given that the CDT agent was just considering saying "one", the agent updates its initial estimate by reversing it. It declares "I planned on guessing one before but the last time I planned that, the predictor also guessed one. Therefore I will reverse and consider guessing zero."

4. Given that the CDT agent was just considering saying "zero", the agent updates its initial estimate by reversing it. It declares "I planned on guessing zero before but the last time I planned that, the predictor also guessed zero. Therefore I will reverse and consider guessing one."

5. The CDT agent realizes that, given the predictor's capabilities, its own prediction will be undefined

6. The CDT agent walks away, not wanting to waste the computational power

The longer and longer the predictor is accurate for, the higher and higher the CDT agent's prior becomes that its own thought process is causally affecting the estimate[2]. Since the CDT agent is embedded, it's impossible for the CDT agent to reason outside its thought process and there's no use in it nonsensically refusing to leave the game.

Furthermore, any good decision-theorist knows that you should never go up against a Sicilian when death is on the line[3].

[1] This is not to say that world-modeling isn't relevant to evaluating a decision theory. But in this case, we should be fully discussing things that may/may not happen in the actual world we're in and picking the most appropriate decision theory for this one. Isolated thought experiments do not serve this purpose.

[2] Note that, in cases where this isn't true, the predictor should get worse over time. The predictor is trying to model the CDT agent's predictions (which depend on how the CDT agent's actions affect its thought-process) without accounting for the way the CDT agent is changing as it makes decision. As a result, a persevering CDT agent will ultimately beat the predictor here and gain infinite utility by playing the game forever

[3] The Battle of Wits from the Princess Bride is isomorphic to the problem in this post