Ikaxas' Shortform Feed

post by Vaughn Papenhausen (Ikaxas) · 2018-01-08T06:19:40.370Z · LW · GW · 11 comments

As Raemon has suggested a format for a short-form content feed, I'm going to go ahead and make one. His explanation of the format:

I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:

  1. don't feel ready to be written up as a full post, or
  2. I think might be made worse (i.e. longer than they need to be) by the process of writing them up

I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.

Edit: For me at least, #2 also includes "writing them up as a full post would involve enough effort that the post would probably otherwise not get written."

11 comments

Comments sorted by top scores.

comment by Vaughn Papenhausen (Ikaxas) · 2020-02-16T02:21:53.641Z · LW(p) · GW(p)

Global coordination problems

I've said before [LW · GW] that I tentatively think that "foster global coordination" might be a good cause area in its own right, because it benefits so many other cause areas. I think it might be useful to have a term for the cause areas that global coordination would help. More specifically, a term for the concept "(reasonably significant) problem that requires global coordination to solve, or that global coordination would significantly help with solving." I propose "global coordination problem" (though I'm open to other suggestions). You may object "but coordination problem already has a meaning in game theory, this is likely to get confused with that." But global coordination problems are coordination problems in precisely the game theory sense (I think, feel free to correct me), so the terminological overlap is a benefit.
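To make the game-theory connection concrete, here is a minimal toy sketch (my own illustration, with made-up payoffs) of a stag-hunt-style coordination game: both mutual cooperation and mutual defection are Nash equilibria, but everyone prefers the cooperative one. Global coordination problems have this structure at scale; the hard part is getting everyone to the equilibrium they all prefer.

```python
# Toy stag-hunt-style payoff matrix (illustrative numbers only).
# Entries are (row player's payoff, column player's payoff).
PAYOFFS = {
    ("Cooperate", "Cooperate"): (4, 4),
    ("Cooperate", "Defect"):    (0, 3),
    ("Defect",    "Cooperate"): (3, 0),
    ("Defect",    "Defect"):    (2, 2),
}
ACTIONS = ["Cooperate", "Defect"]

def is_nash_equilibrium(row_action, col_action):
    """Neither player can gain by unilaterally switching actions."""
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    row_ok = all(PAYOFFS[(a, col_action)][0] <= row_payoff for a in ACTIONS)
    col_ok = all(PAYOFFS[(row_action, a)][1] <= col_payoff for a in ACTIONS)
    return row_ok and col_ok

for profile in PAYOFFS:
    if is_nash_equilibrium(*profile):
        print(profile, "is an equilibrium with payoffs", PAYOFFS[profile])
# Prints both ("Cooperate", "Cooperate") and ("Defect", "Defect"): the
# coordination problem is moving everyone to the equilibrium they all prefer.
```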

What are some examples of global coordination problems? Certain x-risks and global catastrophic risks (such as AI, bioterrorism, pandemic risk, and asteroid risk), climate change, some of the problems mentioned in The Possibility of an Ongoing Moral Catastrophe, as well as the general problem of ferreting out and fixing moral catastrophes, and almost certainly others.

In fact, it may be useful to think about a spectrum of problems, similar to Bostrom's Global Catastrophic Risk spectrum, organized by how much coordination is required to solve them. Analogous to Bostrom's spectrum, we could have: personal coordination problems (i.e. problems requiring no coordination with others, or perhaps only coordination with parts of oneself), local coordination problems, national coordination problems, global coordination problems, and transgenerational coordination problems.

Replies from: jeff-ladish
comment by Jeffrey Ladish (jeff-ladish) · 2020-02-20T00:22:19.868Z · LW(p) · GW(p)

Nuclear arms control and anti-proliferation efforts are big ones here. Other forms of arms control are important too.

comment by Vaughn Papenhausen (Ikaxas) · 2018-05-31T04:35:03.705Z · LW(p) · GW(p)

I'm currently reading Peter Godfrey-Smith's book Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. One thing I've learned from the book that surprised me a lot is that octopuses can differentiate between individual humans (for example, it's mentioned that at one lab, one of the octopuses had a habit of squirting jets of water at one particular researcher). If you didn't already know this, take a moment to let it sink in how surprising that is: octopuses, which 1.) are mostly nonsocial animals, 2.) have a completely different nervous-system structure that evolved on a completely different branch of the tree of life, and 3.) have no evolutionary history of interaction with humans, can recognize individual humans and differentiate them from other humans. I'm not sure, but I bet humans have a pretty hard time differentiating between individual octopuses.

I feel as though a fact this surprising[1] ought to produce a pretty strong update to my world-model. I'm not exactly sure what parts of my model need to update, but here are one or two possibilities (I don't necessarily think all of these are correct):

1. Perhaps the ability to recognize individuals isn't as tied to being a social animal as I had thought

2. Perhaps humans are easier to tell apart than I thought (i.e. humans have more distinguishing features, or these distinguishing features are larger/more visually noticeable, etc., than I thought)

3. Perhaps the ability to distinguish individual humans doesn't require a specific psychological module, as I had thought, but rather falls out of a more general ability to distinguish objects from each other

4. Perhaps I'm overestimating how fine-grained the octopus's ability to distinguish humans is. I.e. maybe that person was the only one in the lab with a particular hair color, and they can't distinguish the rest of the people (though note, another example given in the book was that one octopus liked to squirt **new people**, people it hadn't seen regularly in the lab before. This wouldn't mesh very well with the "octopuses can only make coarse-grained distinctions between people" hypothesis)

Those are the only ones I can come up with right now; I'd welcome more thoughts on this in the comments. At the moment, I'm leaning most strongly towards 2, plus the thought that 3 is partially right; namely, perhaps there's a special module for this in **humans**, but for octopuses it **does** fall out of a general ability to distinguish objects from each other, and the reason that ability is enough is that different humans have more/more obvious distinguishing characteristics than I had thought.

[1] This footnote serves to flag the Mind Projection Fallacy [LW · GW] inherent in calling something "surprising," rather than "surprising-to-my-model."

Replies from: Raemon
comment by Raemon · 2018-05-31T05:48:56.142Z · LW(p) · GW(p)

This comment is great both for the neat facts about octopuses, and for the awareness of "man, I should sure make an update here", and then actually doing it. :)

comment by Vaughn Papenhausen (Ikaxas) · 2018-04-22T01:11:51.602Z · LW(p) · GW(p)

Musings on Metaethical Uncertainty

How should we deal with metaethical uncertainty? By "metaethics" I mean the metaphysics and epistemology of ethics (and not, as is sometimes meant in this community [LW(p) · GW(p)], highly abstract/general first-order ethical issues).

One answer is this: insofar as some metaethical issue is relevant for first-order ethical issues, deal with it as you would any other normative uncertainty. And insofar as it is not relevant for first-order ethical issues, ignore it (discounting, of course, intrinsic curiosity and any value knowledge has for its own sake).

Some people think that normative ethical issues ought to be completely independent of metaethics: "The whole idea [of my metaethical naturalism] is to hold fixed ordinary normative ideas and try to answer some further explanatory questions" (Mark Schroeder, "What Matters About Metaethics?", in P. Singer (ed.), Does Anything Really Matter? Essays on Parfit on Objectivity, OUP, 2017, pp. 218-19). Others (e.g. Tristram McPherson, For Unity in Moral Theorizing, PhD dissertation, Princeton, 2008) believe that metaethical and normative ethical theorizing should inform each other. For the first group, my suggestion in the previous paragraph recommends ignoring metaethics entirely (again, setting aside any intrinsic motivation to study it); for the second, it recommends pursuing only those areas of metaethics that are likely to influence conclusions in normative ethics.

In fact, one might also take this attitude to certain questions in normative ethics. There are some theories in normative ethics that are extensionally equivalent: they recommend the exact same actions in every conceivable case. For example, some varieties of consequentialism can mimic certain forms of deontology, with the only differences between the theories being the reasons they give for why certain actions are right or wrong, not which actions they recommend. According to this way of thinking, these theories are not worth deciding between.

We might suggest the following method for ethical and metaethical theorizing: start with some set of decisions you're unsure about. If you are considering whether to investigate some ethical or metaethical issue, first ask yourself if it would make a difference to at least one of those decisions. If it wouldn't, ignore it. This seems to have a certain similarity with verificationism: if something wouldn't make a difference to at least some conceivable observation, then it's "metaphysical" and not worth talking about. Given this, it may be vulnerable to some of the same critiques as positivism, though I'm not sure, since I'm not very familiar with those critiques and the replies to them.

Note that I haven't argued for this position, and I'm not even entirely sure I endorse it (though I also suspect that it will seem almost laughably obvious to some). I just wanted to get it out there. I may write a top-level post later exploring these ideas with more rigor.

See also: Paul Graham on How To Do Philosophy

comment by Vaughn Papenhausen (Ikaxas) · 2018-03-15T18:59:56.785Z · LW(p) · GW(p)

Self-Defeating Reasons

Epistemic Effort: Gave myself 15 minutes to write this, plus a 5 minute extension, plus 5 minutes beforehand to find the links.

There's a phenomenon that I've noticed recently, and the only name I can come up with for it is "self-defeating reasons," but I don't think this captures it very well (or at least, it's not catchy enough to be a good handle for this). This is just going to be a quick post listing 3 examples of this phenomenon, just to point at it. I may write a longer, more polished post about it later, but if I didn't write this quickly it would not get written.

First example:

A few days ago, Kaj Sotala attempted to explain some of the Fuzzy System 1 Stuff that has been getting attention recently. In the course of this explanation, in the section called "Understanding Suffering," he pointed out that, roughly: 1. If you truly understand the nature of suffering, you cease to suffer. You still feel all of the things that normally bring you suffering, but they cease to be aversive. This is because once you understand suffering, you realize that it is not something that you need to avoid. 2. If you use 1 as your motivation to try to understand suffering, you will not be able to do so. This is because your motivation for trying to understand suffering is to avoid suffering, and the whole point was that suffering isn't actually something that needs to be avoided. So, the way to avoid suffering is to realize that you don't need to avoid it.

Edit: forgot to add this illustrative quote from Kaj's post:

"You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself. In trying to reject the belief that making a good impression is important, and trying to do this with the motive of making a good impression, you just reinforce the belief that this is important. If you want to actually defuse from the belief, your motive for doing so has to come from somewhere else than the belief itself."

Second example:

The Moral Error Theory states that:

although our moral judgments aim at the truth, they systematically fail to secure it. The moral error theorist stands to morality as the atheist stands to religion. ... The moral error theorist claims that when we say “Stealing is wrong” we are asserting that the act of stealing instantiates the property of wrongness, but in fact nothing instantiates this property (or there is no such property at all), and thus the utterance is untrue.

The Normative Error Theory is the same, except it applies not only to moral judgments but also to other normative judgments, where "normative judgments" is taken to include at least judgments about self-interested reasons for action and (crucially) reasons for belief, as well as moral reasons. Bart Streumer claims that we are literally unable to believe this broader error theory, for the following reasons: 1. This error theory implies that there is no reason to believe this error theory (there are no reasons at all, so a fortiori there are no reasons to believe this error theory), and anyone who understands it well enough to be in a position to believe it would have to know this. 2. We can't believe something if we believe that there is no reason to believe it. 3. Therefore, we can't believe this error theory. Again, believing it would be in a certain way self-defeating.

Third example (Spoilers for Scott Alexander's novel Unsong):

Gur fgbevrf bs gur Pbzrg Xvat naq Ryvfun ora Nohlnu ner nyfb rknzcyrf bs guvf "frys-qrsrngvat ernfbaf" curabzraba. Gur Pbzrg Xvat pna'g tb vagb Uryy orpnhfr va beqre gb tb vagb Uryy ur jbhyq unir gb or rivy. Ohg ur pna'g whfg qb rivy npgf va beqre gb vapernfr uvf "rivy fpber" orpnhfr nal rivy npgf ur qvq jbhyq hygvzngryl or va gur freivpr bs gur tbbq (tbvat vagb Uryy va beqre gb qrfgebl vg), naq gurersber jbhyqa'g pbhag gb znxr uvz rivy. Fb ur pna'g npphzhyngr nal rivy gb trg vagb Uryy gb qrfgebl vg. Ntnva, uvf ernfba sbe tbvat vagb Uryy qrsrngrq uvf novyvgl gb npghnyyl trg vagb Uryy.

What all of these examples have in common is that someone's reason for doing something directly makes it the case that they can't do it. Unless they can find a different reason to do the thing, they won't be able to do it at all.

Replies from: None
comment by [deleted] · 2018-05-18T21:42:13.879Z · LW(p) · GW(p)

This feels, at least surface-level, similar to what I was trying to get at here [LW · GW] about how things can be self-defeating. Do you also think the connection is there?

comment by Vaughn Papenhausen (Ikaxas) · 2018-01-09T17:28:23.220Z · LW(p) · GW(p)

A couple of meta-notes about shortform content and the frontpage "comments" section:

  • It always felt weird to me to have the comments as their own section on the frontpage, but I could never quite figure out why. Well, I think I've figured it out: most comments are extremely context-dependent -- without having read the post one usually can't understand a top-level comment, and it's even worse for comments in the middle of a long thread. So aggregating them all together feels not particularly useful: the usual optimal reading order is to read the post and then its comments, so getting to the comments from the post is better than getting to them from a general comments-aggregator. I have found it more useful than I expected, however, for 1.) discovering posts I otherwise wouldn't have, because I'm intrigued by one of the comments and want to understand the context, and 2.) discovering new comments on posts I've already read but wouldn't have thought to check back on (I think this is probably the best use-case).
  • Note that comments on dedicated shortform-content posts like this one don't have this problem (or at least have it to a lesser degree) because they're supposed to be standalone posts, rather than building on the assumptions and framework laid out in a top-level post.
  • So, one way I think this shortform-content format could be expanded is with a tagging system similar to the one recently implemented for top-level posts, but in particular with a tag that filters for all comments on posts with "shortform" in the title (that is to say, it doesn't show the posts with "shortform" in the title themselves, but rather any comment made on such a post; see the sketch below). That way, not only can anybody create one of these shortform feeds, but people can see these shortform posts without having to sort through all the regular comments, which differ from shortform posts in the way I outlined above.
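Here is a minimal sketch of the kind of filter I have in mind. This is illustrative Python only; the data shapes are hypothetical and this is not LessWrong's actual schema or API.

```python
# Hypothetical sketch of the suggested filter: keep comments whose *parent
# post* has "shortform" in its title (the data shapes below are made up).

def shortform_comments(comments, posts_by_id):
    """Return comments made on posts whose title contains "shortform".

    comments: iterable of dicts, each with a "post_id" key
    posts_by_id: dict mapping post id -> dict with a "title" key
    """
    return [
        comment for comment in comments
        if "shortform" in posts_by_id[comment["post_id"]]["title"].lower()
    ]
```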
Replies from: Raemon
comment by Raemon · 2018-01-10T01:43:46.663Z · LW(p) · GW(p)

Roughly agreed (at least with the underlying issues).

We have some vague plans for what I think are strict improvements over the current status quo (on the front page, making it so you can quickly see all new comments from a given post, and making it so you can easily see the parents of a comment on the frontpage to get more context), as well as more complex solutions roughly in the direction of what you suggest in the third bullet point (although with different implementation details).

comment by Vaughn Papenhausen (Ikaxas) · 2018-08-16T03:57:06.686Z · LW(p) · GW(p)

I said in this comment [LW(p) · GW(p)] that I would post an update as to whether or not I had done deep reflection (operationalized as 5 days = 40 hours cumulatively) on AI timelines by August 15th. As it turns out, I have not done so. I had one conversation that caused me to reflect that perhaps timelines are not as key a variable in my decision process (about whether to drop everything and try to retrain to be useful for AI safety) as I thought they were, but that is the extent of it. I'm not going to commit to doing anything further with this right now, because I don't think that would be useful.

comment by Vaughn Papenhausen (Ikaxas) · 2018-06-02T17:36:45.275Z · LW(p) · GW(p)

I think one of the key distinctions between content that feels "shortform" and content that feels okay to post as a top-level post is that shortform content is content that doesn't feel important/well-developed/long/something enough to have a title. Now, this can't be the whole story, because I have several posts on this very shortform feed that have titles, but it feels like an important piece of the distinction.