Rob B's Shortform Feed

post by Rob Bensinger (RobbBB) · 2019-05-10T23:10:14.483Z · score: 19 (3 votes) · 16 comments

This is a repository for miscellaneous short things I want to post. Other people are welcome to make top-level comments here if they want. (E.g., questions for me you'd rather discuss publicly than via PM; links you think will be interesting to people in this comment section but not to LW as a whole; etc.)

16 comments

comment by Rob Bensinger (RobbBB) · 2019-09-08T02:58:32.070Z · score: 23 (7 votes)

A Facebook comment I wrote in February, in response to the question 'Why might having beauty in the world matter?':

I assume you're asking about why it might be better for beautiful objects in the world to exist (even if no one experiences them), and not asking about why it might be better for experiences of beauty to exist.

[... S]ome reasons I think this:

1. If it cost me literally nothing, I feel like I'd rather there exist a planet that's beautiful, ornate, and complex than one that's dull and simple -- even if the planet can never be seen or visited by anyone, and has no other impact on anyone's life. This feels like a weak preference, but it helps get a foot in the door for beauty.

(The obvious counterargument here is that my brain might be bad at simulating the scenario where there's literally zero chance I'll ever interact with a thing; or I may be otherwise confused about my values.)

2. Another weak foot-in-the-door argument: People seem to value beauty, and some people claim to value it terminally. Since human value is complicated, messy, and idiosyncratic (compare person-specific ASMR triggers, nostalgia triggers, or culinary preferences), and since terminal and instrumental values are easily altered and interchanged in our brains, our prior should be that at least some people really do have weird preferences like that at least some of the time.

(And if it's just a few other people who value beauty, and not me, I should still value it for the sake of altruism and cooperativeness.)

3. If morality isn't "special" -- if it's just one of many facets of human values, and isn't a particularly natural-kind-ish facet -- then it's likelier that a full understanding of human value would lead us to treat aesthetic and moral preferences as more coextensive, interconnected, and fuzzy. If I can value someone else's happiness inherently, without needing to experience or know about it myself, it then becomes harder to say why I can't value non-conscious states inherently; and "beauty" is an obvious candidate. My preferences aren't all about my own experiences, and they aren't simple, so it's not clear why aesthetic preferences should be an exception to this rule.

4. Similarly, if phenomenal consciousness is fuzzy or fake, then it becomes less likely that our preferences range only and exactly over subjective experiences (or their closest non-fake counterparts). Which removes the main reason to think unexperienced beauty doesn't matter to people.

Combining the latter two points with the literature on emotions like disgust and purity, which have both moral and non-moral aspects, it seems plausible that the extrapolated versions of preferences like "I don't like it when other sentient beings suffer" could turn out to have aesthetic aspects or interpretations, like "I find it ugly for brain regions to have suffering-ish configurations".

Even if consciousness is fully a real thing, it seems as though a sufficiently deep reductive understanding of it should lead us to understand and evaluate it similarly whether we're thinking in intentional/psychologizing terms or just thinking about the physical structure of the corresponding brain state. Ideally, we shouldn't be more outraged by a world-state under one description than under an equivalent description.

But then it seems less obvious that the brain states we care about should exactly correspond to the ones that are conscious, with no other brain states mattering; and aesthetic emotions are one of the main ways we relate to things we're treating as physical systems.

As a concrete example, maybe our ideal selves would find it inherently disgusting for a brain state that sort of almost looks conscious to go through the motions of being tortured, even when we aren't the least bit confused or uncertain about whether it's really conscious, just because our terminal values are associative and symbolic. I use this example because it's an especially easy one to understand from a morality- and consciousness-centered perspective, but I expect our ideal preferences about physical states to end up being very weird and complicated, and not to end up being all that much like our moral intuitions today.

Addendum: As always, this kind of thing is ridiculously speculative and not the kind of thing to put one's weight down on or try to "lock in" for civilization. But it can be useful to keep the range of options in view, so we have them in mind when we figure out how to test them later.

comment by jimrandomh · 2019-09-09T23:56:10.224Z · score: 10 (2 votes)

On a somewhat more meta level: heuristically speaking, it seems wrong and dangerous for the answer to "which expressed human preferences are valid?" to be anything other than "all of them". There's a common pattern in metaethics which looks like:

1. People seem to have preference X

2. X is instrumentally valuable as a source of Y and Z. The instrumental-value relation explains how the preference for X was originally acquired.

3. [Fallacious] Therefore preference X can be ignored without losing value, so long as Y and Z are optimized.

In the human brain algorithm, if you optimize something instrumentally for a while, you start to value it terminally. I think this is the source of a surprisingly large fraction of our values.

comment by Rob Bensinger (RobbBB) · 2019-09-08T20:18:23.896Z · score: 2 (1 votes)

Old discussion of this on LW: https://www.lesswrong.com/s/fqh9TLuoquxpducDb/p/synsRtBKDeAFuo7e3

comment by Rob Bensinger (RobbBB) · 2019-09-23T16:45:59.284Z · score: 17 (5 votes)

Rolf Degen, summarizing part of Barbara Finlay's "The neuroscience of vision and pain":

Humans may have evolved to experience far greater pain, malaise and suffering than the rest of the animal kingdom, due to their intense sociality giving them a reasonable chance of receiving help.

From the paper:

Several years ago, we proposed the idea that pain, and sickness behaviour had become systematically increased in humans compared with our primate relatives, because human intense sociality allowed that we could ask for help and have a reasonable chance of receiving it. We called this hypothesis ‘the pain of altruism’ [68]. This idea derives from, but is a substantive extension of Wall’s account of the placebo response [43]. Starting from human childbirth as an example (but applying the idea to all kinds of trauma and illness), we hypothesized that labour pains are more painful in humans so that we might get help, an ‘obligatory midwifery’ which most other primates avoid and which improves survival in human childbirth substantially ([67]; see also [69]). Additionally, labour pains do not arise from tissue damage, but rather predict possible tissue damage and a considerable chance of death. Pain and the duration of recovery after trauma are extended, because humans may expect to be provisioned and protected during such periods. The vigour and duration of immune responses after infection, with attendant malaise, are also increased. Noisy expression of pain and malaise, coupled with an unusual responsivity to such requests, was thought to be an adaptation.
We noted that similar effects might have been established in domesticated animals and pets, and addressed issues of ‘honest signalling’ that this kind of petition for help raised. No implication that no other primate ever supplied or asked for help from any other was intended, nor any claim that animals do not feel pain. Rather, animals would experience pain to the degree it was functional, to escape trauma and minimize movement after trauma, insofar as possible.

Finlay's original article on the topic: "The pain of altruism".

comment by Rob Bensinger (RobbBB) · 2019-09-23T16:56:36.999Z · score: 17 (5 votes)

[Epistemic status: Thinking out loud]

If the evolutionary logic here is right, I'd naively also expect non-human animals to suffer more to the extent they're (a) more social, and (b) better at communicating specific, achievable needs and desires.

There are reasons the logic might not generalize, though. Humans have fine-grained language that lets us express very complicated propositions about our internal states. That puts a lot of pressure on individual humans to have a totally ironclad, consistent "story" they can express to others. I'd expect there to be a lot more evolutionary pressure to actually experience suffering, since a human will be better at spotting holes in the narratives of a human who fakes it (compared to, e.g., a bonobo trying to detect whether another bonobo is really in that much pain).

It seems like there should be an arms race across many social species to give increasingly costly signals of distress, up until the costs outweigh the amount of help they can hope to get. But if you don't have the language to actually express concrete propositions like "Bob took care of me the last time I got sick, six months ago, and he can attest that I had a hard time walking that time too", then those costly signals might be mostly or entirely things like "shriek louder in response to percept X", rather than things like "internally represent a hard-to-endure pain-state so I can more convincingly stick to a verbal narrative going forward about how hard-to-endure this was".

comment by Rob Bensinger (RobbBB) · 2019-05-10T23:13:24.150Z · score: 7 (3 votes)

The wiki glossary for the sequences / Rationality: A-Z ( https://wiki.lesswrong.com/wiki/RAZ_Glossary ) has now been updated with the glossary entries from the print edition of volumes 1-2.

New entries from Map and Territory:

anthropics, availability heuristic, Bayes's theorem, Bayesian, Bayesian updating, bit, Blue and Green, calibration, causal decision theory, cognitive bias, conditional probability, confirmation bias, conjunction fallacy, deontology, directed acyclic graph, elan vital, Everett branch, expected value, Fermi paradox, foozality, hindsight bias, inductive bias, instrumental, intentionality, isomorphism, Kolmogorov complexity, likelihood, maximum-entropy probability distribution, probability distribution, statistical bias, two-boxing

New entries from How to Actually Change Your Mind:

affect heuristic, causal graph, correspondence bias, epistemology, existential risk, frequentism, Friendly AI, group selection, halo effect, humility, intelligence explosion, joint probability distribution, just-world fallacy, koan, many-worlds interpretation, modesty, transhuman

A bunch of other entries from the M&T and HACYM glossaries were already on the wiki; most of these have been improved a bit or made more concise.

comment by Said Achmiz (SaidAchmiz) · 2019-05-11T07:41:08.370Z · score: 6 (3 votes)

This reminds me of something I’ve been meaning to ask:

Last I checked, the contents of the Less Wrong Wiki were licensed under the GNU Free Documentation License, which is… rather inconvenient. Is it at all possible to re-license it (ideally as CC BY-NC-SA, to match R:AZ itself)?

(My interest in this comes from the fact that the Glossary is mirrored on ReadTheSequences.com, and I’d prefer not to have to deal with two different licenses, as I currently have to.)

comment by habryka (habryka4) · 2019-05-11T08:24:11.348Z · score: 4 (2 votes)

I can reach out to Trike Apps about this, but can we actually do this? Seems plausible that we would have to ask for permission from all editors involved in a page before we can change the license.

comment by Said Achmiz (SaidAchmiz) · 2019-05-11T08:50:40.021Z · score: 4 (2 votes)

I have no idea; I cannot claim to really understand the GFDL well enough to know… but if doable, this seems worthwhile, as there’s a lot of material on the wiki which I and others could do various useful/interesting things with, if it were released under a convenient license.

comment by Rob Bensinger (RobbBB) · 2019-05-10T23:19:00.628Z · score: 4 (2 votes)

Are there any other OK-quality rationalist glossaries out there? https://wiki.lesswrong.com/wiki/Jargon is the only one I know of. I vaguely recall there being one on http://www.bayrationality.com/ at some point, but I might be misremembering.

comment by Said Achmiz (SaidAchmiz) · 2019-05-11T07:36:03.246Z · score: 10 (2 votes)

https://namespace.obormot.net/Jargon/Jargon

comment by Rob Bensinger (RobbBB) · 2019-05-11T20:21:03.506Z · score: 2 (1 votes)

Fantastic!

comment by jimrandomh · 2019-05-11T00:31:56.540Z · score: 6 (3 votes)

It's optimized on a *very* different axis, but there's the Rationality Cardinality card database.

comment by Rob Bensinger (RobbBB) · 2019-05-11T20:18:55.445Z · score: 2 (1 votes)

That counts! :) Part of why I'm asking is in case we want to build a proper LW glossary, and Rationality Cardinality could at least provide ideas for terms we might be missing.

comment by Pattern · 2019-05-18T17:03:02.089Z · score: 3 (2 votes)

How would you feel about the creation of a Sequence of Shortform Feeds? (Including this one?) (Not a mod.)

comment by Raemon · 2019-05-18T22:22:58.014Z · score: 3 (1 votes)

I can't speak for Rob, but I'd be fine with my own shortform feed being included.