jayterwahl's Shortform

post by jayterwahl · 2022-11-22T21:34:14.680Z · LW · GW · 4 comments


comment by jayterwahl · 2022-11-22T21:34:14.934Z · LW(p) · GW(p)

Move Over, Insect Suffering: The Insect Vibing Hypothesis

I’m pretty bullish on “insect suffering” actually being the hedonic safe haven in the planet’s moral portfolio.

So as a K-selected species, our lives are pretty valuable in terms of parental investment, time to reproductive maturity, and how long we expect to live. As such, the neuroscience of human motivation is heavily tilted toward harm avoidance: I think the studies peg the felt intensity of gains versus losses at about +1/−2.5, so a loss stings roughly two and a half times as much as an equal gain feels good. (And maybe the average human life is hedonically net-negative for this reason; I go back and forth on that.)
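To put a number on that asymmetry (a back-of-the-envelope sketch, using the rough +1/−2.5 figures above, which I'm not confident in): a fair coin flip for equal stakes comes out felt-negative,

$$0.5 \times (+1) + 0.5 \times (-2.5) = -0.75,$$

which is presumably why humans instinctively refuse fair bets.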

But in an r-selected species, we see all these so-reckless-they’re-suicidal behaviors. A fly is hell-bent on landing on our food despite how huge and menacing we are. It really wants that food! A single feeding opportunity is huge relative to the fly’s expected lifespan, and if the well-fed fly can go breed, it’s gonna pop out a thousand kids. Evolutionary jackpot!

But how much must the fly enjoy that food, and how little must it fear death, for us to see the behaviors we see?

I suspect r-selected species are actually experiencing hedonically positive lives, and, serendipitously, they outnumber us a bajillion to one.

Earth is a happy glowing ball of joyously screwing insects, and no sad apes can push that into the negative.

comment by jayterwahl · 2023-02-14T22:13:37.332Z · LW(p) · GW(p)

"Things we can't talk about" in a relationship is another form of technical debt

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-02-14T22:36:54.196Z · LW(p) · GW(p)

Maybe "social debt" would be a more appropriate phrase?

comment by jayterwahl · 2023-01-27T19:38:20.337Z · LW(p) · GW(p)

I was a negative utilitarian for two weeks because of a math error

So I was like, 

If the neuroscience of human hedonics is such that we experience pleasure at a valence of about +1 and suffering at about −2.5,

And therefore an AI building a glorious transhuman utopia would get us to +1 gigapleasure, and an endless S-risk hellscape would get us to −2.5 gigapain,

And we don’t know what our future holds, 

And, setting aside that the most likely AI outcome is still overwhelmingly “paperclips”,

If the remaining odds are 1:1 between ending up in Friendship Is Optimal heaven versus UNSONG hell,

Then you should kill yourself (and everyone else) swiftly to avoid that EV-negative bet.
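For concreteness, a sketch of the EV arithmetic implied above, assuming those 1:1 odds and treating a swift death as a guaranteed 0 (my implicit baseline):

$$\mathbb{E}[\text{gamble}] = 0.5 \times (+1) + 0.5 \times (-2.5) = -0.75 < 0,$$

so the gamble looks strictly worse than locking in the 0.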

 

(noting the mistake is left as an exercise for the reader)