Posts

Benefitial habits/personal rules with very minimal tradeoffs? 2024-05-13T06:06:14.471Z

Comments

Comment by Slapstick on How likely is it that AI will torture us until the end of time? · 2024-06-01T21:49:10.606Z · LW · GW

Thanks! No pressure to respond

I don't understand why you think suffering is primary outside of particular brain/mind wiring. I hope I'm misunderstanding you. That seems wildly unlikely to me, and like a very negative view of the world.

Basically I think that within the space of all possible varieties and extents of conscious experience, suffering becomes less and less commensurable with positive experience the further you go towards the extremes.

If option (A) is to experience the worst possible suffering for 100 years, prior to experiencing the greatest possible pleasure for N years, and option (B) is non-existence, I would choose option (B), regardless of the value of N.

It appears to be held by people who have suffered much more than they've enjoyed life.

Should this count as evidence against their views? It seems clear to me that if you're trying to understand the nature of qualitative states, first hand experience with extreme states is an asset.

I have personally experienced prolonged states of consciousness which were far worse than non-existence. Should that not play a part in informing my views? Currently I'm very happy, I fear death, and I've experienced extraordinary prolonged pleasure states. Would you suggest I'm just not acquainted with levels of wellbeing which would cause me to meaningfully re-evaluate my view?

I think there's also a sort of meta issue where people with influence are systematically less acquainted with direct experience of the extremes of suffering. Meaning that discourse and decision making will tend to systematically underweight experiences of suffering as a direct data source.

I agree with your last paragraph.

Comment by Slapstick on How likely is it that AI will torture us until the end of time? · 2024-06-01T18:47:08.099Z · LW · GW

I appreciate the thoughtful response and that you seem to take the ideas seriously.

That is a fundamental aspect of how experience works now. That's also a result of evolution wiring us to pay more attention to bad things than good things.

I do think it's a fundamental aspect of how experience works, independently of how our brains are disposed to thinking about it. However, I definitely think it's possible to prophylactically shield our consciousness against the depths of suffering by modifying the substrate. I can't tell whether we're disagreeing or not.

I don't know exactly how to phrase it, but I think a fundamental aspect of the universe is that as suffering increases in magnitude, it becomes less and less clear that there is (or can be) a commensurate value on the positive side which can negate it (trade off against it, even things out). I don't think the same is true of the reverse.

Are you making the claim that this view is a faulty conclusion owing to the contingent disposition of my human brain?

Or are you making the claim that the disposition of my human brain can be modified so as to prevent exposure to the depths of suffering?

Comment by Slapstick on How likely is it that AI will torture us until the end of time? · 2024-06-01T17:09:03.134Z · LW · GW

We're in a Pascal’s mugging situation, but from a negative point of view, where the trade-off is between potential infinite years of suffering and suicide in order to avoid them for sure.

In the past I've struggled deeply with this thought process and I have reasoned my way out of that conclusion. It's not necessarily a more hopeful conclusion but it takes away the idea that I need to make a decision, which I find very comforting.

Ultimately it comes down to the illusory nature of identity.

A super powerful entity would have the power to create bespoke conscious entities for the purpose of inducing suffering.

The suffering of "Future you" is no more or less real than the suffering of future entities in general. The only difference is that your present mind projects a sense of identity and continuity which causes you to believe there's a difference.

The illusory sense that there is continuity of consciousness and identity is evolutionarily advantageous but it fully loses coherence and relevance in the domain you're talking about.

I'm happy to go into more detail if that isn't convincing.

You could think of identity in this case as a special type of sympathetic empathy for a version of yourself in the future, which you wouldn't grant to future entities that aren't "yourself". This is just a feeling that present-you is having, and it has no actual meaningful connection to the entity you'd classify as your future self.

Comment by Slapstick on How likely is it that AI will torture us until the end of time? · 2024-06-01T16:09:58.980Z · LW · GW

Does this assume there is some symmetry between the unimaginably bad outcomes and the unimaginably good outcomes?

It seems very clear to me that the worst outcomes are just so much more negative than the best outcomes are positive. I think that is just a fundamental aspect of how experience works.

Comment by Slapstick on Are most people deeply confused about "love", or am I missing a human universal? · 2024-05-24T20:43:53.366Z · LW · GW

Would you be able to specify a scenario in which the general term for love would lead to dysfunction?

I think generally if people want to signal how they feel about someone they're typically able to do so.

A lot of dysfunction is caused by people being intentionally ambiguous about the extent and quality and conditions of their feelings. In that way people may hide behind the ambiguity of the word love. Communication helps but I'm not sure if the imprecise nature of the word love is a significant barrier to communication.

Comment by Slapstick on my note system · 2024-05-15T04:30:47.238Z · LW · GW

Have you ever used Obsidian? Sounds similar to the method you're describing. If so, what do you think of it? Especially with respect to your preferred workflow?

Comment by Slapstick on Monthly Roundup #18: May 2024 · 2024-05-14T01:33:28.433Z · LW · GW

On the lab grown meat section

For those who are instead principled libertarians who genuinely wouldn’t turn this around on a moment’s notice, well, I am sorry that others have ruined this and so many other principled stands.

I am not sure if I understand what is meant by this, but I'm interpreting it to imply that principled libertarians should be against a ban on meat derived from animals.

I think anyone claiming that ought to also provide a justification as to why non-human animals shouldn't be afforded some basic negative rights within libertarian principles.

To argue that one conscious being should be granted full license to do whatever they want with another conscious being doesn't really strike me as a pro-freedom stance.

Unless you have a reason why it's okay to have an out-group for whom you deny freedoms in order to maximize the freedoms of the in-group? Are libertarian principles just "might makes right" privileging the smallest number of individuals you can get away with?

Comment by Slapstick on Thoughts on seed oil · 2024-05-13T17:35:43.370Z · LW · GW

It's my understanding that the controversy is mostly manufactured by industries with large financial interests in selling foods with added sodium. They pay for misleading/inaccurate studies to be done in order to introduce uncertainty and doubt. Whereas it's my understanding there is a near consensus towards low sodium amongst scientists without direct/indirect industry ties.

I do think there are probably some cases where increasing salt beyond natural levels can be the healthier thing to do given specific health concerns.

Comment by Slapstick on Benefitial habits/personal rules with very minimal tradeoffs? · 2024-05-13T15:55:45.123Z · LW · GW

That one sounds good!

It wouldn't work for me personally because I have a pathological relationship with refined sugar so the only equilibrium which works for me is cutting it out entirely (which has been successful and rewarding though initially very difficult).

Thanks!

Comment by Slapstick on Benefitial habits/personal rules with very minimal tradeoffs? · 2024-05-13T15:41:58.635Z · LW · GW

Oh that's a good one! I mostly follow that one already although I do find value in some unsweetened teas and smoothies. I find personally that the immediate trade-offs to consuming alcohol are enough to ensure I only really drink when it's actually aligned with my interests.

Although I do have a rule for alcohol, which is "don't consume any alcohol unless the people you're currently socializing with are already drinking." I'm not sure exactly how much that rule has helped me, because I've followed it all my life and I don't really like alcohol that much, but maybe that's partially because of the rule.

But yes I think the rule you gave is a really good one, especially when it comes to things like refined sugar. A sugar craving could be satisfied in other ways, so there's relatively small trade-offs in that sense, whereas it's very beneficial not to drink refined calories because it's so easy to consume so much that way while not bringing in any significant nutrition alongside it.

Thanks!

Comment by Slapstick on Dating Roundup #3: Third Time’s the Charm · 2024-05-08T20:28:36.952Z · LW · GW

Very interesting post! I enjoyed it! Just had some thoughts about the poly section.

If you are polyamorous, and you meet someone plausibly 25% better, or even someone 0% better (I mean the person you are with is pretty good, no?) you are honor bound to try and make it happen.

I'm not sure why you'd be honour bound to make that work. Maybe the phrasing is just being hyperbolic but I don't think refraining from pursuing a romantic relationship damages your poly honour.

Most people are not hyper-skilled in anything. Certainly they are not hyper-skilled in communication, emotional regulation and self-awareness.

If you define "hyper-skilled" as "way more skilled than average" then what you're saying is true by definition. If it's not defined relative to everyone else in a given culture, I think you can certainly say most people are hyper-skilled at communication, emotional regulation, and self-awareness in the ways their culture requires of them.

For example, most people in highly religious/authoritarian cultures are adept at those social skills which prevent them from being ostracized and condemned. Not reacting violently to insults would be considered hyper-skilled in some cultures, whereas it's the minimum in others.

With that in mind, I don't think polyamory is as unrealistic or as demanding in its requirements as you make it out to be. People tend to become hyper-skilled socially when it's a requirement for what they're doing, and when it's normalized within their culture. If other structures are in place to replace the requirement for those particular skills, they won't develop.

Polyamory probably selects for people who are socially skilled in the ways that help with polyamory, but being polyamorous also helps to develop those skills.

I think it's fair to say that for many or most people it would be too costly to try to switch from monogamy towards polyamory when they've already been highly invested in developing their monogamy toolbox. I think that's very different from saying only a small percentage of people have the capacity/potential to flourish being poly.

Scott then follows up with a highlights from the comments, where the arguments against polyamory seem convincing

I read most of the comments and I think pretty much all of the arguments against polyamory are coming from monogamous people with very limited/no experience with polyamory or polyamorous people. Not to say that discredits their arguments, but I'm typically pretty sceptical of arguments about lifestyles that are widely considered distasteful, coming from people who are far removed from those lifestyles, based on a couple anecdotes, if any.

Monogamous people are also already having way fewer children, and the type of person deciding to be polyamorous probably correlates pretty strongly with the type of person already deciding not to have kids. I don't think there are really good arguments that kids of poly people will be worse off; most of those arguments refer to practices which aren't essential to being poly. Many of the arguments appeal to reference classes that aren't particularly applicable to a scenario where things are being done intentionally and with care, as opposed to as a result of scarcity, neglect, and unforeseen challenging circumstances.

Comment by Slapstick on Thoughts on seed oil · 2024-05-08T17:55:28.906Z · LW · GW

If you only ate potatoes you wouldn't die from lack of sodium; the average person would probably become healthier eating only potatoes. It's been done, though I'm not endorsing it. Potatoes and water already contain sodium, maybe not quite at the ideal ratio per average calorie, but it's pretty close, or possibly in the right range depending on the person.

We certainly need some sodium/salt but I think the extent to which most people crave salt is a result of miscalibration due to overexposure and adaptations which aren't aligned with our current environment.

I minimize added sodium and I don't really have any cravings for salt anymore, unless you count the cravings I have generally for the food/nutrition I need to sustain myself, which contains roughly enough sodium naturally.

If someone is eating a varied diet of whole foods with no added salt it's possible that adding a very marginal amount of extra salt would be healthier in some cases, but that's very far from what is typical.

Comment by Slapstick on Thoughts on seed oil · 2024-05-08T17:12:30.744Z · LW · GW

I agree that seed oils should be avoided, yes. I am skeptical of explanations pointing to some element particular to seed oils as the main source of obesity and health problems, and I'd worry this might lead people to be less concerned about consuming other unhealthy things.

Comment by Slapstick on So What's Up With PUFAs Chemically? · 2024-04-29T02:45:49.052Z · LW · GW

I'm unsure exactly what points you're making.

I'm saying the idea that it's healthiest to avoid virtually any refined oil is mainstream nutritional understanding. Do you dispute this? I'm not making a point about which refined oils/fats are better than others. I haven't seen anything that has convinced me mainstream nutrition is wrong about that, but I don't think it's particularly important when they can all be avoided.

Typical doctors are not particularly reliable nutritional authorities. They have almost no nutrition training.

McDonald's fries are clearly very unhealthy regardless of what they're fried in. Do you have evidence that they're healthier when fried in beef tallow?

Regardless, the point I was making was that the diets the original commenter mentioned all restrict things that mainstream nutrition already suggests cause health problems.

Refined sugar, refined grains, refined fats, and animal products are all things mainstream nutrition suggests cause health problems. All of the diets listed restrict at least one of those things, so it's not surprising that people would report temporary improvements in health relative to a diet that doesn't restrict any of them.

Comment by Slapstick on So What's Up With PUFAs Chemically? · 2024-04-27T16:19:22.129Z · LW · GW

I am confused by this sort of reasoning. As far as I'm aware, mainstream nutritional science/understanding already points towards avoiding refined oils (and refined sugars).

There are already explanations for why cutting out refined oil would be beneficial.

There are already reasonable explanations for why all of those diets might be reported to work, at least in the short term.

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T15:04:14.901Z · LW · GW

I would consider most bread sold in stores to be processed or ultra-processed, and I think that's a pretty standard view, but it's true there might be some confusion.

Or take traditional soy sauce or cheese or beer or cured meats

I would consider all of those to be processed and unhealthy, and I think that's a pretty standard view, but fair enough if there's some confusion around those things.

So as a natural category "ultra processed" is mostly hogwash.

I guess my view is that it's mostly not hogwash?

The least healthy things are clearly and broadly much more processed than the healthiest things.

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T14:51:18.149Z · LW · GW

I typically consume my greens with ground flax seeds in a smoothie.

I feel very confident that adding refined oil to vegetables shouldn't be considered healthy, in the sense that the opportunity cost of one tablespoon of olive oil is 120 calories, which is over a pound of spinach, for example. Certainly it's difficult to eat that much spinach and it's probably unwise, but I say that to illustrate that you can get a lot more nutrition from 120 calories than the oil adds, even if it makes the greens more bioavailable.

That said "healthy" is a complicated concept. If adding some oil to greens helps something eat greens they otherwise wouldn't eat for example, that's great.

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T14:22:58.806Z · LW · GW

I am perhaps not speaking as precisely as I should be. I appreciate your comments.

I believe it's correct to say that if you consider all of the food/energy we consumed in the past 50+ million years, it's virtually all plants.

The past 2-2.5 million years had us introducing more animal products to greater or lesser extents. Some were able to subsist on mostly animal products. Some consumed them very rarely.

In that sense it is a relatively recent introduction. My main point is that given our evolutionary history, the idea that plants would be healthier for us than animal products when we have both in abundance, and the idea that plants are more suitable to maintaining health long past reproductive age, aren't immediately/obviously unreasonable ideas.

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T01:52:14.787Z · LW · GW

I would consider adding salt to something to be making that thing less healthy. If adding salt is essential to making something edible, I think it would be healthier to opt for something that doesn't require added salt. That's speaking generally though, someone might not be getting enough sodium, but typically there is adequate sodium in a diet of whole foods.

We often combine foods to make nutrients more accessible, like adding oil to greens with fat-soluble vitamins.

I would disagree that adding refined oil to greens would be healthy overall.

Not sure how much oil we're talking, but a tablespoon of oil has more calories than an entire pound of greens. Even if the oil increases the availability of vitamins, I am very sceptical that it would be healthier than greens or other whole plants with an equivalent caloric content to the added oil. I believe it's also the case that fats from whole foods can offer similar bioavailability effects.

At the same time, as far as I'm aware some kinds of vinegar might sometimes be a healthy addition to a meal, despite its processing being undoubtedly contrary to the general guidelines I'm defending, so even if I don't agree about the oil I think the point still stands.

I do think you're offering some valid points that confound my idea of simple guidelines somewhat, but I still don't think they're very significant exceptions to my main point.

Appreciate the dialogue:)

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T00:43:16.476Z · LW · GW

I think we're pretty confident that refined oils are unhealthy (especially in larger quantities); I believe there's just controversy about the magnitude of explanatory power given to seed oils.

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T00:32:23.596Z · LW · GW

There are some simple processes that make it easier or possible to digest whole foods that would otherwise be difficult or impossible to healthily digest, but I don't really think there's meaningful confusion as to whether that's what the term "processed foods" refers to.

Could you offer some examples of healthy or better-for-us foods that are processed, such that there would be meaningful confusion around the idea that it's healthy to avoid processed foods, as that term is typically used?

I can think of some, but definitely not anything of enough consequence to help me to understand why people here seem so critical of the concept of reducing processed foods as a health guideline.

Comment by Slapstick on Thoughts on seed oil · 2024-04-25T00:13:28.630Z · LW · GW

I had just searched Google for ways to make olives edible and got some mixed results. The point I was trying to make was that the way olives are typically processed to make them edible results in a product that isn't particularly healthy, at least relatively speaking, due to having isolated chemical(s) added during processing.

The main thing I'm trying to say is that eating an isolated component of something we're best adapted to eat, and/or adding isolated/refined components to that food, will generally make that food less healthy than it would be were we eating all of the components of the food rather than isolated parts.

I think that process, and more complex variations of that process, are essentially what's being referred to when referring to the process behind processed foods. I think it's a generally reasonable term with a solid basis.

Comment by Slapstick on Thoughts on seed oil · 2024-04-24T23:50:04.192Z · LW · GW

I don't know enough to dispute the ratios of animal products eaten by people in the paleolithic era, but it's still certainly true that throughout our evolutionary history plants made up the vast majority of our diets. The introduction of animal products as a significant part of our diet is a relatively recent thing.

The fact that fairly recently in our evolutionary history humans adapted to be able to exploit the energy and nutrition content of animal products well enough to get past reproductive age, is by no means overwhelming evidence that saturated fats "can't possibly be bad for you".

Although the connection between higher fat diets and negative health outcomes is then another inferential step that hasn't been strongly supported

How would you define strongly supported?

We don't have differential analysis of the resulting health

There is archeological evidence of Arctic peoples subsisting on meat showing atherosclerosis.

Comment by Slapstick on Thoughts on seed oil · 2024-04-24T00:46:15.321Z · LW · GW

A cooked food could technically be called a processed food but I don't think that adds much meaningful confusion. I would say the same about soaking something in water.

Olives can be made edible by soaking them in water. If they're made edible by soaking in a salty brine (salt being an isolated component, one found in whole foods in more suitable quantities), then they're generally less healthy.

Local populations might adapt by finding things that can be heavily processed into edible foods which can allow them to survive, but these foods aren't necessarily ones which would be considered healthy in a wider context.

Comment by Slapstick on Thoughts on seed oil · 2024-04-23T20:33:08.421Z · LW · GW

It seems pretty straightforward to me but maybe I'm missing something in what you're saying or thinking about it differently.

Our bodies evolved to digest and utilize foods consisting of certain combinations/ratios of component parts.

Processed food typically refers to food that has been changed to have certain parts taken out of it, and/or isolated parts of other foods added to it (or more complex versions of that). Digesting sugar has very different impacts depending on what it's digested alongside with. Generally the more processed something is, the more it differs from the way that our bodies are optimized for.

To me "generally avoid processed foods" would be kinda like saying "generally avoid breathing in gasses/particulates that are different from typical earth atmosphere near sea level".

It makes sense to generally avoid inputs to our machinery to the extent that those inputs differ from those which our machinery is optimized to receive, unless we have specific good reasons.

Why should that not be the default, why should the default be requiring specific good reasons to filter out inputs to our machinery that our machinery wasn't optimized for?

Comment by Slapstick on Thoughts on seed oil · 2024-04-23T19:50:54.682Z · LW · GW

How can saturated fats, the main ingredients in breast milk and animal products, be bad for humans (an apex predator)? Was eating animals really giving our hunter gatherer ancestors heart attacks left and right?

I think there's a few issues with this reasoning.

For one thing, evolution wasn't really optimizing for the health of people around the age where people usually start having heart attacks. There wasn't a lot of selection pressure to make tradeoffs ensuring the health of people 20+ years after sexual maturity.

Another point is that animal sources of food represented a relatively small percentage of what we ate throughout our evolutionary history. We mostly ate plants, things like fruits and tubers. Of the groups whose diets consisted mostly of meat, there is evidence of resulting health issues.

The nutritional profile of breast milk is intended for a human who is growing extremely quickly, not for long term consumption by an adult. Very different nutritional needs.

Similarly for seed oils, through them we're eating such a ridiculous amounts of PUFA; something that would be quite impossible in the ancestral environment. How can our bodies possibly be adapted to cope with that?

I believe mainstream nutrition advises against consuming refined oils, including seed oils. I may be missing a point you're making.

Comment by Slapstick on Thoughts on seed oil · 2024-04-21T22:40:12.247Z · LW · GW

I'm not sure I understand why the experience you're describing gives an update towards these seed oil theories when it seems generally consistent with already understood health and nutrition knowledge.

Is it particularly surprising that someone experiences some health problems after switching from a diet low in refined/processed ingredients to one high in those ingredients, while also undergoing the stress of being drafted into the military? (I would be very stressed though I shouldn't assume)

Standard nutrition might be insufficient to explain the extent and speed at which the health issues occurred, but then likewise the seed oil theories would be insufficient to explain why more drafted soldiers aren't quickly developing those same health issues.

Comment by Slapstick on Transformative trustbuilding via advancements in decentralized lie detection · 2024-03-17T17:24:47.198Z · LW · GW

I am sceptical about the role you describe for alcohol, and the dynamics around it, as a form of lie detector, but I know there's a range of social dynamics I haven't necessarily been exposed to in my culture.

I have been in various groups that heavily drink on occasion, but I've never seen any evidence of people being viewed as having something to hide were they not to drink.

I think alcohol might make people more honest but I think it's usually things they already wanted to divulge but for lack of some courage or sense of emotional intimacy that alcohol can provide. It's hard for me to imagine alcohol playing a similar role as a lie detector for significant factual information people strongly want to hide.

Could you offer any examples of where a real lie detector would be valuable in friendships or potential friendships?

A lot of the things I might want to know seem challenging to address via a lie detector. "Will you do anything violent, or steal, or intentionally damage my property?" People likely to do those things might honestly intend not to.

I could see it potentially being useful for people having sex more on the casual side.

Comment by Slapstick on Transformative trustbuilding via advancements in decentralized lie detection · 2024-03-16T04:59:14.114Z · LW · GW

high-trust friend groups

I'm having a hard time imagining a scenario in which I would find this valuable in my friend groups. If I were ever unsure whether I could trust the word of a friend on an important matter, I'd think that would represent deeper issues than a mere lack of information a scan of their brain could provide. Perhaps I'm naive, or particular in some way in how I filter people.

Do you have examples for how this would aid friendships? Or the other domains you mentioned?

I could see it being very valuable but I also find the idea very frightening, and I am not someone who lies.

Comment by Slapstick on Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. · 2024-02-25T05:36:47.525Z · LW · GW

And it says something about EITHER the unreliability of intuitions beyond run-of-the-mill situations, or about the insane variance in utility functions across people (and likely time)

I don't think it's really all that complicated. I suspect that you haven't experienced a certain extent of negative valence which would be sufficient to update you towards understanding how bad suffering can get.

It would be like if you've never smelled anything worse than a fart, and you're trying to gauge the mass of value of positive smells against the mass of value of negative smells. If you were trying to estimate what it would be like in a small room full of decaying dead animals and ammonia, or how long you'd willingly stay in that room, your intuitions would completely fail you.

but only minor changes in value toward the tails.

I have experienced qualia that was just slightly net negative, such that non-existence felt preferable, all else equal. Then I've experienced states of qualia that were immensely worse than that. The distance between those two states is certainly far greater than the distance between neutral and extreme pleasure/fulfillment/euphoria etc. Suffering can just keep getting worse and worse, far beyond the point at which all you can desire is to cease existing.

Comment by Slapstick on Balancing Games · 2024-02-24T23:20:37.402Z · LW · GW

I think one reason I don't like that sort of thing is that there's more ambiguity in "what it took to win the game".

It's hard to know whether an artificial advantage is proportional to the skill gap. If I win, I won't know the extent to which I should attribute that win to good play (that I ought to be proud of, and that will impress others), vs. attributing the win to a potentially greater-than-1/N chance of winning (that I came by artificially).

If greater skill is the absolute advantage that leads me to a win, I will discount the achievement on account of having an absolute advantage, but I'll still feel satisfied that I have achieved a relatively higher skill level.

If an improperly calibrated handicap is the absolute advantage that leads me to a win, it's a win I'd discount on account of there being an absolute advantage, but in this case I'd garner no satisfaction from having an (artificial) absolute advantage.

What's more, the win might feel insulting or condescending if I was given a disproportionately large advantage due to my friends'/competitors' underestimation of my expected quality of play.

My win will also not necessarily give my competitors an update as to whether they underestimated my expected quality of play.

If the expectation is that I will win 1/N times, they won't update on my skill level if I win. (Maybe very slightly, and eventually as you play more games)

If I win when the odds are against me, people update significantly on my expected quality of play.

It feels good to know people are updating favourably on my expected quality of play.

Comment by Slapstick on Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. · 2024-02-24T21:39:41.616Z · LW · GW

Interesting. It is an abstract hypothetical, but I do think it's useful, and it reveals something about how far apart we are in our intuitions/priors.

I wouldn't choose to live a year in the worst possible hell in exchange for 1000 years in the greatest possible heaven. I don't think I would even take the deal in exchange for an infinite amount of time in the greatest possible heaven.

I would conclude that the experience of certain kinds of suffering reveals something significant about the nature of consciousness that can't be easily inferred, if it can be inferred at all.

I’m more confident that I’d spend a year as a bottom-5% happy human in order to get a year in the top-5%

I would guess that the difference between .001 percentile happy and 5th percentile happy is larger than the difference between the 5th percentile and 100th percentile. So in that sense it's difficult for me to consider that question.

None of these are actual choices, of course. So I’m skeptical of using these guesses for anything important

I think even if they're abstract, semi-coherent questions, they're very revealing, and I think they're very relevant to prioritization of s-risks, allocating resources, and issues such as animal welfare.

It makes it easier for me to understand how otherwise reasonable seeming people can display a kind of indifference to the state of animal agriculture. If someone isn't aware of the extent of possible suffering, I can see why they might not view the issue with the same urgency.

Comment by Slapstick on Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. · 2024-02-24T06:05:28.449Z · LW · GW

Would you spend a year in the worst possible hell in exchange for a year in the greatest possible heaven?

Comment by Slapstick on Why you, personally, should want a larger human population · 2024-02-24T00:23:11.368Z · LW · GW

I think this is a good summary of a lot of the arguments for increased population, even if my view is different.

I think most of the benefits you're describing flow from a very tiny fraction of all humans.

Given the returns to specialization, populations have traditionally had to grow in order to support the efforts of that tiny fraction. However, it's not necessarily the case that in the coming years increasing population is the only way to increase the number of specialized individuals producing massive value.

Automation will make it easier to specialize.

Comment by Slapstick on Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. · 2024-02-23T23:48:24.592Z · LW · GW

The rate of suicide is really quite low. You ARE being offered the choice between an unknown length of continued experiences, and cessation of such.

I think the expected value of the rest of my life is positive (I am currently pretty happy), especially considering impacts external to my own consciousness. If that stops being the case, I have the option.

There's also strong evolutionary reasons to expect suicide rates to not properly reflect the balance of qualia.

As embedded agents, our views are contingent on our experiences, and there is no single truth to this question.

It's hard to know exactly what this is implying. Sure it's based on personal experience that's difficult to extrapolate and aggregate etc. But I think it's a very important question. Potentially the most important question. Worth some serious consideration.

People are constantly making decisions based on their marginal valuations of suffering and wellbeing, and the respective depths and heights of each end of the spectrum. These decisions can and do have massive ramifications.

So I can try to understand your view better, would you choose to spend one year in the worst possible hell if it meant you got to spend the next year in the greatest possible heaven?

Given my understanding of your expressed views, you would accept this offer. If I'm wrong about that, knowing that would help with my understanding of the topic. If you think it's an incoherent question, that would also improve my understanding.

Feel free to disengage, I just find limited opportunities to discuss this. If anyone else has anything to contribute I'd be happy to hear it.

Comment by Slapstick on Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. · 2024-02-23T22:31:38.187Z · LW · GW

Thanks for answering. I would personally expect this intuition and introspection to be sensitive to contingent factors like the range of experiences you've had, would you agree?

Personally my view leans more in the other direction, although it's possible I'm losing something in misunderstanding the complexity variable.

If my life experience leads me to the view that 'suffering is worse than wellbeing is good', and your life experiences lead you towards the opposite view, should those two data points be given equal weight? I personally would give more weight to accounts of the badness of suffering, because I see a fundamental asymmetry there, but would you say that's a product of bias from my particular set of experiences?

If I were to be offered 300 years of overwhelmingly positive complex life in exchange for another ten years of severe anhedonic depression, I would not accept that offer. It wouldn't even be a difficult choice.

Assuming you would accept that offer for yourself, would you accept that offer on behalf of someone else?

Comment by Slapstick on Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. · 2024-02-23T17:26:10.814Z · LW · GW

Why do you think/suspect that?

Comment by Slapstick on A Question For People Who Believe In God · 2023-11-24T18:54:21.089Z · LW · GW

I think I agree with this, but I also think it's really important to avoid making too many assumptions about what people believe when they say they're religious or practice religion. People often use similar language and labels to signify a very broad range of beliefs and views.

Comment by Slapstick on A Question For People Who Believe In God · 2023-11-24T16:50:58.569Z · LW · GW

I have a very religious background but currently I'm not sure whether you would consider me religious. (Also to be clear I watched most of the video but I don't know much about him otherwise)

I think when hearing people share about personal things in the category of religion, it's important to try to be careful when pattern matching or when making assumptions about what beliefs people hold. People can use very similar words to refer to vastly different metaphysical beliefs. Two people could also have very similar metaphysical priors, and one might use more religious coded language due to their cultural background, whereas the other might not.

When he says that prayer works, I don't think you should necessarily take him to mean that he is communicating with an omni*** being who is making changes to our reality based on that communication.

For all intents and purposes he would probably concede to reducing prayer to a form of psychologically therapeutic meditation. However, I think part of adopting a religious attitude is a hesitancy towards being reductive in that way.

Anyway, it got me thinking, when someone says they "believe in God" does this mean something like "I assign a ≥ 50% probability to there being an omnipotent omnipresent and omniscient intelligence?"

This is a good question, but how someone responds will vary a lot person to person, and it will be very difficult to converge on a common enough understanding of the meaning of words sufficient to get a clear answer.

For many people, it's more about adopting a kind of mental attitude, rather than something that can easily be understood by trying to clarify a probability.

Then, many people would just unequivocally answer that they assign a greater than 50% probability. Many of those people would go further to say that they'd assign a 100% probability. There's certain kinds of experiences that have a sort of self evident transcendent seeming quality to them, I've had these experiences, so it's easy for me to understand why some religious people would interpret that as a kind of ontological evidence for their own specific views. I just think they're making an error.

Comment by Slapstick on It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood · 2023-11-14T21:42:25.063Z · LW · GW

I don't think the title of this post is consistent with your self professed epistemic status, or the general claims you make.

You seem to be stating that, in your (non-expert) opinion, some EAs are overconfident in the probabilities they'd assign to shrimp having the capacity to experience qualia?

Even if we assume that's correct, it doesn't imply that it's okay to eat shrimp. It just means there's more uncertainty.

Comment by Slapstick on It's OK to be biased towards humans · 2023-11-13T01:43:11.119Z · LW · GW

I don't think the thermometer is suffering.

I think it's not necessarily easy to know when something is suffering from the outside, but I still think it's the best standard.

most multicellular animals clearly have, but still we don't give them the right to copyright

I possibly should have clarified that I'm more so talking about the standard for moral consideration. If we ever created an AI entity capable of making art that also has the capacity for qualia states, I don't think copyright will be relevant anymore.

We raise and kill certain animals for their meat, in large numbers

We shouldn't be doing this.

we just require that this is done without unnecessary cruelty.

This isn't true for the vast majority of industrial agriculture. In practice there are virtually no restraints on the treatment of most animals.

My proposal is that it should be a combination of a) being the outcome of Darwinian evolution that makes not getting your preferences into 'suffering', and b) the capacity for sufficient intelligence (over some threshold) that entitles you to related full legal rights

Why Darwinian evolution? Because it's hard to know if it's suffering otherwise?

I think rights should be based on capacity for intelligence in certain circumstances where it's relevant. I don't think a pig should be able to vote in an election, because it wouldn't be able to comprehend that, but it should have the right not to be tortured and exploited.

Comment by Slapstick on It's OK to be biased towards humans · 2023-11-12T16:45:14.133Z · LW · GW

But very few people seem to go along the principle of "granting privileges to humans is fine, actually".

Because you're using "it's fine to arbitrarily prioritize humans morally" as the justification for this privilege. At least that's how I'm understanding you.

If you told me it's okay to smash a statue in the shape of a human, because "it's okay to arbitrarily grant humans the privilege of not being smashed, on account of their essence of humanness, and although this statue has some human qualities, it's okay to smash it because it doesn't have the essence of humanness"

I would take issue with your reasoning, even though I wouldn't necessarily have a moral problem with you smashing the statue. I would also just be very confused about why that type of reasoning would be relevant in this case. I would take issue with you smashing an elephant because it isn't a human.

I disagree, I can imagine entities who experience such states and that I still cannot possibly coexist with. And if it's me or them, I'd rather me survive.

I'm sure there are also humans that you cannot possibly coexist with.

I'm also just saying that's the point at which it would make sense to start morally considering an art generator. But even so, I reject the idea that the moral permissibility of creating art is based on some privilege granted to those containing some essential trait.

I don't think the moral status of a process will ever be relevant to the question of whether art made from that process meets some standard of originality sufficient to repel accusations of copyright infringement.

Comment by Slapstick on It's OK to be biased towards humans · 2023-11-12T15:25:38.679Z · LW · GW

We shouldn't create it, and if we do, we should end its existence, or reprogram it if possible. I don't think any of those things are inconsistent with centering moral consideration around the capacity to experience suffering and wellbeing.

Comment by Slapstick on It's OK to be biased towards humans · 2023-11-12T01:54:03.130Z · LW · GW

Well, the distinction never mattered until now, so we can't really say what have we been doing. Now it matters how we interpret our previous intent, because these two things have suddenly become distinct

Even if we assume that this is some privilege granted to humans because they're human, it doesn't make sense to debate whether a human-like process should be granted the same privilege on account of the similar process. Humans would be granted the privilege because they have an interest in what the privilege grants. An algorithmic process doesn't necessarily have an interest no matter how similar the process is to a human process, so it doesn't make sense to grant it the privilege.

If the algorithmic process does have an interest, then it might make sense to grant it the privilege. At that point, though, it would seem like a convoluted means of adjudicating copyright law. Also, if we've advanced to the point at which AIs have actual subjective interests, I don't think copyright laws will matter much.

What moral consideration isn't on some level arbitrary? Why is this or that value a better inherent indicator of worth than just being human at all? I think even if your goal is to just understand better and formalize human moral intuitions, then obviously something like "intelligence" simply doesn't cut it.

I think the capacity to experience qualitative states of consciousness, (e.g. suffering, wellbeing) is what should be considered when allocating moral consideration.

Comment by Slapstick on It's OK to be biased towards humans · 2023-11-11T22:08:53.479Z · LW · GW

AIs have some property that is "human-like", therefore, they must be treated exactly as humans

Humans aren't permitted to make inspired art because they're human, we've just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.

The argument isn't that the AI is sufficiently "human-like", it's just that the process by which AI makes art is considered sufficiently similar to a process we already consider permissible.

I disagree that arbitrary moral consideration is okay, but I just don't think that issue is really that relevant here.

Comment by Slapstick on It's OK to be biased towards humans · 2023-11-11T21:30:06.557Z · LW · GW

Why not capacity to suffer?

Comment by Slapstick on Vote on Interesting Disagreements · 2023-11-09T18:52:57.483Z · LW · GW

I would love to see some sort of integration of the pol.is system, or similar features

Comment by Slapstick on Vote on Interesting Disagreements · 2023-11-09T16:49:09.577Z · LW · GW

All else equal, a unit of animal suffering should be accorded the same moral weight as an equivalent unit of human suffering. (i.e. equal consideration for equal interests)

Comment by Slapstick on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-11-01T22:10:50.723Z · LW · GW

When trying to model your disagreement with Martin and his position, I think the best sort of analogy I can think of is that of tobacco companies employing 'fear, uncertainty, and doubt' tactics in order to prevent people from seriously considering quitting smoking.

Smokers experience cognitive dissonance when they have strong desires to smoke, coupled with knowledge that smoking is likely not in their best interest. They can suppress this cognitive dissonance by changing their behaviour and quitting smoking, or by finding something that introduces sufficient doubt about whether that behaviour is in their self-interest, the latter being much easier. They only need a marginal amount of uncertainty and doubt in order to suppress the dissonance, because their reasoning is heavily motivated, and that's all tobacco companies needed to offer.

I think Martin is essentially trying to make a case that your post(s) about veganism are functionally providing sufficient marginal 'uncertainty and doubt' for non-vegans to suppress any inclination that they ought to reconsider their behaviour. Even if that isn't at all the intention of the post(s), or a reasonable takeaway (for meat eaters).

I think this explains much or most of the confusing friction which came up around your posts involving veganism. Vegans have certain intuitions regarding the kinds of things that non-vegans will use to maintain the 'uncertainty and doubt' required to suppress the mental toll of their cognitive dissonance. So even though it was hard to find explicit disagreement, it also felt clear to a lot of people that the framing, rhetorical approach, and data selection of the post(s) would mostly have the effect of giving readers license to forgo reckoning with the case for veganism.

So I think it's relevant whether one affords animals a higher magnitude of moral consideration, or has internalized an attitude which places animals in the in-group. However, I don't think that accounts for everything here.

Some public endeavors in truth seeking can satisfy the motivated anti-truth seeking of people encountering it. I interpret the top comment of this post as evidence of that.

I'm not sure if I conveyed everything I meant to here, but I think I should make sure the main point here makes sense before expanding.

Comment by Slapstick on Book Review: Going Infinite · 2023-10-27T18:52:14.336Z · LW · GW

Interesting topic

I think that unless we can find a specific causal relationship implying that the capacity to form social bonds increases overall well-being capacity, we should assume that attaching special importance to this capacity is merely a product of human bias.

Humans typically assign an animal's capacity for wellbeing and meaningful experience based on a perceived overlap, or shared experience. As though humans are this circle in a Venn diagram, and the extent to which our circle overlaps with an iguana's circle is the extent to which that iguana has meaningful experience.

I think this is clearly fallacious. An iguana has their own circle, maybe the circle is smaller, but there's a huge area of non-overlap that we can't just entirely discount because we're unable to relate to it. We can't define meaningful experience by how closely it resembles human experience.