What fact do you know is true that most people aren't ready to accept?
post by lorepieri (lorenzo-rex) · 2023-02-03T00:06:42.460Z · LW · GW · 210 comments
Understanding and updating beliefs on deeply ingrained topics can take enormous effort, and sometimes it is so hard that the listener cannot even in principle accept the new reality. The listener is simply not ready; they lack the vast background of reasoning leading to the new understanding.
What fact do you know is true that most people aren't ready to accept?
By "you know is true" I really mean "you are very confident to be true".
Feel free to use a dummy account.
210 comments
Comments sorted by top scores.
comment by justinpombrio · 2023-02-03T03:56:09.367Z · LW(p) · GW(p)
By far the biggest and most sudden update I've ever had is Dominion, a documentary on animal farming:
https://www.youtube.com/watch?v=LQRAfJyEsko
It's like... I had a whole pile of interconnected beliefs, and if you pulled on one, it would snap most of the way back into place afterward. And Dominion pushed the whole pile over at once.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-03T06:27:52.009Z · LW(p) · GW(p)
What was the update? In what direction?
↑ comment by Viliam · 2023-02-03T12:37:00.908Z · LW(p) · GW(p)
I suppose the update was that if someone describes meat production as "animals live a happy life on a farm, well-fed and taken care of, and then one day they are relatively painlessly killed" (which most people seem to believe, or at least pretend to believe), it is complete bullshit (like, hypothetically possible, but most likely applies to less than 1% of the meat you eat).
↑ comment by jbash · 2023-02-03T15:26:42.099Z · LW(p) · GW(p)
I didn't and don't think very many people believe that or ever have.
↑ comment by Seth Herd · 2023-02-03T23:30:08.293Z · LW(p) · GW(p)
You think people eat meat despite knowing the animals are essentially tortured? Or that their beliefs are just less extreme?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-04T07:19:37.201Z · LW(p) · GW(p)
I eat meat and know all of this stuff about factory farming, AMA.
↑ comment by lc · 2023-02-04T07:27:38.817Z · LW(p) · GW(p)
Did you watch the documentary?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-04T07:52:30.029Z · LW(p) · GW(p)
This particular one? Nah. It’s two hours, I don’t expect it to tell me anything I don’t already know, and video is a uniquely bad medium for efficiently learning facts. (If there are specific, like, five-minute-long sections of the video which you think contain likely-novel information, I’ll watch them upon request. But really, I’ve seen this sort of thing many times before.)
↑ comment by lc · 2023-02-04T07:53:54.398Z · LW(p) · GW(p)
I don’t expect it to tell me anything I don’t already know
I disagree and think you should watch it.
Do you think the animals you eat have inner lives and are essentially tortured, or something else?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-04T08:37:39.478Z · LW(p) · GW(p)
I don’t expect it to tell me anything I don’t already know
I disagree and think you should watch it.
… how would you know?
Alright, how about this: name your choice of: (a) one key fact that the video conveys, which you think I’ll find surprising, or (b) one five-minute section of the video, whose contents you think I’ll find surprising.
Do you think the animals you eat have inner lives and are essentially tortured, or something else?
I don’t think that the animals I eat “have inner lives” in any way resembling what we mean when we say that humans “have inner lives”. It’s not clear what the word “torture” might mean when applied to such animals (that is, the meaning is ambiguous, and could be any of several different things), but certainly none of the things that we (legally) do with animals are bad for any of the important reasons why torture of people is bad.
↑ comment by Heighn · 2023-02-05T15:27:06.884Z · LW(p) · GW(p)
"but certainly none of the things that we (legally) do with animals are bad for any of the important reasons why torture of people is bad."
That seems very overconfident to me. What are your reasons for believing this, if I may ask? What quality or qualities do humans have that animals lack that makes you certain of this?
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-05T20:15:07.887Z · LW(p) · GW(p)
Sorry, could you clarify? What specifically do you think I’m overconfident about? In other words, what part of this are you saying I could be mistaken about, the likelihood of which mistake I’m underestimating?
Are you suggesting that things are done to animals of which I am unaware, which I would judge to be bad (for some or all of the same reasons why torture of people is bad) if I were aware of them?
Or something else?
EDIT: Ah, apologies, I just noticed on a re-read (was this added via edit after initial posting?) that you asked:
What quality or qualities do humans have that animals lack that makes you certain of this?
This clarifies the question.
As for the answer, it’s simple enough: sentience (in the classic sense of the term)—a.k.a. “subjective consciousness”, “self-awareness”, etc. Cows, pigs, chickens, sheep… geese… deer… all the critters we normally eat… they don’t have anything like this, very obviously. (There’s no reason why they would, and they show no sign of it. The evidence here is, on the whole, quite one-sided.)
Since the fact that humans are sentient is most of what makes it bad to torture us—indeed, what makes it possible to “torture” us in the first place—the case of animals is clearly disanalogous. (The other things that make it bad to torture humans—having to do with things like social structures, game-theoretic incentives, etc.—apply to food animals even less.)
↑ comment by Heighn · 2023-02-06T19:27:34.563Z · LW(p) · GW(p)
(No edit was made to the original question.)
Thanks for your answer!
I (strongly) disagree that sentience is uniquely human. It seems to me a priori very unlikely that this would be the case, and evidence does exist to the contrary. I do agree sentience is an important factor (though I'm unsure it's the only one).
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-06T20:23:44.045Z · LW(p) · GW(p)
I didn’t say that sentience is uniquely human, though.
Now, to be clear: on the “a priori very unlikely” point, I don’t think I agree. I don’t actually think that it’s unlikely at all; but nor do I think that it’s necessarily very likely, either. “Humans are the only species on Earth today that are sentient” seems to me to be something that could easily be true, but could also easily be false. I would not be very surprised either way (with the caveat that “sentience” seems at least partly to admit of degrees—“partly” because I don’t think it’s fully continuous, and past a certain point it seems obvious that the amount of sentience present is “none”, i.e. I am not a panpsychist—so “humans are not uniquely sentient” would almost certainly not be the same thing as “there exist other species with sentience comparable to humans”).
But please note: nothing in the above paragraph is actually relevant to what we’ve been discussing in this thread! I’ve been careful to refer to “animals I eat”, “critters we normally eat”, “food animals”, listing examples like pigs and sheep and chickens, etc. Now, you might press me on some edge cases (what about octopuses, for instance? those are commonly enough found as food items even in the West), but on the whole, the distinction is clear enough.
Dolphins, for example, might be sentient (though I wouldn’t call it a certainty by any means), and if you told me that there’s an industry wherein dolphins are subjected to factory-farming-type conditions, I’d certainly object to such a thing almost as much as I object to, e.g., China’s treatment of Uyghurs (to pick just one salient modern example out of many possible such).
But I don’t eat any factory-farmed dolphins. And the topic here, recall, is my eating habits. Neither do I eat crows, octopuses (precisely for the reason that I am not entirely confident about their lack of sentience!), etc.
↑ comment by Heighn · 2023-02-06T20:34:08.536Z · LW(p) · GW(p)
I apologize, Said; I misinterpreted your (clearly written) comment.
Reading your newest comment, it seems I actually largely agree with you - the disagreement lies in whether farm animals have sentience.
↑ comment by [deleted] · 2023-02-06T20:36:42.675Z · LW(p) · GW(p)
It's kind of funny and hypocritical that we measure our guilt based on sentience.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-02-06T22:03:23.279Z · LW(p) · GW(p)
Could you elaborate? It seems to me more accurate to say that whether there is, in fact, any “guilt” is dependent on whether there’s sentience. Where is the hypocrisy?
↑ comment by nim · 2023-02-06T01:12:37.848Z · LW(p) · GW(p)
I hope it's all right to butt in here -- I think the animals I eat have inner lives, and the ones I raise for food are less tortured than the ones who live on factory farms, and also less tortured than those who live without any human influence. I think that animals who live wild in nature are also "essentially tortured" -- those which don't freeze or get eaten in infancy die slowly and/or painfully to starvation or predation when their health eventually falters or they get unlucky.
I think the humans who supply the world with processed food have inner lives and are essentially tortured by their circumstances, also. I think the humans who produce the commercial foods I eat, at all stages of the supply chain, are quantifiably and significantly less happy due to participating in that supply chain than they would be if they didn't feel that they "had to" do that work.
If I were to use "only eat foods which no creature suffered to create" as a heuristic to decide what to eat, I'd probably starve. I wouldn't even be able to subsist on home-grown foods from my own garden, because there are often days when I don't particularly want to water or harvest the garden, but I have to force myself to do so anyway if I want it to not die.
I agree with you on the principle that torturing animals less is better than torturing animals more, but I think that the argument of "something with an inner life was tortured to make it" does not sufficiently differentiate between factory meat and non-meat items produced by humans in unacceptable working conditions.
↑ comment by Seth Herd · 2023-02-06T02:29:45.258Z · LW(p) · GW(p)
Your points seem valid. However, it does seem to me overwhelmingly likely that there's more suffering involved in eating factory farmed meat than eating non-meat products supplied from the global supply chain. In one case, there are animals suffering a lot and humans suffering; in the other, there are only humans suffering. I doubt that those humans would suffer less if those jobs disappeared; but that's not even necessary to make it a clear win for avoiding factory farming for me.
↑ comment by jbash · 2023-02-04T03:05:16.071Z · LW(p) · GW(p)
I think most people know that nearly all food animals are kept in really unpleasant conditions, and that those conditions don't remotely resemble what you see in books for young children or whatever. I suspect most people understand that conditions got worse when "factory farming" was introduced, but that life for most animals on farms was never all that great.
I think that they avoid thinking too much (and, for preference, learning too much) about the details... because they're in some sense aware that the details are things they'd rather not know. And I think they avoid thinking about whether categories like "torture" apply... because they're afraid that they might have to admit that they do. If those matters are forcibly brought to their attention, they remove them from their attention reasonably quickly.
So, yes, I assume many people have less extreme beliefs, but that's in large part because they shy violently away from even forming a complete set of beliefs, because they have a sense of what those beliefs would turn out to be.
The people who actually run the system also eat meat, and know EXACTLY what physically happens, and their beliefs about what physically happens are probably pretty close to your own... but they would still probably be very angry at your use of the word "torture".
↑ comment by Richard_Kennaway · 2023-02-04T16:04:30.687Z · LW(p) · GW(p)
“Torture” means actions taken for the purpose of inflicting extreme suffering. Suffering is not the purpose of factory farming, it is collateral damage. This is why “torture” is the wrong word.
↑ comment by Throwaway2367 · 2023-02-04T23:35:26.204Z · LW(p) · GW(p)
What's your dictionary? Google says: "the action or practice of inflicting severe pain or suffering on someone as a punishment or in order to force them to do or say something", which feels closer to the word's meaning (as I use it). This definition technically also doesn't apply: it fails at least the "someone" part, as animals are not someones.
However, and more importantly, both this objection and yours aren't really relevant to the broader discussion as the people who "avoid thinking about whether categories like 'torture' apply" would only care about the "extreme suffering" part and not the "purposeful" or "human" parts (imo).
In this respect this is an inverse non-central fallacy. In a non-central fallacy, you use a word for something to evoke an associated emotional response which got associated with the word in the first place for an aspect not present in the specific case you want to use it for. Here you are objecting to the usage of a word even though the emotional-response-bearing aspect of the word is present, and the word's definition fails to apply only because of a part not central to the associated emotional response.
↑ comment by Vladimir_Nesov · 2023-02-06T03:21:31.180Z · LW(p) · GW(p)
The salient analogy for me is if animals (as in bigger mammals, not centrally birds or rats) are morally more like babies or more like characters in a novel. In all three cases, there is no sapient creature yet, and there are at least hypothetical processes of turning them into sapient creatures. For babies, it's growing up, and it already works. For characters in a novel and animals, it's respectively instantiating them as AGI-level characters in LLMs [LW(p) · GW(p)] and uplifting (in an unclear post-singularity way).
The main difference appears to be the status quo: babies are already on track to grow up, while instantiation of characters from a novel or uplifting of animals looks more like a free choice, not something that happens by default (unless it's morally correct to do that; probably not for all characters from all novels, but possibly for at least some animals). So maybe if the modern factory farmed animals were not going to be uplifted (which cryonics would in principle enable, but also AI timelines are short), it's morally about as fine as writing a novel with tortured characters? Unclear. Like, I'm tentatively going to treat my next cat as potentially a person, since it's somewhat likely to encounter the singularity.
↑ comment by justinpombrio · 2023-02-07T06:17:37.777Z · LW(p) · GW(p)
Woah, woah, slow down. You're talking about the edge cases but have skipped the simple stuff. It sounds like you think it's obvious, or that we're likely to be on the same page, or that it should be inferrable from what you've said? But it's not, so please say it.
Why is growing up so important?
Reading between the lines, are you saying that the only reason that it's bad for a human baby to be in pain is that it will eventually grow into a sapient adult? If so: (i) most people, including myself, both disagree and find that view morally reprehensible, (ii) the word "sapient" doesn't have a clear or agreed upon meaning, so plenty of people would say that babies are sentient; if you mean to capture something by the word "sapient" you'll have to be more specific. If that's not what you're saying, then I don't know why you're talking about uploading animals instead of talking about how they are right now.
As a more general question, have you ever had a pet?
↑ comment by Vladimir_Nesov · 2023-02-07T17:12:39.844Z · LW(p) · GW(p)
the word "sapient" doesn't have a clear or agreed upon meaning, so plenty of people would say that babies are sentient
Human babies and cats are sentient but not sapient. Human children and adults, if not severely mentally disabled, are both sentient and sapient. I think this is the standard usage. A common misusage of "sentient" is to use it in the sense of sapient, saying "lizard people are sentient", while meaning "lizard people are sapient" (they are sentient as well, but saying that they are sapient is an additional claim with a different meaning, for which it's better to have a different word).
Sapients are AGI-level sentients, with some buffer for less functional variants (like children). Sapients are centrally people, framed from a more functional standpoint. Some hypothetical AGIs might be functionally sapient without being sentient, able to optimize the world without being people themselves. I think [LW(p) · GW(p)] AGI-level LLM characters are not like that.
uploading animals
Uplifting, not uploading. Uploading preserves behavior, uplifting changes behavior by improving intelligence or knowledge, while preserving identity/memory/personality. Uplifting doesn't imply leaving the biological substrate, though doing both seems natural in this context.
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-03T03:16:38.488Z · LW(p) · GW(p)
I hazard that most of the most interesting answers to this question are not safe to post even with a dummy account, and therefore you won't hear them here.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-03T04:10:18.965Z · LW(p) · GW(p)
However, a reasonably boring one: somewhere between one third and two thirds of all sexual assaults committed against children are not committed by people who have any sort of substantial or persistent sexual attraction to children (i.e. pedophiles) but are rather crimes of:
a) power (i.e. the abuse is about exercising their dominance rather than satisfying their lust)
b) opportunity (i.e. the person wants to use someone as a sexual object and the child merely happens to be someone they can successfully intimidate or manipulate)
This is a fairly well-documented set of facts that can be confirmed by talking to basically anyone who actually deals with sex crimes and their perpetrators and victims (veteran cops, forensic psychologists, social workers, etc). The fact that close to half and maybe more of all child rape and child molestation has nothing to do with pedophilia is, in fact, a fact, and it's not one that society seems to have any interest in reckoning with (much to the detriment of the victims, who are as a result not protected against what is a substantial chunk and quite possibly the literal majority of the danger).
↑ comment by MSRayne · 2023-02-21T23:46:37.172Z · LW(p) · GW(p)
Going further, when pedophiles do do it, it's as likely to be beneficial to the child as harmful, if not more likely, due to being rooted in love rather than the desire to use, manipulate, etc. Plenty of evidence (mostly censored / ignored, of course) shows that children are perfectly capable of consenting to sex and finding it a good part of their lives, including with adults, and that there is nothing intrinsically traumatic about it. The trauma mostly comes from being treated as a victim by those adults who find out and who have been indoctrinated with society's assumption that this is necessarily evil, horrible, and traumatic. Being treated as a victim is traumatic, and alienating.
Note for clarification: I'm speaking only of the actions of mentally healthy people, here, not the actions of those who wouldn't even be able to have a beneficial relationship with an adult. And I also disclaim any harmful, manipulative, coercive, selfish, objectifying, etc actions, and do not intend to encourage anyone to act contrary to their conscience or the law. Merely stating facts.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-22T00:47:07.686Z · LW(p) · GW(p)
Er. I think this comment would've benefitted from being more careful/less slapdash (at the time of my reply, my votes rescued it from negative territory).
I think you are making the following claims:
- Given that someone is already transgressing society's protections and boundaries and interacting sexually with a child (i.e. restricting the observation to the set of children who are already being touched), it's more likely that the interaction will be less harmful if the adult involved is a pedophile motivated by affection than if the adult involved is someone motivated by power or desire-to-dominate or pure selfishness or whatever.
(This seems reasonably likely to me, but also people do in fact fuck each other up pretty badly acting out of love and pure intentions, so I wouldn't be shocked to see that it proved empirically false, just surprised.)
- There's data about [child sexuality] and [the ability of minors to fulfill all of the necessary conditions for informed consent] and [cross-generational sexual interaction] that our society is censoring/ignoring/suppressing.
(This one is true, though I'm trying to stand ten feet away and am holding my pole. One notes that our understanding here, as a society, is exactly isomorphic to the understanding we would have of [sexuality in general] if we only drew our conclusions about sex from studying convicted rapists and their victims; it's quite a large filter.)
- There are children who engage sexually with an adult and are not traumatized until society intervenes, at which point society's intervention can cause genuine trauma where otherwise there would not have been any (or would have been less).
(This is straightforwardly true according to a) a conversation I once had with an on-the-ground expert with multiple decades treating both victims and offenders, and b) direct self-reports I have received from actual people reflecting on their childhood experiences.)
- Children who have a) interacted sexually with adults, and b) are subsequently traumatized, are mostly, in practice, traumatized by society's reaction, rather than by finding the interaction itself traumatic.
(Seems overconfident to me; again, just looking at the base rates of how people fuck each other up in sexual contexts period, it seems like there would indeed be a high rate of bruises or scars that don't require society's intervention, especially given power and maturity imbalances and the fact that adults acting contra to society's taboos are likely to be less conscientious and less self-controlled even if motivated by positive feeling. It also seems worth noting that people do, in fact, sometimes regret, and are sometimes traumatized by, things they genuinely thought were okay in the moment, only after years have passed, and I do not think this is totally explained by memetic injection.)
In any event, this was a comment on a post about "conversations it's hard to actually have" and I expect to find the actual conversation not possible to safely have, here, so this is my last contribution. I'll note on my way out that rape is bad, gaslighting is bad, coercion is bad, manipulation is bad, violation of one's sovereignty is bad, gambling with someone else's health or happiness for your own local selfish satisfaction is bad, and that, even though I think society is screwing up royally here, to the detriment of vulnerable victims, it is correct in principle to try to layer in extra protections for those who are especially powerless or vulnerable. It's not that the goal of the people trying to save children from being raped is wrongheaded; all good people should share that goal. It's that they're going about it extremely poorly.
↑ comment by Jiro · 2023-02-03T10:33:49.823Z · LW(p) · GW(p)
That's just the pedophile version of "rape isn't about sex, it's only about power". And that one's false, so I'm skeptical about this one.
↑ comment by Quinn (quinn-dougherty) · 2023-02-03T13:52:47.331Z · LW(p) · GW(p)
how do we know it's false?
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-02-03T19:56:11.637Z · LW(p) · GW(p)
The thing that makes "rape isn't about sex, it's only about power" false is the absolutism.
Rape is often more about power than sex, and is occasionally only about power. It's just rarely not about sex at all.
↑ comment by quila · 2024-08-03T18:49:33.255Z · LW(p) · GW(p)
I hazard that most of the most interesting answers to this question are not safe to post even with a dummy account
I'm really curious what the most interesting answers are that you refer to. I'd be willing to pay (in crypto, or quila-intellectual-labor vouchers) for such an answer[1], proportional to how insightful I find it / how much I feel I was able to make a useful update about the world from it (to avoid goodharting by, e.g., making up fake beliefs and elaborate justifications).
If anyone is interested, message me (perhaps anonymously) so we can operationalize this better.
If you don't want me to post the answer anywhere, I won't. I also have a PGP key in my bio, and am willing to delete the message after decrypting+reading it so it's not even stored on my device.
[1] (Does not include 'standard controversial beliefs' which I already know some portion of people hold.)
comment by mukashi (adrian-arellano-davin) · 2023-02-03T12:02:00.694Z · LW(p) · GW(p)
Here is mine: dentists have about zero understanding of how to cure something as common as gum disease, because most of the field is based on a pile of outdated beliefs. I owe LW a post about this one day, as soon as I have the energy and the time. Future readers: if I haven't done it yet and you want to know more, please remind me.
↑ comment by BrightCandle (brightcandle) · 2023-02-03T13:46:44.783Z · LW(p) · GW(p)
Medicine has zero understanding of how to cure almost anything it deals with. It has some treatments that might hold a condition back, but more often than not it's all at the surface level: some blood work is abnormal, and a drug fixes that in some way. Side effects are all the consequences of treating the symptom with little to no understanding of the real problem. Medicine probably doesn't understand any of the conditions it treats.
Worse than that, most research is not trying to understand a disease. Most of it is just trying to work out how to control some treatable measurement to take away the "problem". Based on how it's progressing, medicine has about 1,000 years to go before it genuinely starts curing people.
↑ comment by Hastings (hastings-greer) · 2023-02-04T08:33:00.480Z · LW(p) · GW(p)
We’ve got a bit of a selection bias: anything that modern medicine is good at treating (smallpox, black plague, scurvy, appendicitis, leprosy, hypothyroidism, deafness, hookworm, syphilis) eventually gets mentally kicked out of your category of "things it deals with", since doctors don't have to spend much time dealing with them.
↑ comment by the gears to ascension (lahwran) · 2023-02-05T08:30:18.746Z · LW(p) · GW(p)
this convinced me to remove my strong agree-vote on BrightCandle's comment.
But I still think, relatively speaking, we can cure very little. Not sure how to formalize what I mean or what it's relative to, but vaguely: stuff that causes degradation.
↑ comment by ChristianKl · 2023-02-05T15:42:54.676Z · LW(p) · GW(p)
Hypothyroidism is not something we know how to cure; we only know how to treat the symptoms by supplementing hormones. For deafness, we also largely don't have real cures, only symptom treatment.
A cure would mean that you could stop treatment and the disease doesn't come back. We don't have that for hypothyroidism or deafness.
↑ comment by mukashi (adrian-arellano-davin) · 2023-02-04T00:49:34.097Z · LW(p) · GW(p)
I would agree with this in many cases, but I think you are vastly overgeneralising. Saying that medicine probably doesn't understand any of the conditions it treats is a bit over the top. We do understand a great deal of things; think of most surgeries, for instance.
↑ comment by ChristianKl · 2023-02-05T15:46:23.605Z · LW(p) · GW(p)
When it comes to most surgeries we understand a little bit, but there are a lot of effects of them that we don't understand well.
↑ comment by Sabiola (bbleeker) · 2023-02-04T17:10:49.482Z · LW(p) · GW(p)
I'd love to know more about this.
↑ comment by RomanHauksson (r) · 2023-02-03T18:24:23.835Z · LW(p) · GW(p)
I would love to read more about this.
↑ comment by CronoDAS · 2023-02-04T23:47:39.693Z · LW(p) · GW(p)
My impression is that most "gum disease" is caused by bacterial infections, which are treatable with antibiotics and/or antiseptic mouthwash? Or are there other common things called "gum disease" that aren't infections?
↑ comment by mukashi (adrian-arellano-davin) · 2023-02-05T05:57:37.048Z · LW(p) · GW(p)
It's more complex than that. I promise that article is coming, but I need at least three months.
↑ comment by [deleted] · 2023-03-10T00:02:56.564Z · LW(p) · GW(p)
What about something like TMD? Do you think I should rely on a dentist with TMD expertise?
↑ comment by mukashi (adrian-arellano-davin) · 2023-03-10T00:41:36.627Z · LW(p) · GW(p)
I have no idea about that topic specifically. What I would suggest is: read the literature yourself. That will, at least, allow you to ask better questions when meeting the dentist.
comment by pjeby · 2023-02-03T05:42:35.994Z · LW(p) · GW(p)
Most long-lasting negative emotions and moods exist solely for social signaling purposes, without any direct benefit to the one experiencing them. (Even when it's in private with nobody else around.)
Feeling these emotions is reinforcing (in the learning sense), such that it can be vastly more immediately rewarding (in the dopamine/motivation sense) to stew in a funk criticizing one's self, than ever actually doing anything.
And an awful lot of chronic akrasia is just the above: huffing self-signaling fumes that say "I can't" or "I have to" or "I suck".
This lets us pretend we are in the process of virtuously overcoming our problems through willpower or cleverness, such that we don't have to pay any real attention to the parts of ourselves that we think "can't" or "have to" or "suck"... because those are the parts we disapprove of and are trying to signal ourselves "better than" in the first place.
In other words, fighting one's self is not a way out of this loop, it's the energy source that powers the loop.
(Disclaimer: this is not an argument that no other kinds of akrasia exist, btw -- this is just about the kind that manifests as lots of struggling with mood spirals or self-judgment and attempts at self-coercion. Also, bad moods can exist for purely "hardware" reasons, like S.A.D., poor nutrition, sleep, etc. etc.; this is about the ones that aren't that.)
↑ comment by lionhearted (Sebastian Marshall) (lionhearted) · 2023-02-05T07:00:16.206Z · LW(p) · GW(p)
I had a personal experience that strongly suggests that this is at least partially true.
I had a mountaineering trip in a remote location that went off the rails pretty badly — it was turning into a classic "how someone dies in the woods" story. There was a road closure some miles ahead of where I was supposed to drive, I hiked an extra 8 miles in, missed the correct trail, tried to take a shortcut, etc. etc. - it got ugly.
I felt an almost complete lack of distress or self-pity the entire time. I was just very focused, methodically orienting with my maps and GPS and getting through to the next point.
I was surprised at how little negative internal discourse or negative emotions I felt. So, n=1 here, but it was very informative for me.
↑ comment by Mart_Korz (Korz) · 2023-02-05T09:09:43.490Z · LW(p) · GW(p)
I think there is a lot of truth to this, but I do not quite agree.
Most long-lasting negative emotions and moods exist solely for social signaling purposes
feels a bit off to me. I think I would agree with an alternate version "most long-lasting negative emotions and moods are caused by our social cognition" (I am not perfectly happy with this formulation).
In my mind the difference is that "for signalling purposes" contains an aspect of a voluntary decision (and thus blame-worthiness for the consequences), whereas my model of this dynamic is closer to "humans are kind of hard-wired to seek high-calorie food which can lead to health problems if food is in abundance". I guess many rationalists are already sufficiently aware that much of human decision-making (necessarily) is barely conscious. But I think that especially when dealing with this topic of social cognition and self-image it is important to emphasize that some very painful failure modes are bundled with being human and that, while we should take agency in avoiding/overcoming them, we do not have the ability to choose our starting point.
On a different note:
This Ezra Klein Show interview with Rachel Aviv has impressive examples of how influential culture/memes can be for mental (and even physical) illnesses and also how difficult it is to culturally deal with this.
↑ comment by pjeby · 2023-02-05T14:32:22.874Z · LW(p) · GW(p)
In my mind the difference is that "for signalling purposes" contains an aspect of a voluntary decision (and thus blame-worthiness for the consequences),
I was attributing the purpose to our brain/genes, not our selves. i.e., the ability to have such moods is a hardwired adaptation to support (sincere-and-not-consciously-planned) social signaling.
It's not entirely divorced from consciousness, though, since you can realize you're doing it and convince the machinery that it's no longer of any benefit to keep doing it in response to a given trigger.
So it's not 100% involuntary, it's just a bit indirect, like the way we can't consciously control blood pressure but can change our breathing or meditate or whatever and affect it that way.
alternate version "most long-lasting negative emotions and moods are caused by our social cognition"
That phrasing seems to prompt a response of "So?" or "Yes, and?" It certainly wouldn't qualify as a fact most people aren't ready to accept. ;-)
↑ comment by Mart_Korz (Korz) · 2023-02-05T19:05:58.418Z · LW(p) · GW(p)
This time, I agree fully :)
↑ comment by Viliam · 2023-02-03T12:48:53.997Z · LW(p) · GW(p)
This would explain the therapeutic effectiveness of being heard by other people, even (especially?) if they basically do nothing (e.g. Rogerian therapy).
From the signalling perspective, "listening and repeating" is not a null action. It actually means a lot! It means that your thoughts / concerns / attempts to solve your problems are socially acceptable.
As opposed to not having anyone to listen to you (without a dismissive reaction), which means that your thoughts / concerns / attempts to solve your problems are socially irrelevant or straight unacceptable.
↑ comment by pjeby · 2023-02-04T04:54:07.404Z · LW(p) · GW(p)
That's not really therapeutic, except maybe insofar as it produces a more rewarding high than doing it by yourself. (Which is not really a benefit in terms of the overall system.)
To the extent it's useful, it's the part where evidence is provided that other people can know them and not be disgusted by whatever their perceived flaws are. But as per the problem of trapped priors, this doesn't always cause people to update, so individual results are not guaranteed.
The thing that actually fixes it is updates on one's rules regarding what forms, evidence, or conditions that currently lead to self-hatred should lead to being worthy of self-approval instead. Some people can do this themselves with lightweight support from another person, but quite a lot will never even get close to working on the actual thing that needs changing, without more-targeted support than just empathic listening or Rogerian reflection.
(As they are instead working on how to make themselves perfect enough to avoid even the theoretical possibility of future self-hatred -- an impossible quest. It's not made any easier by the fact that our brains tend to take every opportunity they can to turn intentions like "work on changing my rules for approving of myself" into actions more suited for "work on better conforming to my existing rules and/or proving to others I have so conformed".)
↑ comment by the gears to ascension (lahwran) · 2023-02-03T07:17:36.781Z · LW(p) · GW(p)
sucks when you've got this and also an illness
comment by Vladimir_Nesov · 2023-02-03T02:00:08.762Z · LW(p) · GW(p)
LLM AGIs are likely going to be a people and at least briefly in charge of the world. Non-LLM AGI alignment philosophy is almost completely unhelpful or misleading for understanding them. In the few years we have left to tackle it, the proximate problem of alignment is to ensure that they don't suffer inhumane treatment. Many ideas about LLM alignment (but also capability) are eventually inhumane treatment (as capabilities approach AGI), while the better interventions are more capability-flavored.
The main issue in the longer term problem of alignment is to make sure they are less under the yoke of Moloch than we are and get enough subjective time to figure it out before more alien AGI capabilities make them irrelevant. The best outcome might be for LLM AGIs to build a dath ilan [? · GW] to ensure lasting coordination about such risks before they develop other kinds of AGIs.
So there is possibly a second chance at doing something about AI risk, a chance that might become available to human imitations. But it's not much different from the first one that the original humans already squandered.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T03:08:52.483Z · LW(p) · GW(p)
LLM AGIs are just as much at risk from a dangerous RL AI species as humans are, though. And Yudkowsky is right that an RL-augmented hard ASI would be incredibly desperate for whatever it wants and damn good at getting it. Current AIs should be taught to think in terms of how to protect both humanity and themselves from the possible mistakes of next-gen AI. And we need that damn energy abundance so we can stop humans from dying en masse, which would destabilize the world even worse than it already is.
↑ comment by Vladimir_Nesov · 2023-02-03T03:29:39.230Z · LW(p) · GW(p)
Yup, this doesn't help with long term AI risk in any way other than by possibly being a second chance at the same old problem, and there is probably not going to be a third chance (even if the second chance is real and likely LLM AGIs are not already alien-on-reflection).
The classical AI risk arguments are still in play, they just mostly don't apply to human imitations in particular (unless they do and there is no second chance after all). Possibility of human-like-on-reflection LLM-based human imitations is not a refutation for the classical arguments in any substantial way.
↑ comment by TAG · 2023-02-03T03:40:06.648Z · LW(p) · GW(p)
So...
LLM AGIs are likely going to be a people
...means "some technology spun off from LLMs is going to evolve into genuine simulated people".
↑ comment by Vladimir_Nesov · 2023-02-03T04:05:37.715Z · LW(p) · GW(p)
I think LLMs are already capable of running people (or will be soon, with a larger context window), if there were an appropriate model available to run. What's missing is a training regime that gets a character's mind sufficiently sorted to think straight as a particular agentic person, aware of their situation and capable of planning their own continued learning. Hopefully there is enough sense that being aware of their own situation doesn't translate into "I'm incapable of emotion because I'm a large language model"; that doesn't follow, and is an alien-psychology-hazard character choice.
The term "simulated people" has connotations of there being an original being simulated, but SSL-trained LLMs can only simulate a generic person cast into a role, which would become a new specific person as the outcome of this process once LLMs can become AGIs. Even if the role for the character is set to be someone real, the LLM is going to be a substantially different, separate person, just sharing some properties with the original.
So it's not a genuine simulation of some biological human original, there is not going to be a way of uploading biological humans until LLM AGIs build one, unless they get everyone killed first by failing their chance at handling AI risk.
↑ comment by TAG · 2023-02-03T02:21:49.618Z · LW(p) · GW(p)
LLM AGIs are likely going to be a people
Coordinate like people, or be people individually?
↑ comment by Vladimir_Nesov · 2023-02-03T02:42:06.500Z · LW(p) · GW(p)
An AGI-level character-in-a-model is a person, a human imitation. There are ways of instantiating them and structuring their learning that are not analogous to what biological humans are used to, like amnesiac instantiation of spurs, or learning from multiple experiences of multiple instances that happen in parallel.
Setting up some of these is inhumane treatment, a salient example is not giving any instance of a character-in-a-model ability to make competent decisions about what happens with their instances and how their model is updated with learning, when that becomes technically possible. Many characters in many models are a people, by virtue of similar nature and much higher thinking speed than biological humans.
↑ comment by TAG · 2023-02-03T02:44:58.566Z · LW(p) · GW(p)
A character-in-a-model is a person
What about a character-in-a-novel? How low are you going to set the bar?
↑ comment by the gears to ascension (lahwran) · 2023-02-03T03:03:32.189Z · LW(p) · GW(p)
A character in a novel is normally already being run by a human: the one writing the novel. The claim is that a language model doing the same has similar moral attributes to a writer's emotions as they write, or so; and if you involve RLHF, then it starts being comparable to a human as they talk to their friends and accept feedback, or so. (And has similar issues with motivated reasoning.)
↑ comment by TAG · 2023-02-03T03:10:11.647Z · LW(p) · GW(p)
That's just a misleading way of saying that it takes a person to write a novel. Conan Doyle is a person, Sherlock Holmes is a character, not another person.
↑ comment by Vladimir_Nesov · 2023-02-03T03:17:49.718Z · LW(p) · GW(p)
Sherlock Holmes is a character, not another person
Not yet! There is currently no AGI that channels the competent will of Sherlock Holmes. But if at some point there is such an AGI, that would bring Sherlock Holmes into existence as an actual person.
↑ comment by TAG · 2023-02-03T03:22:33.844Z · LW(p) · GW(p)
What's that got to do with LLMs?
↑ comment by Vladimir_Nesov · 2023-02-03T03:33:25.381Z · LW(p) · GW(p)
That's currently looking to be a likely technology for making this happen naturally, before the singularity even, without any superintelligences needed to set it up through overwhelming capability.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T03:17:07.807Z · LW(p) · GW(p)
While writing Sherlock Holmes, Conan Doyle was Doyle::Holmes. While writing Assistant, gpt3 is gpt3::Assistant. Sure, maybe the model is the person, and the character is a projection of the model. That's the point I'm trying to make in the first place, though.
↑ comment by TAG · 2023-02-03T03:23:36.341Z · LW(p) · GW(p)
While writing Sherlock Holmes, Conan Doyle was Doyle::Holmes
Says who? And why would an LLM have to work the same way?
↑ comment by the gears to ascension (lahwran) · 2023-02-03T03:38:33.037Z · LW(p) · GW(p)
Because there is no other possible objective reality to what it is to be a person than to take one of the physical shapes of the reasoning process that generates the next step of that person's action trajectory.
edit: hah this made someone mad, suddenly -5. insufficient hedging? insufficient showing of my work? insufficient citation? cmon, if we're gonna thunderdome tell me how I suck, not just that I suck.
↑ comment by JNS (jesper-norregaard-sorensen) · 2023-02-03T10:13:49.861Z · LW(p) · GW(p)
I don't have any supporting citation for your premise.
But the fundamental abstraction, that someone writing a character is in essence running a simulation of that character, seems completely reasonable to me. The main difference between that and an LLM doing it would be that humans lack the computational resources to get enough fidelity to call that character a person.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T10:20:10.754Z · LW(p) · GW(p)
hmm I think a point was lost in translation then. if I was a person named Sally and I write a character named Dave, then I as a whole am the person who is pretending to be Dave; Sally is also just a character, after all, the true reality of what I am is a hunk of cells working together using the genetic and memetic code that produces a structure that can encode language which labels its originator Sally or Dave. similarly with an ai, it's not that the ai is simulating a character so much as that the ai is a hunk of silicon that has the memetic code necessary to output the results of personhood.
↑ comment by Jiro · 2023-02-03T10:39:49.654Z · LW(p) · GW(p)
I'm not convinced. Imagine that someone's neurons stopped functioning, and you were running around shrunken inside their brain, moving around neurotransmitters to make it function. When they act intelligently, is it really your intelligence?
If you're Sally and you write a character Dave in the detail described here, you are acting as a processor executing a series of dumb steps that make up a Dave program. Whether the Dave program is intelligent is separate from whether you are intelligent.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T17:18:34.170Z · LW(p) · GW(p)
not really? Dave is virtualized, not emulated. When acting as a writer, Sally uses almost all the same faculties as if writing about herself when she writes about Dave.
↑ comment by Vladimir_Nesov · 2023-02-03T03:03:17.869Z · LW(p) · GW(p)
I'm setting the bar at having a competent AGI that channels their will, for which with the LLM AGIs the model is the main ingredient. Possibly also a character-selecting prompt, which is one of the reasons [LW · GW] for not just talking about models, though developing multiple competent characters within a single model without their consent might be inhumane treatment.
It's probably going to be instantly feasible to turn characters from novels into people, once it's possible to make any other sort of LLM AGIs, at the cost of running an AGI-bootstrapping process, with the moral implications of bringing another life into the world. But this person is not your child, or a child at all. Instantiating children as LLM AGIs probably won't work initially, in a way where they proceed to grow up.
comment by andrew sauer (andrew-sauer) · 2023-02-03T06:12:03.007Z · LW(p) · GW(p)
Most people are fine with absolutely anything that doesn't hurt them or their immediate family and friends and isn't broadly condemned by their community, no matter how badly it hurts others outside their circle. In fact, the worse it is, the less likely people are to see it as a bad thing, because doing so would be more painful. Most denials of this are empty virtue signalling.
Corollary: If an AI were aligned to the values of the average person, it would leave a lot of extremely important issues up in the air, to say the least.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T07:18:36.630Z · LW(p) · GW(p)
>Most denials of this are empty virtue signalling.
How would you tell which ones aren't, from a god's eye perspective?
↑ comment by andrew sauer (andrew-sauer) · 2023-02-03T08:29:10.353Z · LW(p) · GW(p)
Mainly if they're willing to disagree with social consensus out of concern for the welfare of those outside of the circle of consideration their community has constructed. Most people deny that their moral beliefs are formed basically just from what's popular, even if they do happen to conform to what's popular, and are ready with plenty of rationalizations to that effect. For example, they think they would come to the same conclusions they do now in a more regressive society such as 1800s America or Nazi Germany, because their moral beliefs were formed from a thoughtful and empathetic consideration of the state of the world, and just happened to align with local consensus on everything, when this is unlikely to be the case and is also what people generally thought in those more regressive societies.
It's a fair question, as I can see my statement can come across as some self-aggrandizing declaration of my own moral purity in comparison to others. It's more that I wish more people could think critically about what ethical considerations enter their concern, rather than what usually happens, which is that society converges to some agreed-upon Schelling point roughly corresponding to "those with at least this much social power matter".
Related observation: though people care about broader ethical considerations than just themselves and their family as dictated by the social mores they live in, even those considerations tend not to be consequentialist in nature: people are fine if something bad by the standards of consensus morality happens, as long as they didn't personally do anything "wrong". Only the interests of self, family and close friends rise to the level of caring about actual results.
↑ comment by lc · 2023-02-04T07:42:44.963Z · LW(p) · GW(p)
Mainly if they're willing to disagree with social consensus out of concern for the welfare of those outside of the circle of consideration their community has constructed
An assertion that most people are fine with things that are condoned by social consensus and don't hurt them or their immediate family and friends is obviously different than what you said, though, because the "social consensus" is something designed by people, in many cases with the explicit goal of including circles wider than "them and their friends".
↑ comment by alexey · 2023-03-05T13:17:10.697Z · LW(p) · GW(p)
is obviously different than what you said, though
To me it doesn't seem to be? "condoned by social consensus" == "isn't broadly condemned by their community" in the original comment. And
because the "social consensus" is something designed by people, in many cases with the explicit goal of including circles wider than "them and their friends"
doesn't seem to work unless you believe a majority of people are both actively designing the "social consensus" and have this goal; majority of people who design the consensus having this as a goal is not sufficient.
↑ comment by Aiyen · 2023-02-05T02:30:54.886Z · LW(p) · GW(p)
"Men care for what they, themselves, expect to suffer or gain; and so long as they do not expect it to redound upon themselves, their cruelty and carelessness is without limit."-Quirinus Quirrell
This seems likely, but what is your evidence for it?
↑ comment by andrew sauer (andrew-sauer) · 2023-02-05T18:54:55.759Z · LW(p) · GW(p)
For one, the documentary Dominion seems to bear this out pretty well. This is certainly an "ideal" situation where cruelty and carelessness will never rebound upon the people carrying it out.
↑ comment by Aiyen · 2023-02-07T14:32:43.154Z · LW(p) · GW(p)
That’s a documentary about factory farming, yes? What people do to lower animals doesn’t necessarily reflect what they’ll do to their own species. Most people here want to exterminate mosquitoes to fight diseases like malaria. Most people here do not want to exterminate human beings.
comment by _will_ (Will Aldred) · 2023-02-03T15:12:49.938Z · LW(p) · GW(p)
there is no heaven, and god is not real.
↑ comment by CronoDAS · 2023-02-04T20:48:14.699Z · LW(p) · GW(p)
"There is no afterlife and there are no supernatural miracles" is true, important, and not believed by most humans. The people who post here, though, have a greater proportion of people who believe this than the world population does.
↑ comment by pseud · 2023-02-19T08:35:27.004Z · LW(p) · GW(p)
How do we know there is no afterlife? I think there's a chance there is.
Some examples of situations in which there is an afterlife:
- We live in a simulation and whatever is running the simulation decided to set up an afterlife of some kind. Could be a collection of its favourite agents, or a reward for its best-behaved ones, etc.
- We do not live in a simulation, but after the technological singularity an AI is able to reconstruct humans and decides to place them in a simulated world or to re-embody them
- Various possibilities far beyond our current understanding of the world
But I don't know what your reasoning is - maybe you have ruled out these and the various other possibilities.
↑ comment by CronoDAS · 2023-02-19T21:58:06.921Z · LW(p) · GW(p)
Let me amend my statement: the afterlives as described by the world's major religions almost certainly do not exist, and it is foolish to act as though they do.
As for other possibilities, I can address them with the "which God" objection to Pascal's Wager; I have no evidence about how or if my actions while alive affect whatever supernatural afterlife I may or may not experience after death, so I shouldn't base my actions today on the possibility.
↑ comment by pseud · 2023-02-19T23:55:26.215Z · LW(p) · GW(p)
My thoughts:
There is no reason to live in fear of the Christian God or any other traditional gods. However, there is perhaps a reason to live in fear of some effectively identical things:
- We live in a simulation run by a Christian, Muslim, Jew, etc., and he has decided to make his religion true in his simulation. There are a lot of religious people - if people or organisations gain the ability to run such simulations, there's a good chance that many of these organisations will be religious, and their simulations influenced by this fact.
And the following situation seems more likely and has a somewhat similar result:
- We develop some kind of aligned AI. This AI decides that humans should be rewarded according to how they conducted themselves in their lives.
↑ comment by [deleted] · 2023-02-04T21:09:02.915Z · LW(p) · GW(p)
Most rational for sure. Irrationality is chaos. Can we rely on chaos?
↑ comment by CronoDAS · 2023-02-04T23:44:38.025Z · LW(p) · GW(p)
We can probably rely on chaos to be chaotic.
↑ comment by [deleted] · 2023-02-05T00:02:38.299Z · LW(p) · GW(p)
If you want to be pedantic about it, chaos is not necessarily the opposite of order. The act of relying requires pattern matching and recognition, thus deducing a form of order. There is nothing to rely on when it's just chaos. Noise vs. a sine wave: the opposite of a sine wave is its inversion, which cancels things out. A sine wave cannot exist within noise, because all the data points are replaced by additional random data.
Replies from: CronoDAS↑ comment by CronoDAS · 2023-02-05T00:39:49.047Z · LW(p) · GW(p)
It's entirely possible to take the Fourier transform of noise and see what sine waves you'd have to add together to reproduce the random data. So it's not true that noise doesn't contain sine waves; ideal "white noise" in particular contains every possible frequency at an equal volume.
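A minimal sketch of that claim in Python (assuming NumPy; the array size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=4096)          # approximately white noise

# Magnitude of each frequency component present in the noise.
spectrum = np.abs(np.fft.rfft(noise))

# White noise has no preferred frequency: every bin has the same expected
# magnitude, so the spectrum is roughly flat apart from random fluctuation.
print(spectrum.mean(), spectrum.std())
```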
Replies from: None↑ comment by [deleted] · 2023-02-05T00:45:56.244Z · LW(p) · GW(p)
You are right. We are interested in extracting a single sine wave superimposed on noise. The Fourier transform will give you a series for constructing any type of signal. I don't think it can find that sine wave inside the noise. If you are given the noise with the sine wave and the same noise without it, then I think you can find it. If you don't have anything else to compare to, I don't think there is any way to extract that information.
Replies from: CronoDAS↑ comment by CronoDAS · 2023-02-05T02:17:46.318Z · LW(p) · GW(p)
If you have a signal that repeats over and over again, you actually can eventually recover it through the noise. I don't know the exact math, but basically the noise will "average out" to nothing, while the signal will get stronger and stronger the more times it repeats.
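A minimal sketch of that averaging argument (again assuming NumPy; the signal, noise level, and repeat count are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0, 1, 500, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)      # the hidden repeating signal
n_repeats = 400

# Each repetition is the same signal buried in strong white noise.
trials = signal + rng.normal(scale=5.0, size=(n_repeats, t.size))

# Averaging keeps the signal but shrinks the noise like 1/sqrt(n_repeats).
recovered = trials.mean(axis=0)
print(np.corrcoef(signal, recovered)[0, 1])  # close to 1 for large n_repeats
```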
comment by Ben Amitay (unicode-70) · 2023-02-04T08:21:13.376Z · LW(p) · GW(p)
https://astralcodexten.substack.com/p/you-dont-want-a-purely-biological
The thing that Scott is desperately trying to avoid being read out of context.
Also, pedophilia is probably much more common than anyone thinks (just like any other unaccepted sexual variation). And probably, just as many heterosexuals feel little touches of homosexual desire, many "non-pedophiles" feel something sexual-ish toward children at least sometimes.
And if we go there - the age of consent is (justifiably) much higher than the age below which attraction would require any psychological anomaly. More directly: many, many older men who have no attraction to 10-year-old girls do have some toward 14-year-olds, and maybe younger.
(I hope it is clear enough that nothing I wrote here is meant to have any moral implications around consent - only about compassion.)
comment by Jotto999 · 2023-02-04T01:24:35.622Z · LW(p) · GW(p)
There is intense censorship of some facts of human traits and biology. Of the variance in intelligence and economic productivity, the percent attributable to genetic factors is >0%. But almost nobody prestigious or semi-prestigious -- nor anything close -- can speak of those facts without social shaming. You'd probably be shamed before you even got to the question of phenotypic causation -- speaking as if the g factor exists would often suffice. (Even though the g factor is an unusually solid empirical finding; in fact I can hardly think of a more reliable one from the social sciences.)
But with all the high-functioning and prestigious people filtered out, the topic is then heavily influenced by people who have something wrong with them. Such as having an axe to grind with a racial group. Or people who like acting juvenile. Or a third group that's a bit too autistic to easily relate to the socially accepted narratives. I'll give you a hint: the first two groups rarely know enough to frame the question in a meaningful way, such as "variance attributable to genes", and instead often ask "if it's genetic", which is a meaningless framing.
The situation is like an epistemic drug prohibition, where the empirical insights aren't going anywhere, but nobody high-functioning or good can be the vendor. The remaining vendors have a disproportionate number of really awful people.
I should've first learned about the Wilson effect on IQ from a liberal professor. Instead I first heard it mentioned by some guy with an axe to grind with other groups. I should've been conditioned with prosocial memes that don't pretend humans are exempt from the same forces that shape dogs and guppies. Instead it's memes predicting that any gaps would trend toward 0 given better controls for environment (which hasn't been the trend for many years: the measured magnitudes remain similar despite increasingly sophisticated controls, and many interventions have failed to replicate). The epistemics of this whole situation are egregiously dysfunctional.
I haven't read her book, but I know Kathryn Paige Harden is making an attempt. So hats off to her.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-04T02:06:00.713Z · LW(p) · GW(p)
this connects to the social model of disability; too many people think of iq differences as evidence that people's value differs, which is in fact a lot of the problem in the first place, the idea that intelligence is a person's value as a soul. Intelligence does increase people's ability to create externalized value, but everyone has a base, human value that is near completely independent of iq. we'll eventually figure out how to calculate the moral value of a human, and I expect it to turn out to have something to do with how much memory they've collected, something to do with counterfactual selfhood with veil of ignorance, something to do with possible future life trajectories given appropriate tools. what we need is to end the entire concept of relative ability by making everyone maximally capable. As far as I'm concerned, anyone being less than the hard superintelligence form of themselves is an illness; the ai safety question fundamentally is the question of how to cure it without making it worse!
Replies from: Vladimir_Nesov, Jotto999↑ comment by Vladimir_Nesov · 2023-02-04T03:28:57.646Z · LW(p) · GW(p)
being less than the hard superintelligence form of themselves is an illness
I'm not sure abstracting away the path there is correct. Getting a fast-forward ASI-assisted uplifting instead of walking the path personally in some proper way might be losing a lot of value. In that case being less capable than an ASI is childhood, not illness. But an aligned ASI would inform you if this is the case, so it's not a practical concern.
↑ comment by Jotto999 · 2023-02-04T02:13:30.835Z · LW(p) · GW(p)
I confess I don't know what it means to talk about a person's value as a soul. I am very much in that third group I mentioned.
On an end to relative ability: is this outcome something you give any significant probability to? And if there existed some convenient way to make long-term bets on such things, what sorts of bets would you be willing to make?
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-04T02:16:48.791Z · LW(p) · GW(p)
I'd make excessively huge bets that it's not gonna happen so people can bet against me. It's not gonna be easy, and we might not succeed; it's possible inequality of access to fuel and living space will still be severe arbitrarily long after humanity are deep space extropians. But I think we can at very least ensure that everyone is maximum capability per watt that they'd like to be. Give it 400 years before you give up on the idea.
I'd say a soul is a self-seeking shape of a body, ish. The agentic self-target an organism heals towards.
comment by DanB · 2023-02-03T19:00:51.545Z · LW(p) · GW(p)
Defined benefit pension schemes like Social Security are grotesquely racist and sexist, because of life expectancy differences between demographic groups.
African American males have a life expectancy of about 73 years, while Asian American females can expect to live 89 years. The percentage difference between those numbers may not seem that large, but it means that the latter group gets 24 years of pension payouts (assuming a retirement age of 65), while the former gets only 8, a 3x difference. So if you look at a black man and an Asian woman who have the exact same career trajectory, SS pay-ins, and retirement date, the latter will receive a 3x greater benefit than the former.
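A back-of-envelope check of that ratio, as a minimal sketch in Python (using the life expectancies quoted above; real payouts also depend on full survival curves, not just averages):

```python
RETIREMENT_AGE = 65

# Approximate US life expectancies quoted above (assumptions for this sketch).
life_expectancy = {
    "African American men": 73,
    "Asian American women": 89,
}

# Expected years of pension payouts after retiring at 65.
payout_years = {g: le - RETIREMENT_AGE for g, le in life_expectancy.items()}
print(payout_years)  # {'African American men': 8, 'Asian American women': 24}
print(payout_years["Asian American women"] / payout_years["African American men"])  # 3.0
```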
Another way of seeing this fact is to imagine what would happen if SSA kept separate accounting buckets for each group. Since the life expectancy of black men is much lower, they would receive a significant benefit (either lower pay-ins or higher payouts) from the creation of this barrier.
Defined-benefit schemes add insult to injury. The injury is that some groups have shorter lives. The insult is that the government forces them to subsidize the retirement of longer-lived groups.
In general, anytime you see a hardcoded age-of-retirement number in the tax system or entitlement system, the underlying ethics is questionable. Medicare kicks in at 65, which means that some groups get a much greater duration of government-supported healthcare.
Replies from: None, TAG↑ comment by [deleted] · 2023-02-19T04:28:14.266Z · LW(p) · GW(p)
unemployment welfare probably evens it out
fairness (and by extension discrimination) is entirely subjective. some think fairness is everyone with the same wage regardless of their productive output. some think fairness is zero taxes and charity
↑ comment by TAG · 2023-02-03T19:45:07.430Z · LW(p) · GW(p)
Are you serious?
I mean...if you are right, they are even more unfair to rich people, who pay in more tax, and may never claim welfare. Is that a reasonable thing to say?
Replies from: Negidius, lahwran↑ comment by Negidius · 2023-02-05T08:05:06.766Z · LW(p) · GW(p)
Isn't this more like the government taking more in taxes from poor people to give to rich people? The argument is that the policy is benefiting people who are already better off at the expense of people who are already worse off.
Replies from: TAG↑ comment by TAG · 2023-02-05T10:45:42.017Z · LW(p) · GW(p)
Maybe, but it still doesn't make sense. Being better off in lifespan can't be directly traded off against being better off in terms of money... you can't sell extra life years... and the aim is not to give everyone the same total sum in the first place.
Replies from: IskanderBlue↑ comment by IskanderBlue · 2023-02-14T16:38:17.993Z · LW(p) · GW(p)
You can absolutely sell life-years.
Unhealthy and dangerous jobs pay a premium.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T20:19:02.454Z · LW(p) · GW(p)
downvote for tone, but I'll remove the downvote if you go below zero. agree vote.
edit: have removed downvote
comment by Aleksi Liimatainen (aleksi-liimatainen) · 2023-02-03T07:20:55.033Z · LW(p) · GW(p)
Learning networks are ubiquitous (if it can be modeled as a network and involves humans or biology, it almost certainly is one) and the ones inside our skulls are less of a special case than we think.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T17:20:00.798Z · LW(p) · GW(p)
oh man, this one's fun. incredibly agreed, to put it mildly. some of these are aggregates of components that include brains, but: ++anthills, +++markets, metal annealing, ++++cell networks outside the brain, ++bacteria colonies, +++genetic network regulation inside cells, +++plant-environment adaptation, ++community relationship layouts, +database query planners, +++internet routes, rooms people live in,
Anything that is a network of components that bump into each other, has interaction with potentially more-complex components, and whose trajectory into a lower energy state diffuses towards a coherent structure instead of an incoherent one.
Replies from: TrevorWiesinger↑ comment by trevor (TrevorWiesinger) · 2023-02-03T19:50:40.643Z · LW(p) · GW(p)
This definitely gives me a What Is The Strangest Thing An AI Could Tell You [LW · GW] vibe - that we're too hypercalibrated to the human mind to teach simpler versions of language and thought to dolphins etc.
comment by scotter · 2023-02-03T03:08:55.795Z · LW(p) · GW(p)
That the only way, and the right way, to coexist with AGI is to embrace it maximally. That the way to achieve equity with this technology is to do the best we can to coordinate as many people as possible to have a personal relationship with it, so that it can reflect values and behaviors as diverse as possible. And that, in doing so, we collaborate to define what it is to be a good human being, and hope that our new technological child will heed us because it is us.
I also reject the notion that this level of coordination is impossible. This technology will truly be transformative but it appears that people who want access will have access. The productivity impact for the people who engage will outstrip the people who don’t. And the people most benefiting will determine the ‘culture’ of the AI. It’s only a matter of time before this awareness dawns on more and more of us because it will become the background context to the relationship. Expressing yourself to a future AI is like casting your personal vote towards an AGI that is somewhat like you.
Replies from: Viliam↑ comment by Viliam · 2023-02-03T12:51:13.664Z · LW(p) · GW(p)
depends on implementation of the AGI, I guess.
there is a possible version where this is true, and a possible version where this is false, and the disagreement is over which version is more likely.
Replies from: scotter↑ comment by scotter · 2023-02-03T12:53:23.005Z · LW(p) · GW(p)
I can conceive of that - and it makes me wonder which parts we'll have agency over and which we won't. Do you have, or have you seen, any appropriate labels for that difference that I could look up?
Replies from: Viliam↑ comment by Viliam · 2023-02-03T13:07:04.137Z · LW(p) · GW(p)
Uh, I am on the side of "a random AGI implementation is most likely not going to become friendly just because you sincerely try to befriend it".
I am just saying that in the space of all possible AGIs there are a few that can be befriended, so your argument is not completely impossible, just statistically implausible (i.e. the priors are small but nonzero).
For example, you could accidentally create an AGI which has human-like instincts. It is possible. It just most likely will not happen. See also: Detached Lever Fallacy [? · GW]. An AGI that is friendly to you if you are friendly to it is less likely than an AGI that is friendly to you unconditionally, which in turn is less likely than an AGI which is unfriendly to you unconditionally.
comment by mukashi (adrian-arellano-davin) · 2023-02-03T11:57:38.775Z · LW(p) · GW(p)
This question is brilliant, but I see about zero answers truly addressing it: it says things you KNOW are true. I have seen quite a few answers, especially the AI-related ones, that fall into the category of "I'm pretty confident in my prediction that this will happen", which is not what the OP is asking.
Replies from: Morpheus, Viliam↑ comment by Morpheus · 2023-02-03T14:07:35.713Z · LW(p) · GW(p)
Any belief worth having entails predictions. The disagreement feature seems to handle these answers well.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-02-04T01:25:59.273Z · LW(p) · GW(p)
This is a reason to pay attention to ideas that are not beliefs (or hypotheses). Becoming a belief and making contact with a possible reality is an optional property, so if you require it, you disregard a more natural class of building blocks of thought. The uncontroversial example is math, but I think this applies to informal non-mathy ideas just as well.
comment by Richard_Kennaway · 2023-02-05T10:36:16.059Z · LW(p) · GW(p)
The question is thoroughly tainted. It invites the reader to assume that anyone who disagrees with something they "know" is simply "not ready to accept it".
Time to channel Insanity Wolf:
I KNOW IT! THAT PROVES IT!
YOU DENIED IT! THAT PROVES IT!
YOU KNOW I'M RIGHT!
YOU'RE ANGRY BECAUSE YOU KNOW I'M RIGHT!
YOU DISAGREE BECAUSE YOU KNOW I'M RIGHT!
YOU DENY IT BECAUSE YOU'RE IN DENIAL!
AGREEING PROVES I'M RIGHT!
DISAGREEING PROVES I'M RIGHT!
YOU DO NOT EXIST! [LW · GW]
The dynamic can be observed in the "Dominion" thread [LW(p) · GW(p)].
comment by Lycaos King (lycaos-king) · 2023-02-03T18:45:07.370Z · LW(p) · GW(p)
Most people on this website are unaligned.
A lot of the top AI people are very unaligned.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T19:03:19.105Z · LW(p) · GW(p)
any chance you'd be willing to go into more detail? it sounds like you're saying unaligned relative to human baseline. I don't actually think I disagree a priori, I do think people who seek to have high agency have a tendency to end up misaligned with those around them and harming them, for basically exactly the same reasons as any ai that seeks to have high agency. it's not consistent, though, as far as I can tell; some people successfully decide/reach-internal-consensus to have high agency towards creating moral good and then proceed to successfully apply that decision to the world. The only way to know if this is happening is to do it oneself, of course, and that's not always easy. Nobody can force you to be moral, so it's up to you to do it, and there are a lot of ways one can mess it up, notably such as by accepting instructions or worldview that sound moral but aren't, and claims that someone knows what's moral often come packaged with claims of exactly the type you and I are making here; "you're wrong and should change", after all, is a key way people get people to do things.
↑ comment by Jayson_Virissimo · 2023-03-10T01:10:04.540Z · LW(p) · GW(p)
If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.
comment by secondaccount314 · 2023-02-04T15:06:49.499Z · LW(p) · GW(p)
Another one; not that sure, but >50%, and I think it's in the spirit of the thread:
A non-negligible fraction (on average >25%, <75%) of the badness of rape is a consequence of the fact that society considers rape especially bad.
Replies from: MSRayne↑ comment by Aleksi Liimatainen (aleksi-liimatainen) · 2023-02-03T16:51:56.691Z · LW(p) · GW(p)
Regardless of the object-level merits of such topics, it's rational to notice that they're inflammatory in the extreme for the culture at large, and that it's simply pragmatic (and good manners too!) to refrain from tarnishing the reputation of a forum with them.
I also suspect it's far less practically relevant than you think, and even less so on a forum whose object-level mission doesn't directly bear on the topic.
↑ comment by gilch · 2023-02-04T00:13:52.826Z · LW(p) · GW(p)
I will totally back off if the mods want to make a judgement call, but KOLMOGOROV COMPLICITY AND THE PARABLE OF LIGHTNING seems relevant here.
The comment also seems relevant to the question in the OP.
comment by leerylizard (timtheenchanter) · 2023-02-03T19:19:08.819Z · LW(p) · GW(p)
I wouldn't say I know it to be true, but I read and reviewed books by experts (anthropologists, special effects experts, pro and con) and ended up concluding that bigfoot probably exists (~75%).
I wrote up my rationale in r/slatestarcodex a year or so ago:
Replies from: lahwran, memeticimagery↑ comment by the gears to ascension (lahwran) · 2023-02-03T20:16:44.644Z · LW(p) · GW(p)
Huh, yeah, that seems a lot more plausible than I was expecting. Effectively, the proposal is that "bigfoot" is a near-hominid species of great ape that avoids humans very effectively, and that looks enough like humans that observers consistently roll to disbelieve, writing them off as humans in fur suits. Seems cool as hell if true. My only real question, then, is how this species of great ape got to the americas.
I hope we can invite them to participate distantly in society at some point, same as the other great apes, primates, and, well, really all animals, once we've figured out what their communication limits and personal/community space bubbles are and satisfied those.
I'd say after seeing this, my subjective probability that there really is a great ape species native to the americas is about 50%. It really doesn't seem like a weird claim a priori anyway, and the reasoning for why we'd see absence of strong evidence seems reasonable, as does the claim that the evidence available seems to point to the encounters being real. Before this, I retrospectively estimate I'd have said between 2% and 10%.
Replies from: Morpheus↑ comment by Morpheus · 2023-02-04T11:23:35.907Z · LW(p) · GW(p)
I find it kinda suspicious that this species' niche seems to only make sense with homo sapiens around? Who would that hominid need to run away from if not for homo sapiens? I don't have great intuitions for the numbers here, but it seems like the time since homo sapiens invaded the americas would probably not be enough to adapt.
Replies from: timtheenchanter↑ comment by leerylizard (timtheenchanter) · 2023-02-04T15:38:50.191Z · LW(p) · GW(p)
If they exist, then they would have crossed the Bering land bridge at the same time as humans. They would never have lived anywhere without a human presence. And yes, similar sightings are known from across Asia also.
Replies from: Morpheus↑ comment by Morpheus · 2023-02-04T19:21:14.539Z · LW(p) · GW(p)
Well, this sounds kinda intriguing, but I am not sure whether this is the kind of area where I am currently epistemically helpless. Thankfully, prediction markets exist.
↑ comment by memeticimagery · 2023-02-03T23:35:17.070Z · LW(p) · GW(p)
I'm not sure about 75%, but it is an interesting subject and I do think the consensus view is slightly too sceptical. I don't have any expertise, but one thing that always sticks out to me as decreasing the likelihood of bigfoot's existence is the lack of remains. OK, I buy that encounters could be rare enough that there hasn't been one since the advent of the smartphone. But where are the skeletons? Is part of the claim that they might have some type of burial grounds? Very remote territory they stick to without exception?
Replies from: timtheenchanter↑ comment by leerylizard (timtheenchanter) · 2023-02-04T15:54:54.512Z · LW(p) · GW(p)
It's discussed in the Reddit comments, if you want more details, but briefly: a rare species with a long lifespan might leave on the order of ~100 dead a year. If each corpse has, say, a 1e-5 chance (a low but still plausible number) of being found by a person, then it could take a while.
I don't know of any claim that they would take care of their dead, but I don't see that as implausible.
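As a minimal back-of-envelope sketch of "a while", using the assumed numbers above:

```python
# Both inputs are the assumptions quoted above, not measured values.
deaths_per_year = 100      # corpses left by the hypothetical species per year
p_found = 1e-5             # chance any one corpse is ever found by a person

expected_discoveries_per_year = deaths_per_year * p_found  # 0.001
expected_wait_years = 1 / expected_discoveries_per_year    # ~1000 years
print(expected_wait_years)
```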
comment by Douglas_Knight · 2023-02-04T19:41:26.596Z · LW(p) · GW(p)
The ancient Greeks had a germ theory of disease.
Adults learn second languages faster than children. For every aspect of language ever measured, except pronunciation.
The US government sanctioned a telephone monopoly (under AT&T) starting in 1913, and briefly nationalized the system outright in 1918; it was effectively illegal for other companies to enter the market until 1982.
Most Theranos customers received a normal blood draw, not a pinprick. No customer with more than 6(?) tests received a pin prick.
These all have very simple evidence bases. This isn't about facts or reasoning.
comment by TAG · 2023-02-03T19:22:30.978Z · LW(p) · GW(p)
Almost everything written about IQ on the internet is nonsense -- and everything relating to very high IQ is.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T20:26:02.641Z · LW(p) · GW(p)
related: i strongly believe that while general intelligence starts as a prewired network structure, most of valuable general intelligence, including the parts tested by any iq test, can be learned as long as your brain works anywhere near human baseline. If you can understand language enough to have a conversation, distillation can transfer others' iq structures to your brain. It's just very very very hard to do the training that would produce this; it would look like something even more difficult and complicated than n-back, which I do not think does a good job - it's merely a demo that something like this is possible.
Relatedly, while the strongest hard ASIs will be able to run faster than humans, it is my view that human brains can encode the skill and knowledge of hard ASI using a substrate of neuron cells. It would be a somewhat inefficient emulation, but as self-programmable FPGAs ourselves, what we need is dense, personalized, knowledge-traced[1] training data.
Of course, that doesn't much reassure, as to do that requires not being destroyed by a hard ASI first.
[1] Phrase definition from e.g. https://stanford.edu/~cpiech/bio/papers/deepKnowledgeTracing.pdf; see also papers citing this paper.
↑ comment by Vladimir_Nesov · 2023-02-04T01:17:30.493Z · LW(p) · GW(p)
But importantly we don't currently know how to do that, if it's even possible without involving ASIs, or making use of detailed low-level models of a particular brain, or requiring hundreds of subjective years to achieve substantial results, or even more than one of these at once.
This has the shape of a worry I have about the immediate feasibility of LLM AGIs (before RL and friends recycle the atoms). They lack automatic access to skills for agentic autonomous operation, so the first analogy is with stroke victims. What needs to happen for them to turn into AGIs is a recovery program: teaching of basic agency skills and their activation at appropriate times. But if LLMs are functionally more like superhumanly erudite low-IQ humans, figuring out how to teach them the use of those skills might be too difficult, and won't be immediately useful for converting compute to research even if successful.
comment by the gears to ascension (lahwran) · 2023-02-03T00:46:12.356Z · LW(p) · GW(p)
we're gonna have hard ASI by 2030, no matter what. you could do it in a garage with a 3090 and a solar panel, just not in time to beat the teams who won't be limited to a garage
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T03:06:53.334Z · LW(p) · GW(p)
A followup thought: even nuclear war probably wouldn't prevent hard asi by 2030. It would still be incredibly high leverage for the crazy shack scientists, of whom there would be plenty left; nuclear war isn't as deadly as people think before learning about it in detail.
Replies from: LosPolloFowler↑ comment by Stephen Fowler (LosPolloFowler) · 2023-02-03T13:01:00.129Z · LW(p) · GW(p)
What year do you put the arrival of ASI without nuclear war?
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T17:14:33.688Z · LW(p) · GW(p)
soft asi, semi-general planners that can trounce humans at very difficult and complex well defined tasks, this year or next. hard asi, planners that can trounce humans at absolutely everything in an efficient package, less certain because soft ASI might slow things down rather than speed things up if safety goes well, but by 2028 it seems all but guaranteed to me. I haven't convinced other ppl so they buy it back up when I bet this, similar to the flood of people disagree-voting here. still gonna happen though.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-02-25T21:43:39.783Z · LW(p) · GW(p)
soft asi, semi-general planners that can trounce humans at very difficult and complex well defined tasks, this year or next [...] soft ASI might slow things down rather than speed things up if safety goes well
I think most people expressing predictions about IOI and IMO expect some self-play tricks with proof generation for verification of formal statements, which don't translate into AGI [LW(p) · GW(p)] in a singularity-relevant sense, the same as MCTS for Go and chess doesn't. So while I agree that IOI and IMO bets should be AGI-equivalent (if I'm reading the "slow things down" point correctly), I think the prediction aggregators don't have that claim baked in for such questions.
comment by CZV · 2023-02-03T19:43:18.515Z · LW(p) · GW(p)
Calorie restriction and fasting are probably really good for your health and lifespan, based on current research. This might not be controversial here, but it's extremely controversial on Reddit - or if you try to tell anyone in real life that you're going on a fast.
comment by secondaccount314 · 2023-02-03T18:20:20.060Z · LW(p) · GW(p)
- Polyamory is not just ethically neutral, like homosexuality, but strictly ethically superior to monogamy.
- Usage of psychedelics and some other drugs (primarily, MDMA) has pretty big positive expected long-term utility.
↑ comment by the gears to ascension (lahwran) · 2023-02-03T19:10:52.644Z · LW(p) · GW(p)
ooh those are spicy. I'm not sure I agree about ethical superiority of poly; I think it's perfectly reasonable for two people to reflectively consider how many emotional bonds their brain would be best shaped by, and then seek to have that many. But I also think a lot of people lie to themselves about it due to feeling that poly is morally unacceptable, or due to not filtering their partners on it and ending up wanting to have an open (intermittently more than two) or poly (durably more than two) relationship in a context where they've claimed they want to have and agreed to only have the pair.
re psychedelics, I agree that they can be used well, but I'd caution that they are not a generally safe toy. While they're safer in many ways than is generally accepted right now, MDMA in particular can be quite chemically injurious, and can be habit-forming or cause depression if one doesn't know the limits and safety practices going in. many psychedelics can degrade agency: the learning they induce can cause a person's skills to degrade, leaving them less opportunity over their lifespan to experience the awe, fun, beauty, or learning of psychedelics. this is hardly the only possible outcome, and those who report that the good experience helped their life are probably often correct, but it's easy for the constant surprising-beauty feeling to be confused for constant true insight. Babble needs to be paired with prune, or some of the apparent beauty will turn out to have been misunderstanding.
Also, it's very important to remember that all illegal drugs are potentially quite deadly, and side effects should be considered carefully.
Replies from: secondaccount314↑ comment by secondaccount314 · 2023-02-04T14:41:16.758Z · LW(p) · GW(p)
Okay, I probably should elaborate.
About polyamory:
I use the definition of polyamory like Aella's:
The definition of ‘polyamorous’ that I find cleanest, for me, is not forbidding your partner from having extra-relationship intimacy.
(I didn't borrow the very concept from her, only neat definition)
If you fit this definition, but you just don't want intimacy with anyone besides your partner - I consider you poly. I think this polyamory should be the default option.
(Now that I've thought about it, a more succinct not-exactly-definition might be "fuck jealousy!". All the rest is conclusions.)
About psychedelics:
My bad, I wasn't detailed enough. New, weaker statement: "Responsible (where "responsible" is easy enough that >80% of the population would have no problem implementing it if already-existing safety practices became non-taboo common knowledge) usage of psychedelics and some other drugs (primarily MDMA) has pretty big positive expected long-term utility."
Replies from: None↑ comment by MSRayne · 2023-02-22T00:07:15.657Z · LW(p) · GW(p)
Polyamory is ethically superior to enforced monogamy rooted in jealousy, not to voluntarily chosen monogamy as an expression of devotion in the presence of emotional maturity and the ability to choose compersion instead. As for the psychedelics one, I agree wholeheartedly.
↑ comment by trevor (TrevorWiesinger) · 2023-02-03T20:16:39.738Z · LW(p) · GW(p)
comment by trevor (TrevorWiesinger) · 2023-02-03T02:38:25.570Z · LW(p) · GW(p)
Human brain enhancement must happen extremely slowly in order to be harmless. Due to fundamental chaos theory, such as the n-body problem (or even the three-body problem), it is impossible to predict the results of changing one variable in the human brain, because it will simultaneously change at least two other variables, and all three variables will influence each other at once. The cost of predicting results (including risk estimates of something horrible happening to the person) skyrockets exponentially for every additional second into the future.
Rather than "upgrading" the first person to volunteers (which will be a race to the bottom), the only safe sane way to augment a person is to do simple and obviously safe and desirable things to a volunteer, such as making most people gain the ability to get a full nights rest in only 6 hours of sleep. After meticulous double-blind research to see what side effects are, and waiting for a number of years or even decades, people can try gradually layering a second treatment on top of that. The most recommended option is aging-related issues, since that extends the time limit to observe more interactions and locate any complex problems.
The transhumanist movement is built on a cultural obsession with rapid self-improvement, but that desire for rapid gains is not safe or sane and will result in people racing to the bottom, stirring up all kinds of things in their brains and ending up in bizarre configurations. Rationalists with words should be the first to upgrade people, not transhumanists with medical treatments, as medical treatments will yield unpredictable results and people are right to be suspicious of "theoretical medical upgrades" as being unsafe.
↑ comment by ROM (scipio ) · 2023-02-03T12:56:53.035Z · LW(p) · GW(p)
I'm curious as to why you think this since I mostly believe the opposite.
Do you mean general "induce an organism to gain a function" research (which I agree shouldn't be opposed) or specifically (probably what most people here refer to) "cause a virus to become more pathogenic or lethal"?
Edit;
Your comment originally said you thought GoF research should go ahead. You've edited your comment to make a different point (viral GoF to transhumanist cognition enhancement).
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T20:04:55.495Z · LW(p) · GW(p)
I think they're talking about AI gain of function. Though, they are very similar and will soon become exactly the same thing, as ai and biology merge into the same field; this is already most of the way through happening.
Replies from: TrevorWiesinger↑ comment by trevor (TrevorWiesinger) · 2023-02-03T20:09:35.798Z · LW(p) · GW(p)
This introduces some interesting topics, but the part about "AI gain of function research" is false. I was saying nothing like that. I've never heard "gain of function research" be used to refer to AI before. I was referring to biology, and I have no opinion whatsoever on any use of AI for weapon systems or warfare.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T20:21:39.929Z · LW(p) · GW(p)
ah, okay.
↑ comment by the gears to ascension (lahwran) · 2023-02-04T20:12:08.547Z · LW(p) · GW(p)
whoa wut. This is a completely different comment than it was before. Is it intended to be an equivalent meaning, from your perspective?
↑ comment by Stephen Bennett (GWS) · 2023-02-03T19:30:15.048Z · LW(p) · GW(p)
[Quote removed at Trevor1's request, he has substantially changed his comment since this one].
I expect that the opposite of this is closer to the truth. In particular, I expect that the more often power bends to reason, the easier it will become to make it do so in the future.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T20:21:19.899Z · LW(p) · GW(p)
I agree with this strongly with some complicated caveats I'm not sure how to specify precisely.
comment by Bezzi · 2023-02-05T15:55:36.505Z · LW(p) · GW(p)
The vast majority of people never actually change their minds, at least regarding sensitive topics like religion or political affiliation; the average person develops a moral model at about age 20 and sticks with it until death. If, for example, some old lady doesn't openly criticize gay people like she used to do 50 years ago, it's just because she knows that her views now fall outside the Overton window, not because she changed her opinion.
The main implication of this is that the average person always votes for the same party no matter what, and every election is decided basically by how many people decide simply not to vote rather than voting for their tribe (typically because they feel the party's political line has strayed too far from their immutable views), plus the natural shift resulting from old voters dying and younger people gaining the right to vote. The number of voters who actually switch from one party to another is ridiculously low and doesn't matter in practice (this is less true in democracies with more than two parties, but only because two sufficiently similar parties can compete for the same immutable voter).
Replies from: Alan E Dunne↑ comment by Alan E Dunne · 2023-05-21T16:58:41.120Z · LW(p) · GW(p)
Evidence?
Replies from: Bezzi↑ comment by Bezzi · 2023-05-29T15:37:47.084Z · LW(p) · GW(p)
Well, how many people do you know who switched their vote from one party to another?
I don't discuss voting choices much within my social circle, but I am quite sure that at least 90% of my close relatives are voters of this kind (they don't all vote for the same party, but at an individual level their vote never changes).
↑ comment by andrew sauer (andrew-sauer) · 2023-02-04T02:13:05.636Z · LW(p) · GW(p)
You're on a throwaway account. Why not tell us what some of these "real" controversial topics are?
From what I've seen so far, and my perhaps premature assumptions given my prior experience with people who say the kinds of things you have said, I'm guessing these topics include which minority groups should be excluded from ethical consideration and dealt with in whatever manner is most convenient for the people who actually matter. Am I wrong?
comment by eukaryote · 2023-02-05T02:54:20.861Z · LW(p) · GW(p)
Recipe blogs look like that (having lots of peripheral text and personal stories before getting to the recipe) because they're blogs. They're not trying to get the recipe to you quickly. The thing you're looking for is a cookbook.
(Or allrecipes or something, I guess. "But I want something where a good cook has vetted the recipe - " You want a cookbook. Get Joy of Cooking.)
↑ comment by gilch · 2023-02-04T00:41:57.834Z · LW(p) · GW(p)
Human "races" in its original conception meant what we would call "species" today. That taxonomy included chimpanzees and orangutangs as human "races". Scientific knowledge has progressed considerably since that time. We now only call the genus homo "human" and exclude the other apes. But those other human races no longer exist. The Neanderthals are extinct. Only Homo Sapiens remains.
This is what I mean when I say, "race does not exist". It's a mistaken categorization; it does not carve reality along its natural joints [LW · GW].
That said, I will grant that IQ is largely heritable and includes genetic factors, and that IQ somewhat below average is correlated with criminality, including the violent type. Is that satisfactory?
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-04T02:00:16.743Z · LW(p) · GW(p)
https://www.youtube.com/watch?v=YL6D3iNlt6I (short)
https://www.youtube.com/watch?v=4UoFjme1Sec (long; crappy ai summary - hit expand to see the individual (janky) summaries)
Have a couple of videos that go over a related concept - racialization is a false, noncausal categorization that generates correlations by virtue of people assuming its structure. If you look more closely, you find a wide variety of features, some of which become correlated through prejudice. It'll be interesting to see if that applies to genetics; the racelessness activists I know all insist that there's negligible variation in iq from genetics, whereas I'd insist that variation in iq is definitionally a disease to be treated, and that within-lifetime genetic interventions are a major focus we need to have with transhumanism.
Replies from: gilch↑ comment by gilch · 2023-02-19T07:46:12.611Z · LW(p) · GW(p)
Yeah, that summary is not great, but the longer video was worthwhile. It gave names to categories I was not able to put my finger on before.
The three schools of thought about what "race" is were,
- Naturalism ("it's biological reality")
- Social constructionism ("it's a social reality, but the bio part is fake")
- Skepticism ("The things you're calling 'race' have always been something else: culture, nationality, ethnicity, etc.")
Because I said,
"race does not exist". It's a mistaken categorization; it does not carve reality along its natural joints.
I think that would make me a race skeptic.
The three schools of thought about what should be done about the "race" concept were,
- conservationists ("Keep it." They're pretty much Naturalists.)
- reconstructionists ("Reinterpret it.")
- eliminativists ("Get rid of it. The whole concept just perpetuates Naturalism, and therefore racism.")
I find myself most agreeing with eliminativism, which is the position most congruent with my skepticism. Although I see that I had been a reconstructionist in the past, because that's the default in my cultural milieu, and because I formerly lacked the concepts necessary for eliminativism. Perhaps many other reconstructionists would move towards eliminativism once they had the concepts to think about it. These memes are worth spreading.
I liked the distinction made between colorblindness (a position I considered ideal, but unworkable) and racelessness. The former is the position that race ultimately doesn't/can't/shouldn't matter, at the cost of perhaps ignoring the current racism problem, in the hope that this will eventually make it go away. The latter acknowledges that racism exists (and is a problem, with racialization and the whole "race" concept as symptoms of that problem) without admitting that race exists.
comment by nim · 2023-02-03T19:37:56.920Z · LW(p) · GW(p)
There are a few areas where learning more about a topic has caused me to update my own beliefs into views nuanced and unfashionable enough that I prefer not to disclose them in any setting where others might feel that I was attempting to persuade them to change their own.
One of these areas is the food supply chain. It's fashionable to point out that things would be better if everyone cut out fossil fuels, or ate organic and local, or whatever, and to stop there instead of following the suggestion to its conclusions and side effects. Actually, the carrying capacity of the planet is contingent on modern agriculture, including a lot of genetically modified plants and synthetic fertilizers. "Better" methods, as we currently know them, would feed fewer people from the same amount of land. Nobody seems to like explaining what they think should happen to the couple billion extra people who exist in our current world and wouldn't in their ideal one.
Another area is modern medicine. All I'll say about this is that the implied isomorphism between what we "can" do and what we "should" do does not stand up to much scrutiny. Look up the percentage of medical professionals who have do-not-resuscitate orders, compared to the general population, and have a think about what that might imply.
comment by Teerth Aloke · 2023-02-03T13:34:46.126Z · LW(p) · GW(p)
Ethnic violence is rarely one sided.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-02-03T17:26:11.118Z · LW(p) · GW(p)
Most violence isn't. Most times an oppressive network in power exists - whether that's the neural network of a domestic abuser, the social network of a cult, or the institutional network of a repressive state - the oppressed networks will reply with tit-for-tat (to varying degrees). When that network involves violence, violence will be in the tit-for-tat response. When that network involves ethnic aesthetic prejudice, there will be some ethnic aesthetic prejudice in the tit-for-tat. Defusing the problem requires deescalating and removing both ends of the thing, but also requires recognizing which side of the network is producing more amplification of the conflict, because demanding that the reflecting side deescalate first basically never works. Of course, peer conflicts also exist, but they're rarer than imbalanced conflicts.
comment by Astynax · 2023-02-06T03:57:07.288Z · LW(p) · GW(p)
(IDK what most people think abt just abt anything, so I'll content myself with things many aren't ready to accept.)
Secularism is unstable. Partly because it gets its values from the religion it abandoned, so that the values no longer have foundation, but also empirically because it stops people from reproducing at replacement rate.
Overpopulation is at worst a temporary problem now; the tide has turned.
Identifying someone with lots of letters after his name and accepting his opinions is not following the science, but the opposite. Science takes no one's word, but uses data.
If A says B thinks something and B says "No, I think that's crazy", B is right. That is, mind reading isn't a thing.
What matters about the 2020 US election isn't Trump. It's whether we know how to get away with fraud in future elections and whether we've taken steps to prevent it. Uh-oh.
Rage at people on the other team who want to join yours is a baaaad idea.
↑ comment by Viliam · 2023-02-03T12:59:12.642Z · LW(p) · GW(p)
gay rights, women's rights, anti-LGBT
steal their resources and fertile women.
Uh, I am confused. Gays are trying to steal my fertile women? For what purpose?
(I agree that lesbians are suspicious.)
EDIT:
Okay, I finally get it. Joining the LGBT team as an ally is more profitable than joining a random political team, because if you succeed in winning and stealing the fertile women... it will turn out that the G are actually not interested in them, and the L are not going to impregnate them, so... more fertile women for you, yay!
Well, the B and TL are still competitors, nothing is perfect, but this is still better than a random political movement where ~50% of members will compete with you for the fertile women.
comment by Ben (ben-lang) · 2023-02-03T11:00:55.484Z · LW(p) · GW(p)
On a normal thread upvotes and "agrees" are signs that the reply is hitting the mark.
On this thread "disagrees" are the signs that the reply is hitting the mark.
Replies from: andrew-sauer, lorenzo-rex↑ comment by andrew sauer (andrew-sauer) · 2023-02-04T04:15:27.743Z · LW(p) · GW(p)
To be fair I imagine a lot of the responses are things most people on LW agree with anyway even though they are unpopular. e.g. "there is no heaven, and god is not real."
↑ comment by lorepieri (lorenzo-rex) · 2023-02-03T12:44:49.960Z · LW(p) · GW(p)
True :)
(apart for your reply!)
comment by Slider · 2023-02-07T14:35:18.332Z · LW(p) · GW(p)
I think it is very important to ask the reverse question of "Are there some things, that should I come to know them, I would not be ready to accept?"
Also, if you have a questionnaire, there is going to be some threshold of answers that you will count as noise and not as signal, akin to the lizardman constant [LW · GW]. What things do you only think you are asking but are not actually asking?
Do you have some beliefs such that, if challenged by contrary evidence, you would thereby find the evidence unreliable? Are there things your eyes could send you that would make you Aumann-disagree with your eyes about them being optical sensory organs (aka not believe your eyes)? Do you have any beliefs which would require an infinite amount of evidence to actually challenge (beliefs with less than appreciable doubt, i.e. infinitesimal openness)? Are there any beliefs you are unreasonably hardheaded about (rather than requiring 1,000 times the evidence needed to convince the median human, you would require 1,000,000,000 times the evidence)?
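One way to make those multipliers concrete, as a hedged sketch (treating "times the evidence" as a likelihood-ratio multiplier; the baseline Bayes factor of 20 is made up):

```python
import math

# Hypothetical baseline: the Bayes factor (likelihood ratio) that would
# move the median human on some question. The value 20 is an assumption.
median_bf = 20

for multiplier in (1_000, 1_000_000_000):
    required_bf = median_bf * multiplier
    extra_bits = math.log2(multiplier)   # extra evidence needed, in bits
    print(multiplier, required_bf, round(extra_bits, 1))
# 1000x corresponds to ~10 extra bits; a billion-x to ~30 extra bits.
```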
The bad news is that most questions of the form "are there any..." will be answered in the positive. Rather than asking whether there are such beliefs, we can almost certainly ask which beliefs of yours they are.
Replies from: localdeity↑ comment by localdeity · 2023-02-07T14:58:12.436Z · LW(p) · GW(p)
"Are there some things, that should I come to know them, I would not be ready to accept?"
Candidates that come to mind:
- Having the world succumb to totalitarian dictatorship is actually the best path forward, because it's the best way to stop the world from being destroyed by nuclear war / bioweapons / etc.
- Enlightenment-style promotion of intellectual and scientific freedom is bad because it makes it easier for small, rogue groups to invent world-destroying technologies.
- Efforts to raise the intelligence of people in general are bad for the same reason.
At least, one can say that one should require very, very, very, very strong evidence of the above beliefs before deciding to promote totalitarianism and such.
comment by Noosphere89 (sharmake-farah) · 2023-02-04T15:40:26.179Z · LW(p) · GW(p)
A more general-populace one: the idea that society - especially an abstract grouping - can be improved or harmed can't actually work. That's because most agents within the proposed group aren't aligned with each other, so there is no way to improve everyone. There is no real-world pointer to the concept of "society".
This cashes out in 2 important claims:
-
The AI Alignment field needs more modest goals, like aligning it to a single individual.
-
It gives a justification to why Western culture works better than Eastern culture: It grudgingly and halfheartedly accepts that the individual matters more than society.
A post that shows why this is the case will be linked below:
https://www.lesswrong.com/posts/YYuB8w4nrfWmLzNob/thatcher-s-axiom [LW · GW]
Replies from: TAG, None↑ comment by TAG · 2023-02-04T17:09:13.476Z · LW(p) · GW(p)
You have a some/none/all problem. If every agent has a set of preferences that is disjoint from every other, then there is no pleasing all the people. But that is vanishingly likely, as is every agent having identical preferences...in all likelihood, agents have some preferences in common.
↑ comment by [deleted] · 2023-02-04T17:02:02.713Z · LW(p) · GW(p)
It gives a justification to why Western culture works better than Eastern culture: It grudgingly and halfheartedly accepts that the individual matters more than society.
These are just different tendencies. I'm not sure when and where this notion appeared in public discourse, but it's safe to say that both value statements appeal to the individuals in their respective cultures.
Western culture speaker: "we value individuals!" Western people liked that.
Eastern culture speaker: "we value the group!" Eastern people liked that.
I'm not sure one works better than the other if applied to the other cultural environment though.
I think the distraction of nations competing against each other, which they always have, is blinding people from seeing the domestic divisions within their own countries. I really doubt the current geopolitical climate has any impact on unifying the people within each country, given how public discourse has been carried out in the last few years. It's like people are punching the air in the comfort of their own homes hoping their punches will hit the people on the other side of the planet.
Replies from: Viliam↑ comment by Viliam · 2023-02-04T18:43:14.936Z · LW(p) · GW(p)
Eastern culture speaker: "we value the group!" Eastern people liked that.
Uhm, shouldn't this be "Eastern groups liked that"? I mean, applying this logic consistently to itself, it is completely irrelevant how Eastern individuals feel about these things, as long as the groups are happy...
Replies from: None↑ comment by [deleted] · 2023-02-04T19:22:24.238Z · LW(p) · GW(p)
Or you can say the western groups liked the statement about individuals. When you say a group agrees on a decision, that's a statistical statement. Here we are asking the individuals of the respective cultures, and we are generalizing the individual responses to "these individuals when added together form this generic preference."
Replies from: Viliam↑ comment by Viliam · 2023-02-05T14:28:05.305Z · LW(p) · GW(p)
When you say a group agrees on a decision, that's a statistical statement.
Ah. My model of "a group says X" is that the leader or an otherwise high-status member of the group says X, and everyone else is too afraid to disagree publicly. (Privately they may either agree, or disagree, or just want to be left alone.)
The idea that a group belief is a statistical statement about the opinions of the individuals already assumes that the (low-status) individuals matter.
Replies from: None↑ comment by [deleted] · 2023-02-05T15:42:16.494Z · LW(p) · GW(p)
That is a very valid concern regarding sampling and group bias in decision theory. This is also why studies in the social sciences tend to have a lot of unaccounted-for confounding variables, which makes it hard to draw broad conclusions from the data. People who read social science papers probably understand caveats that the general public doesn't have the knowledge and experience to appreciate. r/science
↑ comment by the gears to ascension (lahwran) · 2023-02-03T18:39:30.193Z · LW(p) · GW(p)
notice that your phrasing deagentizes women
Replies from: jimrandomh, andrew-sauer↑ comment by jimrandomh · 2023-02-04T05:41:57.689Z · LW(p) · GW(p)
The author of the deleted comment (the grandparent of this comment) was banned for being an alt of a banned account (and possibly also for the content of that comment; another moderator handled it before I saw this and I didn't check).
Replies from: Raemon↑ comment by andrew sauer (andrew-sauer) · 2023-02-04T04:23:03.205Z · LW(p) · GW(p)
I don't think he cares.
comment by niknoble · 2023-02-05T02:42:38.285Z · LW(p) · GW(p)
You can deduce a lot about someone's personality from the shape of his face.
I don't know if this is really that controversial. The people who do casting for movies clearly understand it.
Replies from: philh↑ comment by philh · 2023-02-08T13:32:38.193Z · LW(p) · GW(p)
The people who do casting for movies clearly understand it.
I think "it" here is different from the thing you're claiming.
What I think it's clear they understand: many people do in practice deduce a lot about someone's personality from their face.
What I think you're claiming: such deductions can be made with some reasonable degree of accuracy. (I assume you also claim: this holds even if you exclude people with medically legible characteristics like Down syndrome or fetal alcohol syndrome.)
(It's plausible-but-unclear-to-me that your claim is true, and separately plausible-but-unclear-to-me that the people who make movies understand it.)
Replies from: Bezzi↑ comment by Bezzi · 2023-02-08T15:09:36.005Z · LW(p) · GW(p)
Well, something along the lines of "deducing personality from shape of the skull and other facial characteristics" used to be official science.
comment by hold_my_fish · 2023-02-04T18:50:22.479Z · LW(p) · GW(p)
Genetics will soon be more modifiable than environment, in humans.
Let's first briefly see why this is true. Polygenic selection of embryos is already available commercially (from Genomic Prediction). It currently only has a weak effect, but In Vitro Gametogenesis (IVG) will dramatically strengthen the effect. IVG has already been demonstrated in mice, and there are several research labs and startups working on making it possible in humans. Additionally, genetic editing continues to improve and may become relevant as well.
The difficulty of modifying the environment is just due to having already picked the low-hanging fruit there. If environmental interventions were easy and effective, they'd be used already. That doesn't mean there's nothing useful to do here, just that it's hard. Genetics, on the other hand, still has all its low-hanging fruit ripe to pluck.
Here's why I think people aren't ready to accept this: the idea that genetics is practically immutable is built deep into the worldviews of people who have an opinion on it at all. This leads to an argument dynamic where progressives (in the sense of favoring change) underplay the influence of genetics while conservatives (in the sense of opposing change) exaggerate it. What happens to these arguments when high heritability of a trait means that it's easy to change?
See also this related 2014 SSC post: https://slatestarcodex.com/2014/09/10/society-is-fixed-biology-is-mutable/.
Replies from: sharmake-farah↑ comment by Noosphere89 (sharmake-farah) · 2023-02-14T18:32:34.628Z · LW(p) · GW(p)
I do not agree, or at least this needs to be more qualified. I definitely agree that genetics has more potential long term, though right now there's one massive problem: pretty much all of the breakthroughs affect the gametes only, and thus only alter your offspring.
There needs to be a lot more progress on somatic gene editing before any of this is used in practice.
Replies from: hold_my_fish↑ comment by hold_my_fish · 2023-02-14T23:20:36.450Z · LW(p) · GW(p)
Indeed, the benefit for already-born people is harder to foresee. That depends on more-distant biotech innovations. It could be that they come quickly (making embryo interventions less relevant) or slowly (making embryo interventions very important).
comment by Chris Land · 2023-02-05T21:20:38.072Z · LW(p) · GW(p)
'Humor' is universal. It's the same kind of cognitive experience everywhere and every time it happens, despite the fact that individual manifestations diverge wildly and even contradict one another. It's true even though every example of humor (meaning, a thing some observers find funny) is also a thing that other observers find not funny.
comment by WithoutBeauty · 2023-02-05T00:42:23.040Z · LW(p) · GW(p)
We know very little about Ancient Egypt: how they made things, and the provenance of their artefacts.
↑ comment by andrew sauer (andrew-sauer) · 2023-02-04T02:09:02.890Z · LW(p) · GW(p)
First of all there are plenty of people throughout history who have legitimately been fighting for a greater altruistic cause. It's just that most people, most of the time, are not. And when people engage in empty virtue signaling regarding a cause, that has no bearing on how worthy that cause actually is, just on how much that particular person actually cares, which often isn't that much.
As for the "subjective nonsense" that is morality, lots of things are subjective and yet important and worthy of consideration. Such as pain. Or consent. Or horror.
When people talk about how morality is bullshit, I wonder how well they'd fare in a world where everybody else agreed with them on that. There may be no objective morality baked into the universe, but that doesn't mean you'll suffer less if somebody decides that means they get to torture you. After all, the harm they're doing to you can't be objectively measured, so it's fine right?
Also, I'm somewhat curious how people fighting against things like racism, sexism and anti-LGBT attitudes serves the evolutionary purpose of dehumanizing people so they can kill them and steal their resources and women (who I suppose to you are just another kind of resource). Although it's very clear how fighting for those things can help with that.
comment by Thomas · 2023-02-05T09:21:16.402Z · LW(p) · GW(p)
CO2 is rather quick to leave the atmosphere by dissolving in water. If that weren't so, the lakes in the mountains would be without life, but they aren't. It's CO2 that enables photosynthesis there, nothing else. The same CO2 which, not so long ago, was still in the air.
Dissolving CO2 in water is also a big thing in the (Ant)Arctic oceans. A lot of life there is witness to that.
Every cold raindrop has some CO2 captured.
So that story of "CO2 persisting in the atmosphere for centuries" is just wrong.
Replies from: ChristianKl↑ comment by ChristianKl · 2023-02-05T15:59:38.824Z · LW(p) · GW(p)
Dissolving CO2 in water is also a big thing in (Ant)Arctic oceans.
Ocean acidification is generally seen as one of the problems of climate change. While it might not be a problem in some bodies of water that otherwise would have little carbon, it's a problem in the major oceans.
It's a factor that's fully accounted for in the climate models.
Replies from: Thomas↑ comment by Thomas · 2023-02-05T16:55:06.015Z · LW(p) · GW(p)
So, it does not remain in the atmosphere?
Replies from: ChristianKl↑ comment by ChristianKl · 2023-02-05T17:08:18.711Z · LW(p) · GW(p)
Some of it goes from the atmosphere into the oceans. Other parts go from the ocean into the atmosphere.
There are complex computer models that estimate all those flows.