How much should we care about non-human animals?
post by bokov (bokov-1) · 2022-11-04T21:36:57.836Z · LW · GW · 8 comments
This is a link post for https://www.lesswrong.com/posts/hRqop6A6ho2dyhpSC/open-letter-against-reckless-nuclear-escalation-and-use?commentId=AEJwJfvWF2MSpFiYP
This is inspired by a long and passionate post [LW(p) · GW(p)] from Bernd Clemens Huber [LW · GW]. It was off-topic in its original context so I will respond to it here.
Briefly, Bernd makes a convincing case that most animals living in the wild spend most of their time experiencing fear, hunger, and suffering. He then goes on to say that this imposes an ethical obligation on humans to do everything possible to prevent seeding life on other planets to avoid spreading more suffering. Bernd, please respond and correct me if I'm not accurately summarizing your position.
I see two implicit axioms here that I would like to question explicitly.
- What if I don't buy the axiom that suffering is worse than non-existence? It would take a lot of fear and pain for me to consider my life not worth living. I would live as a brain in a jar rather than die. Most people probably place a lower value than I do on continuing to experience life no matter what, but that implies that the value of existence is subjective: you cannot choose for other individuals whether their lives are worth living, let alone for entire species.
Perhaps the hope that one's descendants will someday escape scarcity and predation, as humans have, makes one's current suffering "worth it".
- What if I don't buy the axiom that it's my ethical duty to prevent the suffering of all other beings? What if I'm comfortable with the idea that the people in my limited monkey-sphere are the ones I'm truly concerned about, with that concern radiating out some distance to strangers because what happens to them could come back to haunt me and those close to me? The more removed someone is from me, the fewer resources I should expend per unit of their suffering.
Some of LessWrong's famous dilemmas can be seen instead as a reductio ad absurdum argument for my distance-penalized concern model:
- They eat their young?!!! [LW · GW] Let aliens be alien, as long as it's not our young.
- Someone is trying to get me to do something by helping/harming a large number of "copies" of me in some simulation? I let them enjoy their creepy video game, and this instance of me will continue living my life exactly as I have been in this instance of reality.
- I have to give up on the dream of my descendants exploring the galaxy and possibly also be on the hook to solve the very difficult problem of providing a happy life for the billions of lifeforms already inhabiting this planet? [LW(p) · GW(p)] No, I do not. There are probably non-human organisms I'll need to protect for the sake of people I care about. But if there is no impact on people I care about, then the only reason to care about non-human suffering is this axiom, which I reject because it does not contribute to my sense of purpose and well-being while conflicting with axioms that do.
Note: If someone has already made these points, I'd be grateful for a link. Thanks.
8 comments
Comments sorted by top scores.
comment by Viliam · 2022-11-06T09:53:17.735Z · LW(p) · GW(p)
"The more removed someone is from me, the fewer resources I should expend per unit of their suffering."
We could make this ethical theory quantifiable by using some constant E (a coefficient in the exponent of the distance-care function) such that E=1 means you care about everyone's suffering equally and E=0 means you do not care about anyone else's suffering at all. Then we could argue that the optimal value is e.g. E=0.7635554, and perhaps conclude that people with E > 0.7635554 are just virtue signalling, while people with E < 0.7635554 are selfish assholes.
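One hypothetical way to make this concrete (a minimal sketch; the functional form weight = E^distance, the example distances, and the suffering values are all invented for illustration, not anything specified in the comment above):

```python
# Toy model of "distance-penalized concern" (illustrative assumptions only).
# The weight E ** distance is chosen so that E = 1 weights everyone's
# suffering equally and E = 0 weights only your own (distance 0).

def care_weight(distance: float, E: float) -> float:
    """Weight applied to one unit of suffering at a given social distance."""
    return E ** distance

def total_weighted_suffering(beings: list[tuple[float, float]], E: float) -> float:
    """Sum of care_weight(distance) * suffering over (distance, suffering) pairs."""
    return sum(care_weight(d, E) * s for d, s in beings)

# Hypothetical example: yourself, a friend, a stranger, a wild animal.
beings = [(0.0, 1.0), (1.0, 2.0), (3.0, 5.0), (6.0, 10.0)]
for E in (1.0, 0.7635554, 0.0):
    print(f"E = {E}: total weighted suffering = {total_weighted_suffering(beings, E):.3f}")
```

Under this toy weighting, raising E smoothly interpolates between pure self-concern and weighting every being's suffering equally, which is the knob the comment above jokes about tuning to an "optimal" value.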
comment by Bernd Clemens Huber · 2022-11-06T03:12:10.417Z · LW(p) · GW(p)
Consider the following (for the purpose of avoiding unnecessary further delay) to be an extremely informal pre-editing version of my response and explanation attempts:
TO BE EDITED:
Please allow me to briefly explain how one ought to determine ethical value.
For an ethically relevant value or summand to come into existence in this universe, it takes 3 components: something that gives rise to, or generates, the ethically relevant stimulus; the presence of a sentience capable of receiving experiential stimuli; and finally a connecting structure for communicating the ethically relevant stimulus from its generator to a sentience, so that it is received by at least one sentience. Only when all 3 of these components are active and functioning together is a (ultimately numerically representable) summand - a process-specific and generally finite level of goodness or badness - added to all the goodness and badness ever generated anywhere in anyone in the universe. Any combination of just 2 of these, or just 1, does not suffice for generating ethically relevant positive or negative value. Parts of the brain appear to play the role of the sentience-providing receiver in this interplay; neuron networks constitute the communicating component that transports experiential stimuli; and finally, receptors triggered chemically via neurotransmitters, or mechanically or thermally (as with nociceptors, i.e. pain receptors, or receptors at dopamine and serotonin releases), prepare the potential experiential stimulus so that it acts on this world only once it is received by some sentient being, similar to how the cone receptors in the retina of the eye prepare specific colour experiences to then be seen by someone.
The ethical maxim that is unique in its axiomatic, fundamental position and superordinate to all other principles is then to maximize the total well-being that this universe generates throughout its development. This means behaving such that, by the end of the universe, as little suffering and as much joy as possible has been generated. Obviously, a task that requires accounting for the far future and the complete consequences of actions comes with risks and uncertainties that one must be utmost careful and mindful about.
Now, on the question of whether and how much animals (besides humans, Homo sapiens, a species that has no clear-cut definitional boundary separating it from other animals anyway) ought to be accounted for in ethical terms, let me lay out the following argument, which I think makes a strong case for their moral relevance:
Assume that the presence and interplay of the above 3 components - pain/joy receptors, neuronal connections, and a sentience-generating part of a brain - were not already present in animals at the very beginning of the evolution of life, but rather emerged throughout its development in accordance with nature's sorting process, which determines survival via species' fitness and in part also via pure chance.
Then for any animal species that does have pain receptors of any of the many kinds, if there is not also a substantial portion (e.g. 50%) of that species around that has no pain receptors at all (in the case of humanity, that might be the vast minority of roughly 320,000 people worldwide with congenital insensitivity to pain), this means the presence of those pain receptors helped in evolution's arms race, but only for the portion of the species that had them, while those without them died out or became a small minority.
And this in turn means that there had to be sentience present in those animals, to provide a receiving end that the pain receptors could act on and thereby affect behavior in a way that ultimately guides toward survival (i.e. species, evolving over hundreds of millions of years, paying for survival with further and further increased levels of pain, up to saturation of the effect). Otherwise the very first emergence of pain receptors would have made no difference to a species' fitness, and so there would have been no relative disadvantage for those individuals of the same species that still lacked pain receptors.
I suppose that genetically modifying a primitive enough lifeform or microbe so that it develops various kinds of pain receptors, without it then reacting to them or being influenced by them in any way, may indicate a lack of consciousness in those organisms.
"How much should we care about non-human animals?"
^> Besides that, who says evolution cannot lead to human-like species if kick-started by us elsewhere, e.g. under the ice shells of ice moons...
-> And humans are animals, too; where would one place a sharp or gradual cut-off then? And who says kick-starting evolution cannot lead to humans or human-like species?
^> First, I should probably apologize for having only just (somewhat by chance, too) taken notice of the newly opened thread... I'm new to the forum and its structure... and didn't plan to check it out in full generality, nor on a regular basis.
^> There are a lot of considerations and explanations to go over for a response: neuro-chemistry as the physical causal tie to (so far seemingly all) ethically relevant experiences; distinguishing the case of never having existed and then not existing, versus already existing and asking when and in what manner one's course of life eventually ends one way or another - and, in that regard, how dying may not be an escape from the universe, may not evade being forced a second time to exist in it, the same way one was forced the first time to exist in it;
^> If neuro-chemical processes were not the sole, all-encompassing mechanism by which joy and pain can come into perceived, experienced existence - if other speculated states, situations, or processes could do so too, such as the state of being dead - then that could conceivably change the assessment. But there is no evidence in either direction: whether it is enjoyable or uncomfortable not to have existed yet, or to have existed but already be dead; nor is it clear which side would dominate, or whether it would be a constant, uniform feeling or variable and dependent on something else. If it were constantly bad, that would ruin this universe's chance of generating anything other than up-to-infinitely-negative meaning, since that badness would accumulate for everyone for all the time during which they are dead, for however long the universe may last, which could be infinitely long. Just as much on the other side (since, in the face of lacking evidence for either side, one should perhaps consider a statement and its negation equally plausible): if up-to-infinite joy were somehow generated for everyone for all their time of being dead, then all finite ethical experiential summands in this qualia dimension, whether negative or positive, would become irrelevant; nothing would matter anymore in terms of changing what would inevitably happen anyway, except for it being at most finitely delayed, possibly. Personally, given that there are states of being that approximate what it may be like to be dead - namely deep sleep, numbness via anesthetics, or being knocked out by inhaling gas for a surgical operation - without known anecdotal reports of people having felt joy or suffering after such experiences (though dreams may behave differently in that case from being numbed or in a coma). And since partial numbing also
[^> Then of course there's the question of whether others even exist with the same qualitative implications as oneself, but that skepticism can be brought up about any kind of ethical framework and doesn't have to imply incompatibility with them or with their implications for right courses of action.]
^> Regarding the "just having limited levels of care" point: that is just a common way of setting an arbitrary level of moral behavior above which to try to be or stay, and if set lower and lower it can allow literally anything to pass such a threshold as acceptable. The true principle is well-being optimization either way, and positive well-being that is lacking through one's own fault is just as bad (as a difference in well-being) as an equal difference in additionally caused suffering.
"What if don't buy the axiom that it's my ethical duty to prevent the suffering of all other beings? "
^> That would ultimately just be an expression of, or the extent of, doubt in others' existence, as if they didn't count.
^> And here I could copy-paste a long elaboration on that, with a comparison to how one would manage parts of one's own body, of which one has no doubt that they exist, and how others should be treated as equally real (despite [in this age, so far] lacking the technological capability to somehow connect oneself up with others, at the level of neuron networks, synapses, or otherwise, to help improve one's belief in others' actual existence).
Yes. I see the general meaning of life as the optimization of pleasant feelings over unpleasant ones, with everyone and everything included, across all places and the entire future. And for that one has to understand the world. The contributors/summands of this "mathematics of morality", or ethics, arise by all appearances on the one hand via dopamine and serotonin release etc., and on the other hand also via pain receptors (https://de.wikipedia.org/wiki/Nozizeptor), and the summands are valid - weighted only according to a uniform, general cause-and-effect relationship - whenever there is someone who feels them, which can also be an animal.
It is, at least in principle, not ruled out - depending on the "nature of our universe" - that the longed-for principle of general well-being maximization could be incompatible with the principle of justice. In that case I would decide against justice, if that were actually necessary for better overall well-being maximization. If a whole set of options for equally maximizing overall well-being is available, then safer, more stable options are to be preferred, and if that aspect too is equal across a set of remaining options, then presumably the most just option among them should be chosen. And if there are still several options equivalent in all these aspects... then perhaps one can differentiate by the number of beings involved.
Why would I "drop" justice? Well, if only a minimal overall well-being advantage could be gained through extreme injustice compared to a just option, then it would be understandable to still prefer the just option (especially when dealing with unknowns and uncertainties). But in the extreme case, when overall well-being would become arbitrarily catastrophic just in order to preserve justice, then even justice is no longer tenable. And one can view one's personal situation in the universe like this: at the beginning of life one was assigned a place, which can be unjust to an arbitrarily large degree. Such a place corresponds, in a sense, to a track from the famous trolley problem. Depending on circumstances, this track can initially have a higher priority/importance (of not being run over) than other tracks on which others (metaphorically) sit, or - objectively seen - less priority than other tracks. Over the course of life the situation develops, and positions can change due to the decisions of those involved. But still: even in the ideal case one has been "assigned a track", and if one is reasonable one can accept that. If one then behaves such that, knowing one may be sitting on a "less important track", one still acts altruistically, or contributes to making an overall well-being maximization possible when everyone is considered and included - i.e. does not obstruct it toward others for egoistic reasons, even if that may, out of necessity, entail injustices - then, when one is affected by injustice oneself, one can at least complain to the universe about which initial place one was given.
That share of injustice is then the universe's fault, not one's own, since one has no control over where, when, in what situation, and under what circumstances one begins to live.
And if one has truly, completely internalized empathy, then one also behaves as if others were as real as one's own body parts. And if one is then born "as a foot", it can simply, as a plain matter of fact owed to circumstances, be the case that in order to achieve the best for all, one has to proverbially endure somewhat more than others (but also the other way around, if one has - as determined by the universe's intrinsically unfair assignment mechanism - drawn a better lot, as long as one is also aware of one's actual position relative to others), for the greater whole, so to speak. It does not help if the foot (the individual) refuses cooperation with the body (everyone) for fairness reasons (for which the others are not to blame either, since their positions were also assigned to them by the universe).
^> And regarding the pseudo-dilemmas like the utility monster: those do not show overall well-being maximization to be a wrong goal; they just lay out roles or positions one could occupy in such scenarios that one wouldn't like. But yes, that is not excluded; it is rather a test of the strength of one's belief in others' ethical relevance. If being fed to some utility monster were actually to generate more well-being, and one fully understood that to be the case, it would come down to one's ability and courage to cooperate, to make sacrifices for the well-being of others, and to accept the bad, unfair situation one can be born into - which, sadly, the universe creates all the time. Coping isn't a way out of that, nor a healthy way to deal with it.
^> I should possibly also mention that similar but weaker, less elaborated and explicated anti-colonization arguments have been brought up by Brian Tomasik, Persis Eskander, and possibly others (well, I guess also including those who agreed with my emails' points).
[it's funny how my point is described as convincing with those 28 foolish downvotes...]
^> And I guess it would also be an appropriate place and time to explain that there is only 1 true, axiomatically true ethical principle, with other principles always being subordinate to it, and not exempt from ethical scrutiny:
"alien rights are inalienable"
There is only 1 actually axiomatically valid, applicable principle, and that's that whatever further maximizes total well-being across all time is preferable to what does so less.
All other principles, insofar as they attempt to compete with this axiomatic principle, are trash 🚮
They are heuristic guidelines, deontological rules, fallacies,
because all of them can easily be led into contradiction
by an abstract, simple method with placeholders into which one can put those principles, forcing them to fail as soon as they hit just the critical scenarios that make them fail.
That cannot happen with the total well-being maximization principle, by definition and by its axiomatic ties to ethics.
well... okay, one more point to clarify on this:
Let's say total well-being maximization is the principle that works out no matter what - no matter what universe, what rules, what anything. It is a principle that cannot be forced to stumble over its own shoes.
But it is also possible to have equivalently good principles for specific cases, i.e. when one is provided with further information.
Similar to mathematical frameworks, one could either choose to start out with the axioms themselves,
or, alternatively, one could define the starting point to be the full set of all the very first statements deducible from these axioms.
Then one wouldn't need the initial axioms anymore, because for anything one would need them for - to prove something, to get to some deduction - there would be a findable intermediate step toward it that is already part of these statements closest to the axioms.
Or, in real life, if for a given total well-being optimization problem one had certain knowledge of which specific course of action or decision would actually accomplish that,
then one could, for this specific situation, substitute the total well-being principle with that other, more specific one.
And I mean, for doing actual optimization, one would want to get to more and more concrete statements, starting from the safest abstract ones with the most general applicability no matter what.
But for that, one usually needs to get information, data about the system one is dealing with, which specifies the situation away from the whole generality of situations one could conceivably have.
Once more and more cases are ruled out as irrelevant or not part of the situation one is dealing with, the portions of a statement that would address those impossible cases become moot, irrelevant.
Obviously in practice, it gets complicated as soon as unknown important factors have to be considered, and when dealing with mere probabilities, not certainties anymore.
But that's a separate problem from the theory side of it.
Well... technically, I cannot know with absolute certainty that "human liberties are inalienable" is not deducible from the general total well-being maximizing principle plus all specific circumstantial world-state conditions and natural laws (both of which could enable one to further specify what, concretely, it actually means to optimize total well-being), but I'd say it seems near certain that it is not.
all other principles.... are not principles but... secondiciples.... at best "disciples" of this principle
^> I probably should also point out my recent theoretical observations:
Pain, and the spread of pain receptors across and through animal bodies, is basically like a currency from the evolutionary perspective,
because they provide a way to get information, allowing for higher fitness, so species can pay for survival with more pain (receptors)
(or rather... those without them, without that guidance, get sorted out and are more likely to die),
and as long as pain provides a bonus, an exploit, a "pay 2 win" advantage usable for survival... evolution is going to make use of it extensively.
"Evolution - a P2W, pain-to-win, game"
If you have pain receptors that make you feel your level of malnutrition or hunger all the time, that can guide an animal (or prehistoric people, too) on when it can rest and when it has to hunt.
If you have pain receptors that are mechanically triggered, then you can notice when a body part has been under pressure for too long - due to gravity, which constantly applies to everything (depending on the exoplanet's gravitational strength) - because it starts feeling uncomfortable, a feeling that is hard to get rid of and that one kind of has to feel somewhere all the time. The same goes for lifting limbs up; then the pressure is in the interior that holds them up against gravity.
Pain receptors that are mechanically triggered via scars and wounds help an animal notice them, be more careful with that body part, lick over it as an attempt at disinfection... whatever comes to mind among the various kinds of information paid for that way.
Some pain-related feelings are just constant costs.
(but again, wrong channel...)
If you can sense bad, uncomfortable smells, that can help you avoid bacterial and viral infections.
And the fact that there are so many different kinds makes the probability that at least some of them would also arise in exoplanet evolutions seem non-negligible.
Meanwhile, positive stimuli only need to do the bare minimum to signal what to do, except for those associated with reproduction...
Animal species can't even laugh. That in and of itself is sad, if you think about it.
Then with receptors for heat and cold... similar deal as above.
But I was speaking generally, for surely most species
I guess humans are the furthest evolved death avoidance machines
though that doesn't mean pain avoidance machines
However, if one has congenital insensitivity to pain in an age like our high-tech age...
assisted/guided by technology, eventually they could be both:
the most evolved death-avoidance... [well, I shouldn't call humans machines, but yeah] people, while also being able to avoid (never feel) pain (except, iirc, they are still susceptible to some kinds of pain, like psychological stress(?)), and that for a species far into evolution,
and not near the beginning of evolution, where one could assume there were epochs before any species was yet capable of feeling heat, or cold, or smelling something disgusting,
and epochs before scars hurt, before pressure on body parts got nasty over time, before hunger, etc.
I wonder in what order those came to be.
"there may be more to come or find out to exist already, though, who knows"
Would seem like cheating against evil nature
"knowing smth to be bad for one's survival without having to pay the pain to know it"
sticking it back to the grim mother nature, I like it
Though people with congenital insensitivity to pain are a small minority, like 320,000 or so, iirc. But they may be the future of humanity eventually,
if humanity eventually starts trying to optimize on that front of experienced suffering.
For extremely dangerous missions, they could also be more inclined to agree, or able to go further than other humans could go; and what for normal people would be harsh punishments for mistakes, they may... not have as hard feelings about.
I'd gladly leave the future of the planet to them if they could somehow ensure/promise not to expand into space or play god, despite lacking the immediate, first-hand, most direct understanding/awareness of the spectrum of kinds of pain/suffering which they could, by such ventures, bring upon others.
Humanity possibly should be glad we have them.
I wonder what the ratio of people with that "disease" to normal people would be for advanced civilizations. Technology may allow them to have a high ratio without the disadvantages weighing in as much as they otherwise would.
Over time, for the population's portion of people with that insensitivity, their physiology might adjust further to naturally, "automatically" make it less likely that they'd by accident hurt themselves, even when unassisted by technology to avoid such.
That basically may come with or imply costs in terms of reduced agility, mobility, as we can (if we wanted) (mis-)bite our lip or bend fingers too far back or... idk
I wonder in what century or millennium humanity might - if ever - enter the phase where we "hand over the steering wheel" to the pain-insensitive ones among us.
Probably would be a slow, continuous process.
^> I could bring up so many further relevant aspects (though I'm not sure how far, how completely I should take a response... depending on what may be needed), but conceivably plausible points, like the "holy grail of neuro-chemistry" pondering:
Actually, some thoughts on more advanced civilizations:
I guess the temporary fantasy/delusion that advanced alien civilizations may be scientifically hunting for (but seemingly without success by anyone, or the universe wouldn't look the way it does) would be the neuro-chemical holy grail: some kind of receptor capable of generating (and then communicating via neuron networks) extremely pleasurable stimuli, i.e. a search for the highest-joy-intensity receptor. Except that, in principle, it could also depend on how large such a receptor would be, if it existed, and by how much its intensity level exceeded that of other such receptors: if it were too heavy, required too much material, too many atoms, then there would be a chance that a different use of the same atomic material, arranged differently - namely, possibly, into a multitude of lower-intensity positive stimuli - could end up being preferable.
^> Or the point on hidden ethically dominant decision-determining factors, even for seemingly far detached banal decision problems:
And then a bit about decision-making:
Also, given that (for decision-making directed at overall total well-being optimization) the choice among many possible decisions or actions on a given topic is dominantly determined by each action's effects and implications with regard to direct and indirect forwards-contamination (basically, how it might barely, maybe only after the 10th digit past the decimal point, increase the percentage chance of further or extended - and, for the vast majority of animals, gruesome - evolutions of life), this means that the further removed a banal, extremely low-level decision topic is (like taking the bus or the car to get somewhere), the less intuitive, clear, or apparently connected to the action the true, applicable ethical reasoning for what actually is the better decision becomes. At some level of detachment of a low-level decision problem, the butterfly effect and chaos theory apply more and more, meaning that any ethical logic's and assessment's grip on providing guidance for the right decision becomes looser and looser. It still seems an interesting remark to make, especially since such reasoning (the further it is detached from something like the extremely important matter of playing god, or better not doing that) will meet less and less understanding from others, which has important implications of its own (it may lead to further arguments and problems). But that also means that, if so, this has to be accounted for beforehand, in the initial assessment of how to decide; and so at some point the decision that avoids such conflict rooted in ignorance can (ironically) be forced by that very fact to be the better one, even if another decision would have been better had others had the understanding needed to allow it to be better when it perhaps should be.
Or summarized differently: there can exist matters in the universe of such extreme ethical importance (by being so large-scale in how many beings' fates and quality of life depend on them, and for how long) that the magnitude of this large-scale-ness can compensate for, or outweigh, the fact that some banal, seemingly (sufficiently) detached topic affects this extremely important matter only at a really low digit past the decimal point (or via extremely low probability changes/improvements associated with the banal matter's decision). That is one way the importance of, or the access to, the right logical path for correctly deciding banal matters can be hidden; it can, for example, lead around many corners of possibly required lateral thinking in order to get there.
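To make the structure of this claim concrete, here is a toy expected-value comparison (all numbers are invented placeholders; the comment itself gives no figures): even a probability shift far past the decimal point can dominate a banal decision's direct effect once the stakes are astronomically large.

```python
# Toy expected-value comparison (all figures are made-up placeholders).

# Direct, local effect of a banal decision (e.g. bus vs. car), in arbitrary
# "well-being units":
direct_effect = 1.0

# Hypothetical indirect effect: a 1e-10 probability shift on an outcome
# affecting 1e18 future beings, each with 1e3 units of well-being at stake.
probability_shift = 1e-10
beings_affected = 1e18
stake_per_being = 1e3

indirect_effect = probability_shift * beings_affected * stake_per_being  # 1e11

print(f"direct term:   {direct_effect:.1e}")
print(f"indirect term: {indirect_effect:.1e}")
print("indirect term dominates" if indirect_effect > direct_effect else "direct term dominates")
```

With these made-up magnitudes the indirect term exceeds the direct one by eleven orders of magnitude, which is the sense in which a "hidden" large-scale factor can dominate a banal decision, at least until chaos-theoretic uncertainty erases any confidence in the sign of the probability shift.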
^> If I went really far with it, I could also lay out my proto-theory of the mind, but that would "burst the frame"... be too much.
^> As for the AI concerns... I mean, they are legitimate too, separate from and parallel to the other concern, but I guess I could paste over my thoughts on that from PMs with pmbpanther...
^> Though besides these points, there are many more lines of reasoning against risking playing god already in the 21st century... it may force such a triggered evolution onto far less good (or far worse than just bad) pathways which, once initiated, may be impossible or extremely complicated to revert. It's hard enough to deal with invasive species between continents on Earth, especially once they spread rapidly, exponentially... at some point, even the most effective possible rate of trying to get rid of an invasive species may no longer be able to compete with the exponential growth once it has spread far enough. That would lock humanity out, for all future, of better kinds of evolutions of life (though, again, there is reason to believe that, counted from beginning to end, there is no such thing), leaving all the hundreds or thousands of generations to come to deal with the specific random, accidental kind of evolution of life that the 21st-century generations of humanity light-heartedly triggered, rather than having any further science done on this - for which one could almost surely expect a ton of progress to still be possible for centuries, millennia, maybe even tens of thousands of years, given how complicated biospheres and evolution are. And this argument holds independently of what the particular ethical level of an evolution of life would be, independent of how good or bad it would be: it is near certain that if humanity, for god's sake, could just bring up some patience on this matter, completely unnecessary, extremely large-scale catastrophes could be avoided; i.e. far weaker assumptions about the starting situation would suffice for deducing that it is a bad idea to risk kick-starting life not even 100 years after having started to become at all capable of doing such a thing...
^> And then there's the "they wouldn't deserve a better god themselves if they'd play as recklessly god that octillions then would be subjected to, themselves" argument:
In a sense, those that irresponsibly risk kick-starting whole evolutions of life would be extreme hypocrites if they'd have wanted for some kind of responsible, caring god to have been some initial root cause of their own existence. They at that point kind of wouldn't have deserved that themselves, based on standards established by their very own behavior.
^> And then there are potential further implications, or even egoistic reasons, for deterrence from playing god, in case generalized re-incarnation based on physicalism does happen to apply:
If it turns out that the method of assignment of a person, an "I", to a body is in part determined by particle identity, then maybe "threatening" them (or rather, forcing them to consider a more empathic perspective) could work: telling them that some time in the future, some people among humanity might dig up their graves, take what remains of their deceased brains, and send those to the very same ice moons or (exo-)planets they forwards-contaminated centuries ago, so that they would have a chance of getting a taste of the consequences of their own doing if they were to re-incarnate as an animal there once or a few times. Maybe just bringing up this hypothetical scenario would deter them from doing so.
At least there is a physically plausible reason to believe in re-incarnation, albeit with a likely or possibly extremely low chance of occurrence, even if it may at first glance sound outlandish or absurd.
In response to that video, I once elaborated on the concept in detail (if anyone may be interested):
https://www.youtube.com/watch?v=h6fcK_fRYaI So on the topic of this video and reincarnation: actually (just in case people may not see much merit in at least seeing how it could exist in principle as a realistic, applicable "mechanic" of the universe), I've come to some interesting considerations regarding it.
Though I should first mention that by reincarnation I mean it less, and not necessarily, in the sense that the literal etymological origins of the word may allude to (a consciousness later living again, carried in another human body), but in a more generalized, abstract sense: the phases in which a given person's consciousness is present have one or more significant gaps (in duration, or in difference of physical composition before and after, more so than from sleeping or having been in a coma) in which the consciousness didn't exist - just like, e.g., the set that is the union of the intervals [0,1] and [2,3] has a gap in it.
The first consideration is that the universe (quite evidently so, it would seem to me) has proven capable of incarnating every consciousness that has ever existed at least once (let's say in this universe, that is). That is at least a good start if one were trying to show that this has a not entirely philosophically ruled-out chance of occurring not only once but multiple times: if, before one's consciousness started to exist, one had been able to think from that perspective about the universe's capability to give rise to one's own consciousness in the future (rather than someone else's, which seems to occur in the overwhelming number of cases), and one had guessed or "gambled" against the universe being able to do so, then for any consciousness that actually was given rise to by the universe in some manner, one would have been proven wrong by the universe.
So when it comes to trying to estimate one's future outlook (of emerging once more as a consciousness, or not, with or - probably rather - without remnants of previous memories), from a perspective that corresponds to a (qualitatively) different situation than the one before: if the rules that govern the in-time dynamics of everything are themselves timelessly constant (which seems plausible to assume for our universe), and additionally if (what may seem unlikely to be true for our universe) the dynamics of everything (in particular, all contents of the universe that undergo changes by the laws governing them) is cyclical in an ever-repeating, time-loop-like manner (or enters a loop at some point), so that everything relevant to the future behavior of the universe is at some point in time the same as at another, unequal point in time - then the universe's proof of being able to make a consciousness exist once (somewhat like, but not quite like, the base case of a mathematical proof by induction, since the absolute certainty of mathematical proofs does not carry over to applications in non-theoretical physics) would be supplied with an induction-step argument, with which one could prove not only 1 further instance of incarnation but at least a countable infinity of them, for all the consciousnesses that are part of the loop.
However, without granting us any certainty about the above (assumed just for the sake of speculative argument) characteristic of the universe eventually looping back to a state it was already in before (or doing so, but without timelessness of its governing laws), the induction step is missing, and hence the question remains open. It may depend on the particular future behavior of the universe, which may or may not - locally in time and space - lead back to circumstances equivalent to those that gave rise to a given consciousness' previous or initial existence, in order to do so again. But at least (if one still attempted to argue in favor of the universe's capability to do so): although the universe in general undergoes a rather uniform but seemingly non-cyclical dynamic, this process appears to happen slowly, across very large time-scales; and secondly, the universe seems to have generated very many sub-systems of much smaller scale, many of which may, in their long-term evolution across time, come into very similar particle-arrangement states to each other - which, depending on the true conditions for giving rise to a previously present consciousness again, might be close enough to do it again, for example in the case of earth-like exoplanets with respect to humans.
But depending on which aspects of the (to us unknown but necessarily existing) truth behind this phenomenon (the universe being able to bring consciousnesses into existence when they weren't present before) are relevant for differentiating which consciousness it is (if any at all) that some process in the universe may give rise to in the future - local conditions such as certain molecular structures forming again at the same time or the same place, or with particles merely indistinguishable with respect to physical forces and behaviors, or with the very same particles in the very same arrangement they came together in before - the chances for any kind of "reincarnation" of anyone can conceivably differ very much. And personally, intuitively, I would not at all expect the chances of such a scenario repeating to be particularly high, nor am I sure it would even be a reasonable conclusion to hope for it. But either way, if anything, it would be a line of reasoning of this type that would make me believe in the universe's principal capability to reincarnate in this sense, rather than the reasons that Buddhism provides, even though they may share the same conclusion.
https://neurosciencenews.com/l5p-neuron-conscious-awareness-14997/
But if some variant of this is true, that means there can be valid "roundabout egoistic reasons" for altruism, too.
"not even death may be a safe evasion from the furnace universe"
-> If we kick-start evolution elsewhere, all further generations (which can be up to VERY VERY MANY) are going to be mad at us. The question then really is just how much, how bad, and how many mistakes there will be...
-> There really are enough reasons... (let alone the higher-priority problems we have on Earth alone; no need to create copies of the graph of problems that the International Science Council put up in 2021, which I could also bring up).
^> bokov's gonna have to wait until I can properly format my response, to be less informal...
^^^^^^^
I'll need multiple response posts for all this...
-> And then there would surely be many more points of consideration that I noted down over the years but cannot quickly find and pack in here... it would need time; I don't know all of them exhaustively by heart either, which is why I make notes.
The response to my comment down there indicates that the forum moderators apparently are unaware that, due to urgent and far higher-priority matters of informing and warning more people about impending forwards-contamination-risk space projects, the choice was between no response at all, a proper response at significant cost of time I could use otherwise, or an informal quick response (which I ultimately decided on, because I think it should suffice for the time being), given how much I have to say and properly formulate on the topic. That this isn't considered, and is instead reacted to so completely unnecessarily harshly without prior communication, is unfortunate. Though it's possible that the informality of my post is just a pretext, and that in reality the true reason is the many downvotes my posts have gotten - despite the great irony that bokov, the person with the most upvoted comment in the thread bokov links to from here, found the points made in my by far most downvoted comment there very convincing.
vvv
↑ comment by Ruby · 2022-11-10T02:42:07.907Z · LW(p) · GW(p)
Hi Bernd,
I'm very sorry, but while I am sympathetic to your viewpoint and arguments, I feel your manner of communication (axe-grindy, extremely and inappropriately long comments on not-quite-relevant posts, this comment here even lapsing into German without explanation) is not a good fit for LessWrong. After chatting with Raemon, we feel it better if you weren't on LessWrong, and I am regrettably disabling your ability to post and comment. Our experience with other users who had similar commenting patterns is that improvements are unlikely, so I do feel it's best not to drag things out and go for the full disabling. (Sorry about doing this in public – I like to be transparent about mod actions.)
If you'd like to discuss, please use team@lesswrong.com
↑ comment by TekhneMakre · 2022-11-10T21:42:34.904Z · LW(p) · GW(p)
[I appreciate you and the team putting in the work to keep LW a good place, and don't want to make it a thing where any mod action is met with "well you should have done this other more arduous thing".]
Maybe there's a "simple" fix that adds a "small" amount of moderator effort, and gives people like Bernd a bit more of a shot? The thing that comes to mind is: add an additional status between banned and and not banned, which is "restricted". Then make a few "simple" changes to the behavior of restricted accounts. Maybe there's one or a few that would direct better-suited behavior without further mod action. E.g.: restricted accounts can't post comments greater than X thousand characters. E.g. restricted accounts can't post more than X times in an hour.
↑ comment by Ruby · 2022-11-11T01:08:30.537Z · LW(p) · GW(p)
[I appreciate your preamble. Thank you for the feedback and suggestion! Appreciated.]
We've actually just recently built "rate limits" for accounts, as something in-between no action and banning. I have a draft post about our moderation philosophy and approach that I want to get out in the next few days.
In this case I felt that it was better to skip the intermediary steps though, just going on experience with different types of users and likely outcomes.
↑ comment by bokov (bokov-1) · 2022-11-10T09:30:22.678Z · LW(p) · GW(p)
I'm sad to see him go. I don't know enough about LW's history, and have too little experience with forum moderation, to agree or disagree with your decision. Though LW has been around for a very long time without imploding, so that's evidence you guys know what you're doing.
Please don't take down his post though. I believe somewhere in there is a good faith opinion at odds with my own. I want to read and understand it. Just not ready for this much reading tonight.
I wish I could write so prolifically! Or maybe it's a curse rather than a blessing because then it becomes an obstacle to people understanding your point of view.
↑ comment by Jarred Filmer (4thWayWastrel) · 2022-11-10T22:05:05.906Z · LW(p) · GW(p)
"pain and pain receptor spread across and through animal bodies basically is like a currency for the evolutionary perspective... so species can pay for survival with more pain(receptors)"
I found this interesting to mull over; an interesting property of pleasure and pain is that they act as a universal measure of value, making trade-offs easier.