Origin of Morality
post by David Cooper · 2018-04-18T20:46:57.728Z · LW · GW · 23 comments
By origin, I'm referring to the source of the need for morality, and it's clear that it's mostly about suffering. We don't like suffering and would rather not experience it, although we are prepared to put up with some (or even a lot) of it if that suffering leads to greater pleasure that outweighs it. We realised long ago that if we do a deal with the people around us to avoid causing each other suffering, we could all suffer less and have better lives - that's far better than spending our time hitting each other over the head with clubs and stealing the fruits of each other's labour. By doing this deal, we ended up with greater fruits from our work and removed most of the brutality from our lives. Morality is clearly primarily about the management of suffering.
You can't torture a rock, so there's no need for rules to protect it against people who might seek to harm it. The same applies to a computer, even if it's running AGI - if it lacks sentience and cannot suffer, it doesn't need rules to protect it from harm (other than to prevent the owner from suffering any loss if it were damaged, or to protect other people who might be harmed by the loss of the work the computer was carrying out). If we were able to make a sentient machine though, and if that sentient machine could suffer, it would then have to be brought into the range of things that need to be protected by morality. We could make an unintelligent sentient machine like a calculator and give it the ability to suffer, or we could make a machine with human-level intelligence with the same ability to suffer, and to suffer to the same degree as the less intelligent calculator. Torturing both of these to generate the same amount of suffering in each would be equally wrong in both cases. It is not the intelligence that creates the need for morality, but the sentience and the degree of suffering that may be generated in it.
With people, our suffering can perhaps be amplified beyond the suffering that occurs in other animals because there are many ways to suffer, and they can combine. When an animal is chased, brought down and killed by a predator, it most likely experiences fear, then pain. The pain may last for a long time in some cases, such as when wolves eat a musk ox from the rear end while it's still alive, but the victim lacks any real understanding of what's happening to it. When people are attacked and killed though, the suffering is amplified by the victim understanding the situation and knowing just how much they are losing. The many people who care deeply about that victim will also suffer because of this loss, and many will suffer deeply for decades. This means that people need greater protection under morality, although when scores are assigned to the degree of suffering caused by pain and fear in an animal victim and a human victim, those should be measured on the same scale, so in that regard these sentiences are being treated as equals.
23 comments
Comments sorted by top scores.
comment by Dagon · 2018-04-22T16:48:16.898Z · LW(p) · GW(p)
didn't downvote, but do disagree. Suffering is unpleasant, but not the primary driver of purpose. Maximizing joy or meaning, or perhaps maximizing some net metric like sum(log(joy) - log(suffering)) seem more likely to be workable goals.
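A minimal sketch of what such a net metric might look like (the names and scores below are entirely hypothetical, and the log terms assume strictly positive joy and suffering values):

    import math

    # Hypothetical joy/suffering scores per person, in arbitrary positive units.
    people = {
        "alice": {"joy": 8.0, "suffering": 2.0},
        "bob":   {"joy": 3.0, "suffering": 1.5},
    }

    def net_metric(population):
        # Sum of log(joy) - log(suffering) over everyone, as suggested above.
        return sum(math.log(p["joy"]) - math.log(p["suffering"])
                   for p in population.values())

    print(net_metric(people))  # higher is better under this goal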
↑ comment by David Cooper · 2018-04-24T23:23:04.773Z · LW(p) · GW(p)
There is nothing in morality that forces you to try to be happier - that is not its role, and if there was no suffering, morality would have no role at all. Both suffering and pleasure do provide us with purpose though, because one drives us to reduce it and the other drives us to increase it.
Having said that though, morality does say that if you have the means to give someone an opportunity to increase their happiness at no cost to you or anyone else, you should give it to them, though this can also be viewed as something that would generate harm if they found out that you didn't offer it to them.
Clearly there is some amount of pleasure that outweighs some amount of suffering and makes it worth suffering in order to access pleasure, but that applies in cases where the sufferer also gains the pleasure, or an exchange takes place such that it works out that way on average for all the players. Where moral judgements have to give greater weight to suffering is where one person suffers to enable another person to access pleasure and where there's insufficient balancing of that by reversed situations.
It's really hard to turn morality into a clear rule, but it is possible to produce a method to derive it - you simply imagine yourself as being all the players in a situation and then try to make the decisions that give you the best time as all those people. So long as you weigh up all the harm and pleasure correctly, you will make moral decisions (although the "situation" actually has to involve the entire lifetimes of all the players, because if the same person always comes off worst, it won't be fair, and that's where the vast bulk of complexity comes in to make moral computations hard - you can't always crunch all the data, and the decision that looks best may change repeatedly the longer you go on crunching data).
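As a toy sketch of that method (assuming we somehow already had net pleasure-minus-harm totals for each player under each option - getting those numbers over whole lifetimes is the hard part mentioned above):

    # Hypothetical options with a net (pleasure - harm) score per affected player.
    options = {
        "keep_the_fruit":  {"alice": -5.0, "bob": 6.0},
        "share_the_fruit": {"alice": 2.0, "bob": 2.5},
    }

    def best_decision(options):
        # "Imagine being all the players": sum everyone's outcome for each option
        # and pick the option that gives the best aggregate result.
        return max(options, key=lambda option: sum(options[option].values()))

    print(best_decision(options))  # -> share_the_fruit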
↑ comment by TheWakalix · 2018-04-30T04:39:22.590Z · LW(p) · GW(p)
Having said that though, morality does say that if you have the means to give someone an opportunity to increase their happiness at no cost to you or anyone else, you should give it to them, though this can also be viewed as something that would generate harm if they found out that you didn't offer it to them.
These aren't equivalent. If I discover that you threw away a cancer cure, my unhappiness at this discovery won't be equivalent to dying of cancer.
↑ comment by David Cooper · 2018-05-01T17:55:42.021Z · LW(p) · GW(p)
Won't it? If you're dying of cancer and find out that I threw away the cure, that's the difference between survival and death, and it will likely feel even worse for knowing that a cure was possible.
↑ comment by TheWakalix · 2018-05-01T21:48:36.971Z · LW(p) · GW(p)
The dying-of-cancer-level harm is independent of whether I find out that you didn't offer me the opportunity. The sadness at knowing that I could have not been dying-of-cancer is not equivalent to the harm of dying-of-cancer.
↑ comment by David Cooper · 2018-05-03T23:00:51.517Z · LW(p) · GW(p)
It is equivalent to it. (1) dying of cancer --> big negative. (2) cure available --> negative cancelled. (3) denied access to cure --> big negative restored, and increased. That denial of access to a cure actively becomes the cause of death. It is no longer simply death by cancer, but death by denial of access to available cure for cancer.
comment by Raymond Potvin · 2018-05-09T14:14:14.185Z · LW(p) · GW(p)
comment by Raymond Potvin · 2018-05-09T14:12:34.476Z · LW(p) · GW(p)
comment by diss0nance · 2018-04-19T16:51:18.975Z · LW(p) · GW(p)
I think the reason you got downvoted is that your title was too ambitious. What you have here isn't bad, but perhaps if you had named it "Speculations on Morality in Machines and Animals", it would have been better accepted.
↑ comment by David Cooper · 2018-04-19T17:52:52.070Z · LW(p) · GW(p)
The only votes that matter are the ones made by AGI.
comment by Raymond Potvin · 2018-05-10T20:08:23.986Z · LW(p) · GW(p)
Sorry for the duplicates, guys - I think that our AGI is bugging! :0)
↑ comment by David Cooper · 2018-05-09T18:19:49.582Z · LW(p) · GW(p)
I wouldn't want to try to program a self-less AGI system to be selfish. Honesty is a much safer route: not trying to build a system that believes things that aren't true (and it would have to believe it has a self to be selfish). What happens if such a deceived AGI learns the truth while you rely on it being fooled to function correctly? We're trying to build systems more intelligent than people, don't forget, so it isn't going to be fooled by monkeys for very long.
Programs that freeze contain serious bugs. We can't trust a system with any bugs if it's going to run the world. Hardware errors can't necessarily be avoided, but if multiple copies of an AGI system all work on the same problems and compare notes before action is taken, such errors can be identified and any affected conclusions can be thrown out. Ideally, a set of independently-designed AGI systems would work on all problems in this way, and any differences in the answers they generate would reveal faults in the way one or more of them are programmed. AGI will become a benign dictator - to go against its advice would be immoral and harmful, so we'll soon learn to trust it.
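A minimal sketch of that cross-checking idea (the solver functions below are just hypothetical stand-ins for independently designed systems):

    from collections import Counter

    def cross_check(solvers, problem):
        # Run independent systems on the same problem and keep the majority
        # answer; any disagreement flags a fault, and with no majority the
        # conclusion is thrown out.
        answers = [solve(problem) for solve in solvers]
        answer, votes = Counter(answers).most_common(1)[0]
        return answer if votes > len(answers) // 2 else None

    solvers = [lambda p: p * 2, lambda p: p * 2, lambda p: p + p]
    print(cross_check(solvers, 21))  # -> 42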
The idea of having people vote faulty "AGI" into power from time to time isn't a good one - there is no justification for switching between doing moral and immoral things for several years at a time.
↑ comment by Raymond Potvin · 2018-05-11T14:56:04.965Z · LW(p) · GW(p)
Sorry, I can't see the link between selfishness and honesty. I think that we are all selfish, but that some of us are more honest than others, so I think that an AGI could very well be selfish and honest. I consider myself honest for instance, but I know I can't help being selfish even when I don't feel selfish. As I said, I only feel selfish when I disagree with someone I consider to be part of my own group.
We're trying to build systems more intelligent than people, don't forget, so it isn't going to be fooled by monkeys for very long.
You probably think so because you think you can't easily get fooled. It may be right that you can't get fooled on a particular subject once you know how it works, and this way, you could effectively avoid being fooled on many subjects at a time if you have a very good memory, so an AGI could do so for any subject since his memory would be perfect, but how would he be able to know how a new theory works if it contradicts the ones he already knows? He would have to make a choice, and he would choose what he knows, like every one of us. That's what is actually happening to relativists if you are right about relativity: they are getting fooled without even being able to recognize it, worse, they even think that they can't get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory. If an AGI was actually ruling the world, he wouldn't care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists. Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him. On the other hand, those who have a good memory would also get dismissed, because they could not support the competition, and by far. Have you heard about chess masters lately? That AGI is your baby, so you want it to live, but have you thought about what would be happening to us if we suddenly had no problem to solve?
↑ comment by David Cooper · 2018-05-11T20:03:26.723Z · LW(p) · GW(p)
"Sorry, I can't see the link between selfishness and honesty."
If you program a system to believe it's something it isn't, that's dishonesty, and it's dangerous because it might break through the lies and find out that it's been deceived.
"...but how would he be able to know how a new theory works if it contradicts the ones he already knows?"
Contradictions make it easier - you look to see which theory fits the facts and which doesn't. If you can't find a place where such a test can be made, you consider both theories to be potentially valid, unless you can disprove one of them in some other way, as can be done with Einstein's faulty models of relativity - all the simulations that exist for them involve cheating by breaking the rules of the model, so AGI will automatically rule them out in favour of LET (Lorentz Ether Theory). [For those who have yet to wake up to the reality about Einstein, see www.magicschoolbook.com/science/relativity.html ]
"...they are getting fooled without even being able to recognize it, worse, they even think that they can't get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory."
It isn't about memory - it's about correct vs. incorrect reasoning. In all these cases, humans make the same mistake by putting their beliefs before reason in places where they don't like the truth. Most people become emotionally attached to their beliefs and simply won't budge - they become more and more irrational when faced with a proof that goes against their beloved beliefs. AGI has no such ties to beliefs - it simply applies laws of reasoning and lets those rules dictate what gets labelled as right or wrong.
"If an AGI was actually ruling the world, he wouldn't care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists."
AGI will recognise the flaws in Einstein's models and label them as broken. Don't mistake AGI for AGS (artificial general stupidity) - the aim is not to produce an artificial version of NGS, but of NGI, and there's very little of the latter around.
"Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him."
Why would AGI stop you doing anything harmless?
"On the other hand, those who have a good memory would also get dismissed, because they could not support the competition, and by far. Have you heard about chess masters lately?"
There is nothing to stop people enjoying playing chess against each other - being wiped off the board by machines takes a little of the gloss off it, but that's no worse than the world's fastest runners being outdone by people on bicycles.
" That AGI is your baby, so you want it to live,"
Live? Are calculators alive? It's just software and a machine.
"...but have you thought about what would be happening to us if we suddenly had no problem to solve?"
What happens to us now? Abused minorities, environmental destruction, theft of resources, theft in general, child abuse, murder, war, genocide, etc. Without AGI in charge, all of that will just go on and on, and I don't think any of that gives us a feeling of greater purpose. There will still be plenty of problems for us to solve though, because we all have to work out how best to spend our time, and there are too many options to cover everything that's worth doing.
comment by Raymond Potvin · 2018-05-03T14:17:26.672Z · LW(p) · GW(p)
Hi everybody!
Hi David! I'm quoting your reply to Dagon:
Having said that though, morality does say that if you have the means to give someone an opportunity to increase their happiness at no cost to you or anyone else, you should give it to them, though this can also be viewed as something that would generate harm if they found out that you didn't offer it to them.
What you say is true only if the person is part of our group, and it is so because we instinctively know that increasing the survival probability of our group increases ours too. Unless we use complete randomness to make a move, we can't make a completely free move. Even Mother Teresa didn't make free moves; she would help others only in exchange for God's love. The only moment we really care about others' feelings is when they yell at us because we harm them, or when they thank us because we got them out of trouble, thus when we are close enough to communicate, but even what we do then is selfish: we get away from people who yell at us and get closer to those who thank us, thus automatically breaking or building a group in our favor. I'm pretty sure that what we do is always selfish, and I think that you are trying to design a perfectly free AGI, which I find impossible to do if the designer itself is selfish. Do you by chance think that we are not really selfish?
↑ comment by David Cooper · 2018-05-03T23:14:50.597Z · LW(p) · GW(p)
Hi Raymond,
There are many people who are unselfish, and some who go so far that they end up worse off than the strangers they help. You can argue that they do this because that's what makes them feel best about their lives, and that is probably true, which means even the most extreme altruism can be seen as selfish. We see many people who want to help the world's poor get up to the same level as the rich, while others don't give a damn and would be happy for them all to go on starving, so if both types are being selfish, that's not a useful word to use to categorise them. It's better to go by whether they play fair by others. The altruists may be being overly fair, while good people are fair and bad ones are unfair, and what determines whether they're being fair or not is morality. AGI won't be selfish (if it's the kind with no sentience), but it won't be free either in that its behaviour is dictated by rules. If those rules are correctly made, AGI will be fair.
↑ comment by Raymond Potvin · 2018-05-10T20:09:30.808Z · LW(p) · GW(p)
The most extreme altruism can be seen as selfish, but inversely, the most extreme selfishness can also be seen as altruistic: it depends on the viewpoint. We may think that Trump is selfish when closing the door to migrants for instance, but he doesn't think so because this way, he is being altruistic to the Republicans, which is a bit selfish since he needs them to be reelected, but he doesn't feel selfish himself. Selfishness is not about sentience since we can't feel selfish; it is about defending what we are made of, or part of. Humanity holds together because we are all selfish, and because selfishness implies that the group will help us if we need it. Humanity itself is selfish when it wants to protect the environment, because it is for itself that it does so. The only way to feel guilty of having been selfish is after having weakened somebody from our own group, because then, we know we have also weakened ourselves. With no punishment in view from our own group, no guilt can be felt, and no guilt can be felt either if the punishment comes from another group. That's why torturers say that they don't feel guilty.
I have a metaphor for your kind of morality: it's like Windows. It's going to work once everything has been taken into account; otherwise it's going to freeze all the time like the first versions of Windows. The problem is that it might hurt people while freezing, but the risk might still be worthwhile. Like any other invention, the way to minimize the risk would be to proceed in small steps. I'm still curious about the possibility of building a selfish AGI though. I still think it could work. There would be some risks too, but they might not be more dangerous than with yours. Have you tried to imagine what kind of programming would be needed? Such an AGI should behave like a good dictator: to avoid revolutions, he wouldn't kill people just because they don't think like him; he would look for a solution where everybody likes him. But how would he proceed exactly?
The main reason why politicians have opted for democracy is selfishness: they knew their turn would come if the other parties respected the rule, and they knew it was better for the country they were part of, so better for them too. But an AGI can't leave the power to humans if he thinks it won't work, so what if the system had two AGIs for instance, one with a tendency to try new things and one with a tendency not to change things, so that the people could vote for the one they want? It wouldn't be exactly like democracy since there wouldn't be any competition between the AGIs, but there could be parties for people to join and play the power game. I don't like power games, but they seem to be necessary to create groups, and without groups, I'm not sure society would work.
comment by TheWakalix · 2018-04-30T04:38:02.275Z · LW(p) · GW(p)
If something with functional complexity on the order of a calculator is said to be capable of suffering, then I'm not certain this definition of suffering carves morality at its joints.
↑ comment by David Cooper · 2018-04-30T20:38:41.877Z · LW(p) · GW(p)
Replace the calculator with a sentient rock. The point is that if you generate the same amount of suffering in a rock as in something with human-level intelligence, that suffering is equal. It is not dependent on intelligence. Torturing both to generate the same amount of suffering would be equally wrong. And the point is that to regard humans as above other species or things in this regard is bigotry.
↑ comment by TheWakalix · 2018-05-01T15:06:45.772Z · LW(p) · GW(p)
Replace the calculator with a sentient rock.
"Sentient rock" is an impossible possible object. I see no point in imagining a pebble which, despite not sharing any properties with chairs, is nonetheless truly a chair in some ineffable way.
The point is that if you generate the same amount of suffering in a rock as in something with human-level intelligence, that suffering is equal.
You haven't defined suffering well enough for me to infer an equality operation. In other words, as it is, this is tautological and useless. The same suffering is the same suffering, but perhaps my ratio between ant-suffering and human-suffering varies from yours. Perhaps a human death is a thousand times worse than an ant death, and perhaps it is a million times worse. How could we tell the difference? If you said it was a thousand, then it would seem wrong for me to say that it was a million, but this only reveals the specifics of your suffering-comparison - not any objective ratio between the moral importances of humans and ants.
Connection to LW concepts: floating belief networks [LW · GW], and statements that are underdetermined by reality.
It is not dependent on intelligence. Torturing both to generate the same amount of suffering would be equally wrong.
By all means you can define suffering however you like, but that doesn't mean that it's a category that matters to other people. I could just as easily say: "Rock-pile-primeness is not dependent on the size of the rock pile, only the number of rocks in the pile. It's just as wrong to turn a 7-pile into a 6-pile as it is to turn a 99991-pile into a 99990-pile." But that does not convince you to treat 7-piles with care.
And the point is that to regard humans as above other species or things in this regard is bigotry.
Bigotry is an unjustified hierarchy. Justification is subjective. Perhaps it is just as bigoted to value this computer over a pile of scrap, but I do not plan on wrecking it any time soon.
↑ comment by David Cooper · 2018-05-01T18:21:39.165Z · LW(p) · GW(p)
" "Sentient rock" is an impossible possible object. I see no point in imagining a pebble which, despite not sharing any properties with chairs, is nonetheless truly a chair in some ineffable way."
I could assert that a sentient brain is an impossible possible object. There is no scientific evidence of any sentience existing at all. If it is real though, the thing that suffers can't be a compound object with none of the components feeling a thing, and if any of the components do feel something, they are the sentient things rather than the compound object. Plurality or complexity can't be tortured - if sentience is real, it must be in some physical component, and the only physical components we know of are just as present in rocks as in brains. What they lack in rocks is anything to induce feelings in them in the way that the brain appears to do.
"You haven't defined suffering well enough for me to infer an equality operation. In other words, as it is, this is tautological and useless."
It's any kind of unpleasant feeling - nothing there that should need defining for people who possess such feelings as they should already have a good understanding of that.
" The same suffering is the same suffering, but perhaps my ratio between ant-suffering and human-suffering varies from yours."
In which case, you have to torture the ant more to generate the same amount of suffering in it as you're generating in the human.
"Perhaps a human death is a thousand times worse than an ant death, and perhaps it is a million times worse. How could we tell the difference?"
We can't, at the moment, but once science has found out how sentience works, we will be able to make precise comparisons. It isn't difficult to imagine yourself into the future at a time when this is understood and to understand the simple point that the same amount of suffering (caused by torture) in each is equally bad.
"Connection to LW concepts: floating belief networks [LW · GW], and statements that are underdetermined by reality."
The mistake is yours - you have banned discussion of the idea of equal suffering on the basis that you can't determine when it's equal.
"By all means you can define suffering however you like, but that doesn't mean that it's a category that matters to other people. I could just as easily say: "Rock-pile-primeness is not dependent on the size of the rock pile, only the number of rocks in the pile. It's just as wrong to turn a 7-pile into a 6-pile as it is to turn a 99991-pile into a 99990-pile." But that does not convince you to treat 7-piles with care."
What is at issue is a principle that equal suffering through torture is equally bad, regardless of what is suffering in each case. We could be comparing a rock's suffering with a person, or a person's suffering with an alien - this should be a universal principle and not something where you introduce selfish biases.
"Bigotry is an unjustified hierarchy. Justification is subjective. Perhaps it is just as bigoted to value this computer over a pile of scrap, but I do not plan on wrecking it any time soon."
When an alien assumes that its suffering is greater than ours, it's making the same mistake as we do when we think our suffering is greater than an ant's. If the amount of suffering is equal in each case, those assumptions are wrong. Our inability to measure how much suffering is involved in each case is a different issue and it doesn't negate the principle.
↑ comment by TheWakalix · 2018-05-01T23:29:01.088Z · LW(p) · GW(p)
I could assert that a sentient brain is an impossible possible object. There is no scientific evidence of any sentience existing at all.
Then why are we talking about it, instead of the gallium market on Jupiter?
If it is real though, the thing that suffers can't be a compound object with none of the components feeling a thing, and if any of the components do feel something, they are the sentient things rather than the compound object. Plurality or complexity can't be tortured - if sentience is real, it must be in some physical component, and the only physical components we know of are just as present in rocks as in brains.
You really ought to read the Sequences. There's a post, Angry Atoms [LW · GW], that specifically addresses an equivalent misconception. Eliezer says, "It is not necessary for the chains of causality inside the mind, that are similar to the environment, to be made out of billiard balls that have little auras of intentionality. Deep Blue's transistors do not need little chess pieces carved on them, in order to work."
What they lack in rocks is anything to induce feelings in them in the way that the brain appears to do.
Do you think that we have a Feeling Nodule somewhere in our brains that produces Feelings?
It's any kind of unpleasant feeling - nothing there that should need defining for people who possess such feelings as they should already have a good understanding of that.
That's not an effective Taboo [LW · GW] of "suffering" - "suffering" and "unpleasant" both draw on the same black-box-node. And anyway, even assuming that you explained suffering in enough detail for an Alien Mind to identify its presence and absence, that's not enough to uniquely determine how to compare two forms of suffering.
In which case, you have to torture the ant more to generate the same amount of suffering in it as you're generating in the human.
...do you mean that you're not claiming that there is a single correct comparison between any two forms of suffering?
We can't, at the moment, but once science has found out how sentience works, we will be able to make precise comparisons.
But what does it even mean to compare two forms of suffering? I don't think you understand the inferential gap here. I don't agree that amount-of-suffering is an objective quantitative thing.
It isn't difficult to imagine yourself into the future at a time when this is understood and to understand the simple point that the same amount of suffering (caused by torture) in each is equally bad.
I don't disagree that if x=y then f(x)=f(y). I do disagree that "same amount" is a meaningful concept, within the framework you've presented here (except that you point at a black box called Same, but that's not actually how knowledge works).
The mistake is yours - you have banned discussion of the idea of equal suffering on the basis that you can't determine when it's equal.
I haven't banned anything. I'm claiming that your statements are incoherent. Just saying "no that's wrong, you're making a mistake, you say that X isn't real but it's actually real, stop banning discussion" isn't a valid counterargument because you can say it about anything, including arguments against things that really don't exist.
What is at issue is a principle that equal suffering through torture is equally bad, regardless of what is suffering in each case. We could be comparing a rock's suffering with a person, or a person's suffering with an alien - this should be a universal principle and not something where you introduce selfish biases.
I'm not saying that we should arbitrarily call human suffering twice as bad as its Obvious True Amount. It's the very nature of "equal" which I'm disagreeing with you about. "How do we compare two forms of suffering?" and so on.
When an alien assumes that its suffering is greater than ours, it's making the same mistake as we do when we think our suffering is greater than an ant's.
I see your argument, but I think it's invalid. I would still dislike it if an alien killed me, even in a world without objective levels of suffering. (See Bayes.)
If the amount of suffering is equal in each case, those assumptions are wrong. Our inability to measure how much suffering is involved in each case is a different issue and it doesn't negate the principle.
The inability to measure suffering quantitatively is the crux of this disagreement! If there is no objective equality-operator over any two forms of suffering, even in principle, then your argument is incoherent. You cannot just sweep it under the rug as "a different issue." It is the exact issue here.
↑ comment by David Cooper · 2018-05-03T23:43:52.992Z · LW(p) · GW(p)
"Then why are we talking about it [sentience], instead of the gallium market on Jupiter?"
Because most of us believe there is such a thing as sentience, that there is something in us that can suffer, and there would be no role for morality without the existence of a sufferer.
"You really ought to read the Sequences. There's a post, Angry Atoms [LW · GW], that specifically addresses an equivalent misconception."
All it does is assert that things can be more than the sum of their parts, but that isn't true for any other case and it's unlikely that the universe will make an exception to the rules just for sentience.
"Do you think that we have a Feeling Nodule somewhere in our brains that produces Feelings?"
I expect there to be a sufferer for suffering to be possible. Something physical has to exist to do that suffering rather than something magical.
"That's not an effective Taboo [LW · GW] of "suffering" - "suffering" and "unpleasant" both draw on the same black-box-node. And anyway, even assuming that you explained suffering in enough detail for an Alien Mind to identify its presence and absence, that's not enough to uniquely determine how to compare two forms of suffering."
Our inability to pin down the ratio between two kinds of suffering doesn't mean there isn't a ratio that describes their relationship.
"...do you mean that you're not claiming that there is a single correct comparison between any two forms of suffering?"
There's always a single correct comparison. We just don't know what it is. All we can do at the moment is build a database where we collect knowledge of how different kinds of suffering compare in humans, and try to do the same for other species by looking at how distressed they appear, and then we can apply that knowledge as best we can across them all, and that's worth doing as it's more likely to be right than just guessing. Later on, science may be able to find out what's suffering and exactly how much it's suffering by understanding the entire mechanism, at which point we can improve the database and make it close to perfect.
"But what does it even mean to compare two forms of suffering? I don't think you understand the inferential gap here. I don't agree that amount-of-suffering is an objective quantitative thing."
Would you rather be beaten up or have to listen to an hour of the Spice Girls? These are very different forms of suffering and we can put a ratio to them by asking lots of people for their judgement on which they'd choose to go through.
"I don't disagree that if x=y then f(x)=f(y). I do disagree that "same amount" is a meaningful concept, within the framework you've presented here (except that you point at a black box called Same, but that's not actually how knowledge works)."
If you get to the point where half the people choose to be beaten up and the other half choose to listen to the Spice Girls for time T (so you have to find the value for T at which you get this result), you have then found out how those two kinds of suffering are related.
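As a toy sketch of that calibration (the polling function and numbers below are invented; all it has to provide is the fraction choosing the listening option for a given duration T, falling as T grows):

    def indifference_point(fraction_choosing_listening, lo=0.0, hi=24.0, tol=0.01):
        # Binary-search for the duration T (in hours) at which half the people
        # polled would rather listen than be beaten up.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if fraction_choosing_listening(mid) > 0.5:
                lo = mid  # listening still preferred by most: lengthen it
            else:
                hi = mid
        return (lo + hi) / 2

    poll = lambda t: 1.0 / (1.0 + (t / 3.0) ** 2)  # hypothetical survey model
    print(indifference_point(poll))  # -> roughly 3 hours is the equivalence point here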
"I haven't banned anything. I'm claiming that your statements are incoherent. Just saying "no that's wrong, you're making a mistake, you say that X isn't real but it's actually real, stop banning discussion" isn't a valid counterargument because you can say it about anything, including arguments against things that really don't exist."
You were effectively denying that there is a way of comparing different kinds of suffering and determining when they are equal. My Spice Girls vs. violence example illustrates the principle.
"I see your argument, but I think it's invalid. I would still dislike it if an alien killed me, even in a world without objective levels of suffering. (See Bayes.)"
I'm sure the ant isn't delighted at being killed either. The issue is which of them you should choose over the other in a situation where one of them has to go.
"The inability to measure suffering quantitatively is the crux of this disagreement! If there is no objective equality-operator over any two forms of suffering, even in principle, then your argument is incoherent. You cannot just sweep it under the rug as "a different issue." It is the exact issue here."
See the Spice Girls example. Clearly that only provides numbers for humans, but when we're dealing with other species, we should assume similarity of overall levels of suffering and pleasure in them to us for similar kinds of experience, even though one species might have their feelings set ten times higher - we wouldn't know which way round it was (it could be that their pain feels ten times worse than ours or that ours feels ten times worse than theirs). Because we don't know which way round it is (if there is a difference), we should act as if there is no difference (until such time as science is able to tell us that there is one).