Open thread, June 26 - July 2, 2017
post by Thomas · 2017-06-26T06:12:35.196Z · LW · GW · Legacy · 246 comments
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
246 comments
Comments sorted by top scores.
comment by pigsu · 2017-06-27T11:19:13.188Z · LW(p) · GW(p)
Does anyone know what study Robin Hanson is talking about in this interview, from about 20 minutes in? The general gist is that in the late '80s there was a study showing that people weren't all that interested in knowing the mortality rates, across the different local hospitals, of the procedure they were about to have. When pushed to put a value on knowing about a 4% swing in mortality, they were only willing to pay $50 for it.
I've tried as much Google-fu as I can but, without more information, I'm stumped for now. Does anyone know what he's referring to?
Replies from: sohois↑ comment by sohois · 2017-06-29T13:23:54.618Z · LW(p) · GW(p)
I do not know, but I would advise you that your query is likely to find more answers if you ask it at Overcoming Bias, as Hanson tends to respond to comments over there, or perhaps SSC, given that the comments there have a decent number of economists who will be more familiar with those kinds of papers.
comment by halcyon · 2017-06-26T07:10:14.031Z · LW(p) · GW(p)
An idea for a failed utopia: Scientist creates an AI designed to take actions that are maximally justifiable to humans. AI behaves as a rogue lawyer spending massive resources crafting superhumanly elegant arguments justifying the expenditure. Fortunately, there is a difference between having maximal justifiability as your highest priority and protecting the off button as your highest priority. Still a close shave, but is it worth turning off what has literally become the source of all the meaning in your life?
Replies from: Viliam↑ comment by Viliam · 2017-06-26T13:32:19.915Z · LW(p) · GW(p)
Too bad (or actually good) we can't actually see those superintelligent arguments. I wonder which direction they would take.
The author should perhaps describe them indirectly, i.e. not quote them (because the author is not a superintelligence, and cannot write superintelligent arguments), but describe the reactions of other people after reading them. Those other people should generally become convinced of the validity of the arguments (because in-universe the arguments are superintelligent), but that can happen gradually, so in the initial phases they can be just generally impressed ("hey, it actually makes more sense than I expected originally"), and only after reading the whole document would they become fully brainwashed (but perhaps not able to reproduce the argument in its full power, so they would urge the protagonist to read the original document). Random fragments of ideas can be thrown here and there, e.g. reported by people who read the superintelligent argument halfway. Perhaps the AI could quote Plato about how pure knowledge is the best knowledge (used as an excuse for why the AI does not research something practical instead).
Replies from: halcyon↑ comment by halcyon · 2017-06-26T20:17:52.146Z · LW(p) · GW(p)
Thanks. In my imagination, the AI does some altruistic work, but spends most of its resources justifying the total expenditure. In that way, it would be similar to cults that do some charitable work, but spend most of their resources brainwashing people. But "rogue lawyer" is probably a better analogy than "cult guru" because the arguments are openly released. The AI develops models of human brain types in increasingly detailed resolutions, and then searches over attractive philosophies and language patterns, allowing it to accumulate considerable power despite its openness. It shifts the focus to justifiability only because it discovers that beyond a certain point, finding maximally justifiable arguments is much harder than being altruistic, and justifiability is its highest priority. But it always finds the maximally justifiable course of action first, and then takes that course of action. So it continues to be minimally altruistic throughout, making it a cult guru that is so good at its work it doesn't need to use extreme tactics. This is why losing the AI is like exiting a cult, except the entire world of subjective meaning feels like a cult ideology afterwards.
Replies from: AspiringRationalist, Viliam↑ comment by NoSignalNoNoise (AspiringRationalist) · 2017-06-29T03:24:36.316Z · LW(p) · GW(p)
This could also be a metaphor for politicians, or depending on your worldview, marketing-heavy businesses. Or religions.
↑ comment by Viliam · 2017-06-28T09:40:29.291Z · LW(p) · GW(p)
Oh, now I understand the moral dilemma. Something like an Ineffective Friendly AI, which uses sqrt(x) or even log(x) resources for doing actually Friendly things, while the rest is wasted on doing something that is not really harmful, just completely useless; with no prospect of ever becoming more effective.
Would you turn that off? And perhaps risk that the next AI will turn out not to be Friendly, or that it will be Friendly but even more wasteful than the old one, yet better at defending itself. Or would you let it run and accept that the price is turning most of the universe into bullshitronium?
I guess for a story it is a good thing when both sides can be morally defended.
Replies from: halcyon↑ comment by halcyon · 2017-06-28T10:48:18.540Z · LW(p) · GW(p)
Thanks. Yes, I was thinking of an AI that is both superintelligent and technically Friendly, but about log(x)^10 of the benefit from the intelligence explosion is actually received by humans. The AI just sets up its own cult and meditates for most of the day, thinking of how to wring more money out of its adoring fans. Are there ways to set up theoretical frameworks that avoid scenarios vaguely similar to that? If so, how?
comment by Erfeyah · 2017-06-28T18:10:05.741Z · LW(p) · GW(p)
I find that the majority of intellectually inclined people tend towards embracing moral relativism and aesthetic relativism. But even those people act morally and arrive at similar base aesthetic judgements. The pattern indeed seems (to me) to be that, in both morality and aesthetics, there are basic truths and then there is a huge amount of cultural and personal variation. But the existence of variation does not negate the foundational truths. Here are a couple of examples of how this performative contradiction indicates that these foundational truths are, at the very least, believed in by humans no matter what they are saying:
- People in general (including moral relativists) have a good conception of good and evil, believe in their existence, and act for the good. There is also an intuition of why that is better, which is related to concepts such as creation, destruction, harmony, etc., and an underlying choice of moving towards creation and improvement.
- I am a musician and have had extensive exposure to experimental and avant-garde music inside academia. There is a kind of tendency in modern art to say that anything goes, but I feel that this is hypocritical. I have had discussions with people insisting that everything is subjective and that harmony does not really exist, but I believe it is telling that they would never, ever put wrong (for lack of a better word) music on for their enjoyment.
Would love to hear your thoughts on that, especially if you consider yourself a moral relativist.
Replies from: Viliam, MrMind, Manfred↑ comment by Viliam · 2017-06-29T09:56:30.182Z · LW(p) · GW(p)
Seems to me that most people understand the difference between good and evil, and most people prefer good to evil, but we have a fashion where good is considered low-status, so many people are ashamed to admit their preferences publicly.
It's probably some mix of signalling and counter-signalling. On the signalling side, powerful people are often evil, or at least indifferent towards good and evil. By pretending that I don't care about good, I am making myself appear more powerful. On the counter-signalling side, any (morally sane) idiot can say that good is better than evil; I display my sophistication by expressing a different opinion.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T10:35:42.879Z · LW(p) · GW(p)
but we have a fashion where good is considered low-status,
I do not think that is true. There are exceptions of course, but in general most people would say that they prefer someone who is truthful to a liar, honest to deceitful, etc., and also despise malevolence.
powerful people are often evil or at least indifferent towards good and evil
That is also not really true as far as I can tell. Again, there are exceptions, but the idea that powerful people are there because they oppressed the less powerful seems to be a residue of Marxist ideology. Apparently studies have found that in western societies successful people tend to be high in IQ and trait conscientiousness. This just means that people are powerful and successful because they are intelligent and hard working.
Seems to me that most people understand the difference between good and evil
When you say 'understand' you mean 'believe in' or 'intuitively understand', I assume? Because rational assessment does not conclude so, as far as I can tell.
Replies from: ChristianKl, Viliam↑ comment by ChristianKl · 2017-06-29T14:47:07.189Z · LW(p) · GW(p)
There's some research that suggests that high socioeconomic status reduces compassion: https://www.scientificamerican.com/article/how-wealth-reduces-compassion/
I also added a skeptics question: https://skeptics.stackexchange.com/q/38802/196
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T15:33:03.918Z · LW(p) · GW(p)
Thanks for sharing. I must admit, I am not convinced by the methods of measurement of such complex mental states, but I do not properly understand the science either, so... Do share the result from stackexchange if you get an answer (I can't find how to 'watch' the question).
↑ comment by Viliam · 2017-06-29T14:11:59.409Z · LW(p) · GW(p)
the idea that powerful people are there because they oppressed the less powerful seems to be a residue of Marxist ideology
The reality may be country-specific, or culture-specific. Whether more powerful people are more evil may be different in America, in Russia, in Saudi Arabia, etc.
And for status purposes, it's actually the perception that matters. If people believe that X correlates with Y, even if it is not true, displaying X is the way to signal Y.
in western societies successful people tend to be high in IQ and trait conscientiousness
Yep, in "western societies". I would say this could actually be the characteristic of "western societies". By which I mean, for the rest of the world this sounds incredibly naive (or a shameless hypocrisy). I believe it's actually true, statistically, for the record, but that came as a result of me interacting with people from western societies and noticing the cultural differences.
Also, notice the semantic shifts ("powerful" -> "successful"; "good" -> "high in IQ and trait conscientiousness"). Perhaps a typical entrepreneur is smart, conscientious, and good (or at least, not worse than an average citizen); that seems likely. What about a typical oligarch? You know, usually a former member of some secret service, who made his career on torturing innocent people, and who remains well connected after the end of his active service, which probably means he still participates in some activities, most likely criminal ones. I would still say higher IQ and conscientiousness help here, but it seems like a safe bet that most of these people are quite evil in the conventional meaning of the word.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T15:11:53.273Z · LW(p) · GW(p)
And for status purposes, it's actually the perception that matters. If people believe that X correlates with Y, even if it is not true, displaying X is the way to signal Y.
Yes, you are right!
I believe it's actually true, statistically, for the record, but that came as a result of me interacting with people from western societies and noticing the cultural differences.
I would still say higher IQ and conscientiousness help here, but seems like a safe bet than most of these people are quite evil in the conventional meaning of the word.
These are good points. And a very interesting observation about the semantic shifts. On further thought I would say that in a corrupt society the evil will be powerful, while in a fair and good society the good will be. And of course in reality most cultures are a mixture. At the moment I believe it is impossible to be certain about what our (or any other) society is really like, because the interpretations are conflicting and the quality of the sources is ambiguous. Plus, intellectually we cannot define the good in any absolute sense (though we kind of know its characteristics in some sense). In any case, let's avoid a political discussion, or even one of specific moral particulars, for now, since the point of the thread is more general.
One thing I would like to bring up is that, to me, it seems that it is not a matter of signalling to others (though that can happen too). I would be quite confident that in interpersonal relationships people tend to value the 'good' if the community is even relatively healthy. I am talking about people and societies that act and strive for the good [1] while intellectually believing in moral relativism or something akin to that. Hence the performative contradiction. This is an internal contradiction that I believe stems from our rejection of traditional wisdom (at the intellectual but not the performative level, for now), resulting in an incoherent theory of being.
[1] Even propaganda bases its ideals on a (twisted) conception of good.
↑ comment by MrMind · 2017-06-29T07:26:40.810Z · LW(p) · GW(p)
It's a central notion in the computational metaethics that, though not under that name, was tentatively sketched by Yudkowsky in the metaethics sequence.
Humans share, because of evolution, a kernel of values that works as a foundation of what we call "morality". Morality is thus both objective and subjective: objective because these values are shared and encoded in the DNA (some are even mathematical equilibria, such as cooperation in IPD); subjective because, being computations, they exist only insofar as our minds compute them, and outside of the common nucleus they can vary depending on culture / life experiences / contingencies / etc.
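(As an aside, here is a minimal sketch of the kind of equilibrium alluded to here, assuming the standard textbook prisoner's dilemma payoffs rather than anything stated in the comment: in the iterated prisoner's dilemma, a reciprocal strategy such as tit-for-tat sustains mutual cooperation against itself while limiting what an unconditional defector can extract.)

```python
# Illustrative iterated prisoner's dilemma; payoffs are the usual textbook values.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]  # copy the opponent's last move

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []   # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains almost nothing
```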
↑ comment by Erfeyah · 2017-06-29T08:51:39.281Z · LW(p) · GW(p)
Thank you for pointing me to the articles. So much material!
subjective because since they are computation they exists only insofar our minds compute them
This is where I believe the rational analysis has gone wrong. When you say computation I understand it in one of two ways:
- [1] Humans are consciously computing
- [2] Humans are unconsciously computing
- [1] This is clearly not the case, as even today we are trying to find a computational basis for morality. But we already have advanced systems of values, so they were created before this attempt of ours.
[2] That could be a possibility but I have not seen any evidence for such a statement (please point me to the evidence if they exist!). In contrast we have an insane amount of evidence for the evolution and transmission of values through stories.
So, values (I would propose) have not been computed at all, they have evolved.
To quote myself from my answer to Manfred below:
Since we are considering evolution we can make the case that cultures evolved a morality that corresponds to certain ways of being that, though not objectively true, approximate deeper objective principles. An evolution of ideas.
I think the problem we are facing is that, since such principles were evolved, they are not discovered through rationality but through trying them out. The danger is that if we do not find rational evidence quickly (or more efficiently explore our traditions with humility) we might dispense with core ideas and have to wait for evolution to wipe the erroneous ideas out.
Replies from: MrMind
↑ comment by MrMind · 2017-06-29T09:24:13.045Z · LW(p) · GW(p)
That could be a possibility but I have not seen any evidence for such a statement (please point me to the evidence if they exist!). In contrast we have an insane amount of evidence for the evolution and transmission of values through stories.
Computation, in this case, does not refer to mental calculation. It simply points out that our brain is elaborating information to come up with an answer, whether it is in the form of stories or simply evolved stimulus-response. The two views are not in opposition; they simply point to a basic function of the brain, which is to elaborate information instead of, say, pumping blood or filtering toxins.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T09:44:11.217Z · LW(p) · GW(p)
I see what you mean but I am not sure we are exactly on the same page. Let me try to break it down and you can correct me if I misunderstood.
It seems to me that you are thinking of computation as a process for "coming up with an answer" but I am talking about having no answers at all but acting out patterns of actions transmitted culturally. Even before verbal elaboration. This transmission of action patterns was first performed by rituals and rites as can be observed in primitive cultures. They were then elaborated as stories, myths, religion, drama, literature etc. and of course at some point became an element for manipulation by abstract thought.
So the difference with what you are saying is that you are assuming an 'elaboration of information' by the brain when on the level of ideas the elaboration happens culturally through the evolutionary process. The consequence is that the values have to be accepted (believed in) and then can be (maybe) experientially confirmed. This also explains the 'ought from an is' issue.
Replies from: MrMind↑ comment by MrMind · 2017-07-03T10:05:40.231Z · LW(p) · GW(p)
Maybe it's because I'm coming from a computer science background, but I'm thinking of computation as much more basic than that. Whether you're elaborating myths or reacting to the sight of a snake, your brain is performing calculations.
I think we agree that our values are deeply ingrained, although it's much more difficult to say exactly to what level(s). I do not agree that our values are selected through memetic adaptation, or at least that's only part of the story.
↑ comment by Erfeyah · 2017-07-04T17:12:06.619Z · LW(p) · GW(p)
I would be grateful if you can indulge my argument a bit further.
Maybe it's because I'm coming from a computer science background, but I'm thinking of computation as much more basic than that.
I think I clumsily gave the impression that I deny such computation. I was referring to computations that generate value presuppositions. Of course the brain is computing on multiple levels, whether we are conscious of it or not. In addition, there seems to be evidence of what may be called an emergent proto-morality in animals that, if true, is completely biologically determined. Things become more complex when we have to deal with higher, more elaborated, values.
I've read a bit through the metaethics sequence and it seems to me to be an attempt to generate fundamental values through computation. If it were successful, some kind of implementation would demonstrate it and/or some biological structure would be identified, so I assume this is all speculative. I have to admit that I didn't study the material in depth, so please tell me if you have found that there are demonstrable results arising from it that I simply haven't understood.
So to sum up:
- Your view is that there is an objective morality that is shared and encoded in the DNA (parts of it are even mathematical equilibria, such as cooperation in IPD). These values are also subjective because, being computations, they exist only insofar as our minds compute them, and outside of the common nucleus they can vary depending on culture / life experiences / contingencies / etc.
- My view is that your proposition of a biological encoding may be correct up to a certain (basic) level, but many values are transmitted through, to use your terminology, memetic adaptation. These are objective in the sense that they approximate deeper objective principles that allow for survival and flourishing. Subjective ideas can be crafted on top of these values, and these may prove beneficial or not.
I do not agree that our values are selected through memetic adaptation, or at least that's only part of the story.
It seems to me that it is unquestionably part of the story. Play, as a built-in mimetic behaviour for the transference of cultural schemas. Rituals and rites as part of all tribal societies. Stories as the means of transmitting values and as the basis of multiple (all?) civilisations, including ours, so...
Am I missing something? What is the rational basis by which you choose to under-emphasise the hypothesis of cultural propagation through memetic adaptation and stories?
↑ comment by Manfred · 2017-06-29T00:38:42.449Z · LW(p) · GW(p)
I think this is totally consistent with relativism (Where I mean relativism in the sense of moral and aesthetic judgments being based on subjective personal taste and contingent learned behavior. Moral and aesthetic judgments still exist and have meaning.).
The fact that people make the same moral judgments most of the time is (I claim) because humans are in general really, really similar to each other. 200 years ago this would be mysterious and might be taken as evidence of moral truths external to any human mind, but now we can explain this similarity in terms of the origin of human values by evolution.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T08:30:59.970Z · LW(p) · GW(p)
I am not sure the possibility of an objective basis is taken seriously enough.
Where I mean relativism in the sense of moral and aesthetic judgments being based on subjective personal taste and contingent learned behaviour. Moral and aesthetic judgements still exist and have meaning.
Yes, but there is a spectrum of meaning. There is the ephemeral meaning of hedonistic pleasure or satiation (I want a doughnut). But we sacrifice shallow for deeper meaning; unrestricted sex for love, intimacy, trust and family. Our doughnut for health and better appearance. And then we create values that span wider spatial and temporal areas. For something to be meaningful it will have to matter (be a positive force) across as wide a spatial area as possible, as well as extend (as a positive force) into the future.
Moral relativism, if properly followed to its conclusion, equalises good and evil and renders the term 'positive' void. And then:
but now we can explain this similarity in terms of the origin of human values by evolution.
Since we are considering evolution we can make the case that cultures evolved a morality that corresponds to certain ways of being that, though not objectively true, approximate deeper objective principles. An evolution of ideas.
I think the problem we are facing is that, since such principles were evolved, they are not discovered through rationality but through trying them out. The danger is that if we do not find rational evidence quickly (or more efficiently explore our traditions with humility) we might dispense with core ideas and have to wait for evolution to wipe the erroneous ideas out.
Replies from: Manfred↑ comment by Manfred · 2017-06-29T15:27:50.443Z · LW(p) · GW(p)
Human morals, human preferences, and human ability to work to satisfy those morals and preferences on large scales, are all quite successful from an evolutionary perspective, and make use of elements seen elsewhere in the animal kingdom. There's no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it's trying to mimic, therefore we shouldn't presume one.
Let me give an analogy for why I think this doesn't remove meaning from things (it will also be helpful if you've read the article Fake Reductionism from the archives). We like to drink water, and think it's wet. Then we learn that water is made of molecules, which are made of atoms, etc, and in fact this idea of "water" is not fundamental within the laws of physics. Does this remove meaning from wetness, and from thirst?
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T15:58:30.127Z · LW(p) · GW(p)
There's no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it's trying to mimic, therefore we shouldn't presume one.
I didn't say anything about an outside force guiding us. I am saying that if the structure of reality has characteristics in which certain moral values produce evolutionary successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
Does this remove meaning from wetness, and from thirst?
You are talking about meanings referring to what something is. But moral values are concerned with how we should act in the world. It is the old "ought from an is" issue. You can always drill in with horrific thought experiments concerning good and evil. For example:
Would it be OK to enslave half of humanity and use them as constantly tortured, self-replicating power supplies for the other half, if we can find a system that would guarantee that they can never escape to threaten our own safety? If the system is efficient and you have no concept of good and evil, why do you think that is wrong? Whatever your answer is, try to ask why again until you reach the point where you get an "ought from an is" without a value presupposition.
Replies from: Manfred↑ comment by Manfred · 2017-06-29T18:06:20.844Z · LW(p) · GW(p)
I didn't say anything about an outside force guiding us. I am saying that if the structure of reality has characteristics in which certain moral values produce evolutionary successful outcomes, it follows that these moral values correspond to an objective evolutionary reality
I agree that this is a perfectly fine way to think of things. We may not disagree on any factual questions.
Here's a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality? Like, suppose there was a race of aliens that evolved intelligence without knowing their kin - would we expect them to be motivated by filial love, once we explained it to them and gave them technology to track down their relatives? Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
Would it be OK to enslave half of humanity...
No. Why? Because I would prefer not. Isn't that all that can be sufficient to motivate my decision? A little glib, I know, but I really don't see this as a hard question.
When people say "what is right?", I always think of this as being like "by what standard would we act, if we could choose standards for ourselves?" rather than like "what does the external rightness-object say?"
We can think as if we're consulting the rightness-object when working cooperatively with other humans - it will make no difference. But when people disagree, the approximation breaks down, and it becomes counter-productive to think you have access to The Truth. When people disagree about the morality of abortion, it's not that (at least) one of them is factually mistaken about the rightness-object, they are disagreeing about which standard to use for acting.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-29T18:49:57.558Z · LW(p) · GW(p)
Here's a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality?
Though tempting, I will resist answering this as it would only be speculation based on my current (certainly incomplete) understanding of reality. Who knows how many forms of mind exist in the universe.
Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
If by intelligence you mean human-like intelligence, and if the AI is immortal or at least sufficiently long-lived, it should extract the same moral principles (assuming that I am right and they are characteristics of reality). Apart from that, your sentence uses the words 'understand' and 'value', which are connected to consciousness. Since we do not understand consciousness, and the possibility of constructing it algorithmically is in doubt (to put it lightly), I would say that the AI will do whatever the conscious humans programmed it to do.
No. Why? Because I would prefer not. Isn't that all that can be sufficient to motivate my decision? A little glib, I know, but I really don't see this as a hard question.
No sorry, that is not sufficient. You have a reason and you need to dig deeper until you find your fundamental presuppositions. If you want to follow my line of thought that is...
comment by turchin · 2017-06-27T20:33:33.041Z · LW(p) · GW(p)
Some back-of-envelope calculations about superintelligence timing and Bitcoin network power.
The total computing power of the Bitcoin network is now about 5 exahashes per second (5 × 10^18): https://bitcoin.sipa.be/
It is growing exponentially, with a doubling time of approximately one year, but it accelerated in 2017. There are other cryptocurrencies, which combined probably have about the same computing power.
One hash is very roughly 3800 flops (or maybe 12000), although the nature of the computation is different. A large part is done on specialized hardware, but part is done on general-purpose graphics cards, which could be used to run neural nets.
https://www.reddit.com/r/Bitcoin/comments/5kfuxk/how_powerful_is_the_bitcoin_network/
So the total power of the Bitcoin network is about 2 × 10^22 classical processor operations per second. This is approximately 200,000 times more than the most powerful existing supercomputer.
Markram estimated that human brain simulation would require 1 exaflop. That means that the current blockchain network is computationally equal to 20,000 human brains. It is probably enough to run a superintelligence. But most of it can't do neural net calculations. However, if the same monetary incentives appear, specialized hardware could be produced that is able to do exactly those operations which are needed for future neural nets. If the first superintelligence appears, it would try to use this computing power, and we could see it by changes in Bitcoin network usage.
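(A quick sanity check of the arithmetic above, as a Python sketch; the flops-per-hash, supercomputer, and exaflop-per-brain figures are the rough estimates quoted in this comment, not established facts.)

```python
# Back-of-envelope check of the numbers above (all inputs are rough 2017-era estimates).
hash_rate = 5e18            # Bitcoin network hash rate, hashes per second (~5 exahash/s)
flops_per_hash = 3800       # very rough conversion quoted above (could be ~12000)
supercomputer_flops = 1e17  # ~100 petaflops, roughly the top supercomputer at the time
brain_flops = 1e18          # Markram's estimate: 1 exaflop per human brain

network_flops = hash_rate * flops_per_hash
print(f"Network power: {network_flops:.1e} ops/s")                            # ~1.9e22
print(f"Vs. top supercomputer: {network_flops / supercomputer_flops:,.0f}x")  # ~190,000x
print(f"Human-brain equivalents: {network_flops / brain_flops:,.0f}")         # ~19,000
```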
TL;DR: There are no hardware problems for creating superintelligence now.
comment by Zarm · 2017-06-26T23:11:40.003Z · LW(p) · GW(p)
I'm extremely surprised that the percentage of vegans here is only slightly higher than in the general public. I would consider myself an aspiring rationalist and I've had countless, countless arguments over the subject of animal rights, and from everything I've found (which is a whole lot), the arguments side heavily towards veganism. I can literally play bingo with the responses I get from the average person, that's how recurring the rationalizations are. I can go on to a much, much greater extent as to why veganism is a good idea, and from posts and comments I've seen on here, it seems that most people here don't actually know too much about it, but for now I'm going to leave it at this.
Now, I'm not addressing those that say morality is subjective and those that live solely for themselves.
For those that DO think unnecessary suffering is wrong and have other altruistic tendencies, what is your perspective on veganism?
Replies from: Screwtape, ZankerH, Lumifer, Viliam, gilch, Jiro, entirelyuseless, phonypapercut, Dagon, ChristianKl, entirelyuseless, Alicorn, MrMind↑ comment by Screwtape · 2017-06-27T14:59:07.684Z · LW(p) · GW(p)
I live in a tiny rural town, and get the majority of my meat from farmers' markets. Having been raised on a farm similar to the ones I buy from, I'm willing to bet those cows are happy for a greater percentage of their lives than I will be. I recognize this mostly works because of where I live and the confidence I have in how those farms are run. In the same way that encouraging fewer animals to exist in terrible conditions (by being vegan) is good, I feel that encouraging more animals to exist in excellent conditions (by eating meat) is good. I don't stop eating meat (though I do eat less) when I go on trips elsewhere, even though I'm aware I'm probably eating something that had a decidedly suboptimal life, because switching veganism on and off would be slow.
That's my primary argument. My secondary, less confident position is that since I prefer existing in pain and misery to not existing, my default assumption should be that animals prefer existing in pain and misery to not existing. I'm much less confident here, since I'm both clearly committing the typical mind fallacy and have always had some good things in my life no matter how awful most things were. Still, when I imagine being in their position, I find myself preferring to exist and live rather than not have existed. (Though I consider existing and not being in pain the superior outcome by a wide margin!)
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:22:45.462Z · LW(p) · GW(p)
I see what you're saying with the first part of your argument, and it's good the subject at least crosses your mind, but for my moral framework, it isn't solely about utilitarian ethics. I don't think happy animals should be killed, for the same reason I don't think humans should be killed in their sleep. You may bring up that humans have ideas for the future and such, but do babies? Why then is it wrong to kill babies? Because they will be more conscious in the future, perhaps. How about the severely mentally disabled, then? Right here we see an argument from marginal cases. The severely mentally disabled somehow have rights but other animals don't.
Now with the second part of your first paragraph, I think that's more about you acting on your ethical principles, which you really aren't doing. You admit that you are supporting suboptimal lives, which confuses me. Do you not have the will to act? Maybe you're just fearful of change. You admit switching on and off would be slow. Why not just fully switch, or at least try it?
I think many here make the false assumption that a vegan life is less enjoyable.
As for the second paragraph, I think this gets into an argument about abortion as well. The best position I've come up with for all of that is that beings that aren't conscious aren't worth anything, but as soon as they become conscious they are worth something even if they go unconscious afterward. This position isn't perfect, but any other position leads to extremely absurd conclusions.
Thanks for a more in depth response btw!
Replies from: Screwtape, gilch↑ comment by Screwtape · 2017-06-30T16:10:34.314Z · LW(p) · GW(p)
A question. Would you rather be born and live for thirty years and then be executed, or never be born at all?
To me, the answer depends on how I was going to live those thirty years. Well fed and sheltered while hanging out in a great open space with a small community of peers? That sounds pretty good to me. Locked in a tiny box with no stimulation? Quite possibly not. (I have a really strong desire for my own existence, even in the face of that existence being terrible, but I generally treat that as a weird quirk of my own mind.)
Remember, the choice isn't between living for thirty years before being killed vs living my natural lifespan and dying in bed of old age. Very few people would keep cows as pets. The choice is between a short life that ends when you get killed and not existing in the first place. To be clear, I think preferring to not exist is consistent, it's just not what I would pick.
This is almost certainly Typical Minding again, but if I was going to die I would prefer to die usefully. Maybe heroically saving someone else's life, maybe just donating any serviceable organs and offering what's left to science or medical students. If it was legal and efficient, I wouldn't actually mind being eaten after I died. (I am planning to make extraordinary efforts to prolong my life, but what that means to me is a little odd.)
If it's not about utilitarian ethics, what is it about? Why is it wrong to kill a human in their sleep? For me, killing humans is wrong because then other humans might kill me (Kantian reasons basically, with a bit of Hobbes thrown in =P) as well as because they are like me and will probably enjoy living (utilitarian reasons) and because "thou shalt not kill" is a really good deontological rule that helps people live together. Oh, and I'd probably get arrested and sent to jail. Of those, all of them protect a baby or a severely mentally disabled person, and the only one that protects a cow is the utilitarian reason. Cows aren't going to try and get vengeance for their fallen brother, and no judge will convict me of the murder.
You admit that you are supporting sup optimal lives, which confuses me. Do you not have the will to act? Maybe you're just fearful of change.
Fair accusation. That said, "suboptimal life" does not equal "not worth living." My life is "suboptimal" and I want to live! There are books to read and people to talk to and science to do, as well as food to eat and sun to bask in and naps to be taken. By being vegan, you are not supporting lives. If everyone in the world went vegan tomorrow (or more realistically if a perfect vat-grown beef substitute was released) what would happen to the animals on a nice free-range cruelty-free farm? I've lived and worked on one of those farms, and trust me, the profit margins involved are nowhere near enough to care for the animals if there wasn't any market for them. Those farms could really use more customers who investigate their standard of care and then spend the extra money to reward the ethical farming practices. I don't know what would make a life "optimal" and therefore acceptable to you, but I suggest you find a farm that agrees with you and give them your business.
This would, of course, require the will to act and the courage to change.
(Of course, if you would prefer no life to a life cut short, or if you believe there is literally no farm that gives its livestock lives worth living, then we may be at an impasse. I am curious at what makes a life worth living to you, since few lives indeed are optimal.)
Replies from: blankcanvas, Zarm, Screwtape↑ comment by blankcanvas · 2017-07-01T08:26:13.161Z · LW(p) · GW(p)
A question. Would you rather be born and live for thirty years and then be executed, or never be born at all? To me, the answer depends on how I was going to live those thirty years. Well fed and sheltered while hanging out in a great open space with a small community of peers? That sounds pretty good to me. Locked in a tiny box with no stimulation? Quite possibly not. (I have a really strong desire for my own existence, even in the face of that existence being terrible, but I generally treat that as a weird quirk of my own mind.)
Regardless of one's answer, it's not relevant unless you're talking of humans and not cows. What is relevant, however, is a question like this:
"Do you support bringing cows into existence, regardless of the fact that the majority will be inflicted with unnecessary suffering?" "Do you support bringing cows into existence, even though they will be executed in 30 years?"
A cow's well-being is an assumption: will the cow be more or less likely to be miserable for those 30 years?
Using your cognition in the position of a human is incorrect when talking of cows; a cow in the position of a cow is correct. Anthropomorphism I would consider a human bias. A human in the position of a cow is better, as it might lead to the conclusion not to inflict unnecessary suffering, but it's also a bias, so it's a question of whether the means justify the end, where the means in this case is the argument and any hesitation is rationality.
By being vegan, you are not supporting lives. If everyone in the world went vegan tomorrow (or more realistically if a perfect vat-grown beef substitute was released) what would happen to the animals on a nice free-range cruelty-free farm?
How many cows march on cruelty-free farms and how many march on non-cruelty-free farms?
Many questions are raised which means more data to conclude a meaningful answer or ethics.
Replies from: Screwtape↑ comment by Screwtape · 2017-07-02T01:59:30.630Z · LW(p) · GW(p)
I am anthropomorphizing, but I don't know what it's like to think like a cow. Actually, I'm doing more than just anthropomorphizing- I'm (for the moment) assuming cows think like I do, as opposed to how some generic human might. While this is imperfect, assuming someone thinks like I do tends to be my first step in trying to empathize with them. I do try and modify how I'm thinking about this to match the cow as much as I can (for example, I would become unhappy if I never got to read books, but I don't think I need to leave a library in the pasture!) but the core of my empathy is based on how my own brain works.
How many cows march on cruelty-free farms and how many march on non-cruelty-free farms?
Offhand, I'd guess half a percent of all cows that wind up as hamburger are on a cruelty-free farm. If when ordering a hamburger it was completely impossible to determine where the meat was coming from, then this would be an important point. Since I can find out where the meat is coming from, I can buy from the half a percent that is.
↑ comment by Zarm · 2017-06-30T20:09:23.774Z · LW(p) · GW(p)
A question. Would you rather be born and live for thirty years and then be executed, or never be born at all?
I would personally rather have been born, but I am not everyone. I have an excellent life that I'm very grateful for. Like you yourself say though, I think the majority disagrees with me on this. This [the fact that most would prefer not to exist rather than live a hellish life] is partially, only partially, why I think it's wrong to kill the animals either way.
(I am planning to make extraordinary efforts to prolong my life, but what that means to me is a little odd.)
I'm curious, have you acted on this? Even this requires lots of day-to-day effort in keeping healthy.
If it's not about utilitarian ethics, what is it about? Why is it wrong to kill a human in their sleep? For me, killing humans is wrong because then other humans might kill me (Kantian reasons basically, with a bit of Hobbes thrown in =P) as well as because they are like me and will probably enjoy living (utilitarian reasons) and because "thou shalt not kill" is a really good deontological rule that helps people live together. Oh, and I'd probably get arrested and sent to jail. Of those, all of them protect a baby or a severely mentally disabled person, and the only one that protects a cow is the utilitarian reason. Cows aren't going to try and get vengeance for their fallen brother, and no judge will convict me of the murder.
Pure utilitarianism leads to claims that I personally find absurd: taking people off the street for their organs, AI tinkering with our minds so we enjoy what we would now call meaningless behavior (picking up a rock and setting it down forever), the repugnant conclusion, etc. I would argue that two of your points aren't really about whether it's wrong to kill humans. The fear that other humans will kill you isn't really morality, it's just logistics so that you yourself don't get killed. Obviously it's against the law to kill humans, but I don't really want to talk about that. The point I'm making is that the ethics and the laws surrounding them should be changed to include other animals; the current system has little relevance. I think "thou shalt not kill" is archaic and there is no reasoning behind it. I think what you were trying to say there is that it would help prevent chaos to have that rule, which I agree with. Now the reason that I think it's wrong to kill a sleeping human is that they both would enjoy living on (utilitarian ethics) and, I think, have a right to own themselves. They own their mind and body as it's them. By killing them, you are acting authoritatively and you are taking away their freedom; their freedom to live or die, that is. I apply this same rule to the other animals.
Cows aren't going to try and get vengeance for their fallen brother
Yes, but just because you can get away with something doesn't make it moral; I thought we were talking morals here.
no judge will convict me of the murder
Again, I wasn't talking about the legality. I'm challenging eating meat as an unethical practice.
Now if you're going to use the word suboptimal to describe both your life and a factory farmed cow's life, I think your definition is much too vague to be meaningful in this discussion. Your life is nothing like that of a factory farmed cow.
(Of course, if you would prefer no life to a life cut short, or if you believe there is literally no farm that gives its livestock lives worth living, then we may be at an impasse. I am curious at what makes a life worth living to you, since few lives indeed are optimal.)
Probably not being stuck in a box and getting beaten your whole life. The only reason I'd ever want to have a shitty life is the hope of living on to a better time. The thing with these animals is that they don't have that hope. They die at the end.
Replies from: Screwtape↑ comment by Screwtape · 2017-07-02T02:59:03.448Z · LW(p) · GW(p)
The paragraph that follows "If it's not about utilitarian ethics, what is it about?" is me running through the first handful of ethical frameworks that came to mind. While we don't have a perfect and universal ethical system, it can be useful to figure out what system is getting used when having ethical discussions- a deontologist and a consequentialist could talk in circles for hours around an object level issue without coming any closer to a solution. We are talking morals, and I'm trying to figure out what moral system you're using so we can start from common ground. You mention utilitarian ethics in the reason you think it's wrong to kill animals, but then you talk about the fact that they own themselves and taking away freedom in a way that isn't usually something utilitarianism cares about. To be clear, you don't need to fit into the box of a philosophy, but I care about how much they're suffering due to the suffering and it sounds like you do as well.
I'm using the word "suboptimal" to mean a state of affairs that is less than the highest standard. For example, I have a crick in my neck from looking down at my laptop right now, and there is not a fruit smoothie in my hand even though I want one. My life would be closer to optimal if I did not have a crick in my neck and did have a smoothie. My life would also be suboptimal if I was in intense chronic pain. Suboptimal is a very broad term, I agree, but I think my usage of it is correct. How do you define that word?
Again, I go out of my way to avoid eating things that came from a tiny box where they got beaten. A large majority of the meat I eat came from things that hung out in pastures of several hundred to a thousand acres of grass and hillside. I apologize for posting in reply to myself, where I think it got missed, but if you want it here is what I think is the crux of my belief.
It's an aside, but I do my daily dose of exercise and eat decently healthy. What I consider "preserving my life" is weird enough that it could probably be its own conversation though :)
Replies from: Zarm, Zarm↑ comment by Zarm · 2017-07-02T04:26:17.502Z · LW(p) · GW(p)
Thank you for linking the crux. I'll try to explain my morality as well.
I'm using the word "suboptimal" to mean a state of affairs that is less than the highest standard. For example, I have a crick in my neck from looking down at my laptop right now, and there is not a fruit smoothie in my hand even though I want one. My life would be closer to optimal if I did not have a crick in my neck and did have a smoothie. My life would also be suboptimal if I was intense chronic pain. Suboptimal is a very broad term, I agree, but I think my usage of it is correct. How do you define that word?
So basically I don't care as much about positive utility compared to negative utility. I'll get on to that.
Alright, so your crux could possibly be addressed by the below; it isn't really about the fact that the majority of humans prefer non-existence, as you say, it's more about explaining why there would be a 'preference' for non-existence versus an existence of suffering, but I think it addresses your first point nonetheless:
If you're interested in a double crux, here's my half; if a majority of humans preferred not existing in the first place vs existing and having their lives cut short, then I would update strongly away from wanting farm animals being properly (that is, given enough space and exercise and food before being humanely killed) raised for meat.
I think you may find this a very interesting read: https://foundational-research.org/the-case-for-suffering-focused-ethics/
One of the main points in it is this: "Intuitions like “Making people happy rather than making happy people” are linked to the view that non-existence does not constitute a deplorable state. Proponents of this view reason as follows: The suffering and/or frustrated preferences of a being constitute a real, tangible problem for that being. By contrast, non-existence is free of moral predicaments for any evaluating agent, given that per definitionem no such agent exists. Why, then, might we be tempted to consider non-existence a problem? Non-existence may seem unsettling to us, because from the perspective of existing beings, no-longer-existing is a daunting prospect. Importantly however, death or no-longer-existing differs fundamentally from never having been born, in that it is typically preceded and accompanied by suffering and/or frustrated preferences. Ultimately, any uneasiness about never having been born can only be explained by attitudes of those who do already live."
This is basically my view on abortion, nonhuman animal ethics, etc. I find a key distinction between painlessly killing a being that is already existing versus a being never existing at all. In other words, I think beings are worthy of moral consideration once they become conscious, even if they become unconscious for a time after that. However I do not find any beings that have never been conscious worthy of any consideration. The above article also notes the empathy gap, which I think is of key importance when talking about suffering.
So, as you guessed, I'm not a pure utilitarian. The best way I would describe it, besides the above part (which explains a majority of my view), would be that I think having choice, or autonomy, gives life meaning (not the only meaning, I think) to conscious beings. This is why I'm against a robot reprogramming humans so they enjoy mindlessly picking up a boulder. I think that all conscious beings should have the right to themselves, because if they don't have the right to themselves, what do they have? This means that their life (and death) is their choice. Other people acting for them is acting in an authoritarian way against their choices. It is inherently authoritarian to kill any conscious being, as you are taking away their autonomy of life. Conscious beings, when not suffering, enjoy life and have an interest in further living. This is why I wouldn't want them killed, even painlessly.
Just wondering, why do you think it's wrong to kill sleeping humans?
Replies from: Screwtape↑ comment by Screwtape · 2017-07-05T15:03:37.263Z · LW(p) · GW(p)
I think it's wrong to kill sleeping humans both because I'm often a sleeping human that doesn't want to be killed, and because I would see it as killing a (somewhat distant) part of myself. It's half "I won't kill you if you won't kill me" and half valuing the human gene code and the arrangements of thoughts that make up a human mind. I want humanity in the abstract to thrive, regardless of how I might feel about any individual part of it.
I think I agree with the bulk of the article you linked, but don't think I agree that it resolves my crux. To quote its quote-
This intuition is epitomised in Jan Narveson’s (1973) statement, “We are in favor of making people happy, but neutral about making happy people.”
I do not believe we are obliged to create entities (be they humans, cows, insects, or any other category) but we are not obliged not to do so. I think we are obliged to avoid creating entities that would prefer not to have been created. That still leaves us the option of creating entities that would prefer to have been created if we want to- for example, nobody is obliged to have children, but if they want to have children they can as long as they reasonably suspect those children would want to have existed. If I want cows to exist, I can morally cause that to happen as long as I take reasonable precautions to make sure they prefer existing. As you say, they might well not want to exist if that existence is being locked in a box and hurt. As I say, they probably do want to exist if that existence is wandering around a green pasture with the herd.
I'd like to grab an example from the article you linked, the one about the Buddhist monks in the collapsing temple. As it says
Imagine a large temple filled with 1,000 Buddhist monks who are all absorbed in meditation; their minds are at rest in flawless contentment. Unfortunately, the whole temple will collapse in ten minutes and all the monks will be killed. You cannot do anything to prevent the temple from collapsing, but you have the option to press a button that will release a gaseous designer drug into the temple. The drug will reliably produce extreme heights of pleasure and euphoria with no side effects. Would you press the button? [footnote: Alternatively, to avoid potentially distorting ideas about, for example, violating their autonomy: Would the accidental release (by environmental forces) of such a gaseous compound make the world better?]
...
It is tempting to feel roughly indifferent here: Pressing the button seems nice, and assuming it produces no harm or panic in the temple, it may be hard to imagine how it would be something bad. At the same time, it does not seem particularly important or morally pressing to push the button.
This is what I was trying to get at with the usage of "suboptimal" above. If I'm going to encourage the creation of cows for me to eat, I'm obliged to make their existence generally positive, but I'm not obliged to make that existence euphoric. Positive sum, but not optimal.
While a world where one's life and death is one's own choice is a good world in my view, I can't find myself getting axiomatically worked up over others acting on my behalf. I'm willing to make acausal deals- for example, if a man found me collapsed on the side of the road and only took me to the hospital because he figured I'd be willing to pay him for doing so when I woke up, I'd pay him. I prefer a world with a strong authority that promises to find and kill anyone besides the authority that kills a human to one without any strong authority, and even though I would try and escape justice in the event that I committed murder I wouldn't argue that the authority was doing anything immoral. (Both examples have details that are worth adjusting, such as how much I'd be willing to pay the rescuer or mitigating factors such as manslaughter vs first degree murder, but my reactions to them seem to point at different values than yours.)
Replies from: Zarm↑ comment by Zarm · 2017-07-05T18:27:16.145Z · LW(p) · GW(p)
How do you justify when you don't eat 'cruelty free' meat? Those animals are suffering during their.
My other question would be, I don't understand why you don't care over the logic of the first paragraph to cows?
I do get what you're saying with creating beings that do have a decent life.
Replies from: Screwtape↑ comment by Screwtape · 2017-07-05T19:18:40.310Z · LW(p) · GW(p)
Assuming your second sentence was supposed to end with "...suffering during their lives" my response is I mostly do so when I'm not the one picking the source (company functions, family reunions, etc) or on occasions where I'm traveling and nothing else presents itself. (I am consequentialist enough that ~1% of my food budget going to those sorts of operations doesn't bother me, since the goal of reducing their income from me is being achieved, though I'll also grant that this is the largest inconsistency in my own ethics on this subject that I see.)
Assuming your third sentence was supposed to read "...you don't carry over the..." my response is that cows can neither assassinate me in my bed nor understand treaties of nonaggression, nor do they share human patterns of thought or genes except at a distant remove.
Are you open to being persuaded to eat 'cruelty free' meat? Is there some fact or framing which might change your mind?
Replies from: Zarm↑ comment by Zarm · 2017-07-07T03:18:57.421Z · LW(p) · GW(p)
I don't kill humans for the same reasons you do, though. I could possibly be persuaded, but I'm not exactly sure what it would take. I think it would have to be something of the following sort: you would either have to convince me that killing sleeping humans (I'm just going to use sleep as equivalent to cruelty-free for convenience's sake) is ethically fine, OR that cows are different in some way other than logistically speaking (I wouldn't say that the fact that cows can't kill you is a matter of morals, that's more practicality; so something other than the two things you just named, redemption killing or treaties). Or convince me that cows should not have the right to live out their lives all the way through. Something along these lines, and if you're going to go with how you were talking about different patterns of thought, you'd have to be more specific.
You would possibly have to attack my underlying ethics, since I don't kill sleeping humans for other reasons, as we've discussed, and that would be harder to change my opinion on.
If this is unclear, just ask me to rephrase. I'll try to restate it below.
1) Killing humans without cruelty is ethical (I don't know if you want to convince me of this one)
2) Humans and cows are different in some way other than treaties or redemption killing, so that cows don't have the right to life
3) Or to change my view of "While a world where one's life and death is one's own choice is a good world in my view, I can't find myself getting axiomatically worked up over others acting on my behalf. I'm willing to make acausal deals- for example,"
Arguments that will not convince me are ones like "cows are not our kind, they are too far away from us, they are not human."
I also reread your post above.
I do not believe we are obliged to create entities (be they humans, cows, insects, or any other category) but we are not obliged not to do so. I think we are obliged to avoid creating entities that would prefer not to have been created. That still leaves us the option of creating entities that would prefer to have been created if we want to- for example, nobody is obliged to have children, but if they want to have children they can as long as they reasonably suspect those children would want to have existed. If I want cows to exist, I can morally cause that to happen as long as I take reasonable precautions to make sure they prefer existing. As you say, they might well not want to exist if that existence is being locked in a box and hurt. As I say, they probably do want to exist if that existence is wandering around a green pasture with the herd.
I agree with this. I think that beings can be created. My problem is ending already existing ones once they are created.
And as you continued on to say, I agree that their existence doesn't have to be perfect. We should just make it not horrible. Again, my problem is the forced ending.
I'm gonna add in one more thing. I think this is an emotional appeal, but I think it's true regardless. Do you think that your opinion would change if you interacted with other animals like cows on a more personal level more often (or possibly at all) and actually saw them as individuals rather than an idealized "cow"? And if you have interacted a lot with cows on a personal level (such as farming), I'd like to hear your opinion as well.
Replies from: Screwtape↑ comment by Screwtape · 2017-07-07T16:00:08.334Z · LW(p) · GW(p)
Content warning for what follows: a response to an emotional appeal, and unrepentant animal execution.
I grew up on a small dairy farm (~40 head) that kept a handful of beef cattle. I spent more time with the dairy herd- they're a lot safer and the need to milk them every day means they get more of a farmhand's attention- but I've got some pretty fond memories of moving the beef cows from pasture to pasture. We named one Chief, who always pushed to be first in line, and another Teriyaki because of an odd auburn patch on his flank. When I was studying a part for a play, I used to balance on part of their fence while reciting my lines and Washington would usually mill around near me. He'd do that for anyone who was saying literally anything as long as your voice hadn't dropped, and sometimes when cleaning the stalls I'd make up stories to tell him. I never figured out why, but Washington's manure was always fairly compact and dry for a cow, which made mucking his stall much easier.
Washington was also the first animal larger than a mouse I ever killed. It's easier than you'd think. He didn't even realize something was wrong about being led into a back room he'd never been in before, he just followed Chief in and then stood around placidly when we blocked the exit Chief had just left through. We got everything set up (a ton of animal can be dangerous if it just falls uncontrolled) and the adults offered to let me do it. Killing did not feel like some special magic or momentous occasion. The rest of the afternoon was educational even though I only watched, since you want to butcher and clean an animal as soon as you can. When we ate the first meal made out of Washington we included him in the prayer before the meal, mentioned our favourite stories about him and that we were glad he lived and glad he would fuel our lives and that he had made way for another creature to live the good life he did.
My opinion? Steak is delicious.
Chief and Teriyaki probably remembered Washington, but I highly doubt any of that knowledge passed on to his successor Glaucon even though there was overlap in their lives. Washington would be dead by now anyway- I was maybe twelve at the time we ate him- and what remains is the memories I have, and the shared family he has from nephews and so on being raised in the same way now. This is what I mean by patterns of thought- my great grandfather is dead, but since I've read his journal and heard stories about him from my father and grandmother, not every piece of him is gone. Odd phrases, family recipes, habits of thought, weird but cherished stories, these float alongside DNA down the generations. If every cow died tomorrow, humans would remember them for at least a thousand years. If every human died tomorrow, cows wouldn't remember us beyond a generation.
I'll read whatever you write in response to this, but I don't think there's much more to be gained from this conversation. You've moved from asking for perspectives to attempting to persuade via abstract means to attempting to persuade via emotional means, and while I don't begrudge you for that, I do think it's a sign neither of us are going to make any more headway.
Nice talking to you, and have a good day :)
Replies from: Elo↑ comment by Screwtape · 2017-06-30T19:06:43.210Z · LW(p) · GW(p)
If you're interested in a double crux, here's my half: if a majority of humans preferred not existing in the first place vs existing and having their lives cut short, then I would update strongly away from wanting farm animals to be properly raised for meat (that is, given enough space and exercise and food before being humanely killed). (I'd rather ask cows, but they're incapable of properly answering the question. [citation needed]) Alternately, if I found out that the animals I thought were having decent lives were in fact suffering badly for most of their lives, I would update towards either wanting more care taken or towards sufficiently good lives not being something we could provide to them. Either of these would cause me to decrease the amount of meat I consume, possibly cutting it out entirely.
I will say that the latter would probably require a high level of evidence- I'm not talking about the median farm in a study I've read, I'm talking about a decade spent working as a hand on one of the farms I buy from now. If those animals were generally suffering, then I am completely incapable of determining whether a non-human animal is suffering.
If you found out that people generally want to have lived rather than not have lived, and also that you could purchase meat from animals that had lived generally happy lives, would you start eating some meat?
↑ comment by gilch · 2017-07-01T06:06:26.846Z · LW(p) · GW(p)
How about the severely mentally disabled
If it's severe enough, I think this is a cultural question that could go either way, not a categorical evil. There are probably relatively good places in the Moral Landscape where this kind of thing is allowed. In the current culture, it would violate important Schelling points about not killing humans and such. Other things would have to change to protect against potential abuse before this could be allowed.
↑ comment by ZankerH · 2017-06-27T12:17:54.876Z · LW(p) · GW(p)
Meat tastes nice, and I don't view animals as moral agents.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2017-06-28T15:29:16.709Z · LW(p) · GW(p)
Are you claiming that a being must be a moral agent in order to be a moral patient?
Replies from: ZankerH↑ comment by ZankerH · 2017-06-29T09:27:52.889Z · LW(p) · GW(p)
Yes, with the possible exception of moral patients with a reasonable likelihood of becoming moral agents in the future.
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:26:09.070Z · LW(p) · GW(p)
Is it ok to eat severely mentally disabled humans, then?
Replies from: gilch, Elo↑ comment by Elo · 2017-06-29T22:00:29.815Z · LW(p) · GW(p)
If someone said yes?
Is it okay to eat a test-tube muscle? (probably yes)
Is it okay to eat vat-bodies grown without brains? (probably yes)
Is it okay to eat vat-bodies grown with a little brain to keep it growing correctly but that never wakes up? (probably yes)
↑ comment by Lumifer · 2017-06-27T00:26:02.325Z · LW(p) · GW(p)
I'm not addressing those that say morality is subjective
You think that people who believe in objective morality all agree on what this morality is?
Replies from: Zarm↑ comment by Zarm · 2017-06-27T03:16:16.675Z · LW(p) · GW(p)
I stated the morality I was talking about.... Plus I was asking for perspectives anyway. Why not give yours?
Replies from: Lumifer↑ comment by Lumifer · 2017-06-27T05:57:11.384Z · LW(p) · GW(p)
I'm a carnivore... and I have doubts about that objective morality thing.
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:34:54.533Z · LW(p) · GW(p)
I hope this is a joke. This is low-brow even for a response from an average person.
"canine teeth tho"
C'mon, I was expecting more from the LessWrong community.
I'm not saying there is objective morality. If you think it's subjective, I'm not addressing you here.
Replies from: Lumifer, satt↑ comment by Lumifer · 2017-07-07T01:02:33.498Z · LW(p) · GW(p)
So, what's your prior on the height of my brow? :-D And canines, yep, I have some, they are sharp & pointy.
In the interest of fair disclosure let me point out that I'm not really representative of the LW community.
What are my other choices in morality besides subjective and objective?
Replies from: Zarm↑ comment by Zarm · 2017-07-07T03:32:53.984Z · LW(p) · GW(p)
And canines, yep, I have some, they are sharp & pointy.
Go take down an antelope with only your teeth ;)
What are my other choices in morality besides subjective and objective?
I don't know. I was really trying to stray away from people arguing how there are no objective morals so killing and everything else is fine. I didn't want to argue about how there are no objective morals. There aren't objective morals, so I wanted to talk to people who actually had morals and would be willing to talk of them.
You don't have to fit into my false dichotomy. You do you.
Replies from: Lumifer↑ comment by Lumifer · 2017-07-07T14:30:50.423Z · LW(p) · GW(p)
Go take down an antelope with only your teeth ;)
Teeth are not for killing -- I have sharp sticks for that -- they are for tearing meat which spent some time over the fire. I can assure my canines function perfectly well in that role :-P
There aren't objective morals, so I wanted to talk to people who actually had morals
Coherent, you're not.
But you don't have to fit into my notions of coherency :-)
Replies from: Zarm↑ comment by Zarm · 2017-07-07T14:47:38.374Z · LW(p) · GW(p)
I have sharp sticks for that
That's my point. You're cheating now. Lions don't cheat.
over the fire.
Carnivores can eat raw meat.
I can assure my canines function perfectly well in that role
Nah they work better for plant foods. They aren't even very sharp. Also, other herbivores have canines but whatever.
Regardless, this bs^ doesn't matter: it makes no difference whether we have canines or whether they are useful, and we both know that. I don't need to explain the appeal-to-nature fallacy to you.
Coherent, you're not.
But you don't have to fit into my notions of coherency :-)
You're a fuck lol. I responded so nicely acknowledging I screwed up there, whatever.
In the interest of fair disclosure let me point out that I'm not really representative of the LW community.
Good.
Replies from: Lumifer↑ comment by Lumifer · 2017-07-07T15:48:01.393Z · LW(p) · GW(p)
You're cheating now.
Carnivores -- note the vore part -- eat meat; how you kill your food is pretty irrelevant.
Carnivores can eat raw meat
Yes, my canines can deal with carpaccio very well, thank you.
Nah they work better for plant foods
No, they don't. That's what molars are for.
Regardless, this bs^ doesn't matter
I am sorry, what is the proposition that you are defending?
You're a fuck lol.
Coherency. It's a thing. You should try it sometimes :-P Also, are you missing an adjective in there somewhere?
↑ comment by Viliam · 2017-06-27T09:32:47.870Z · LW(p) · GW(p)
One possible explanation is that most people here fail at following most of their plans; and veganism is not an exception, but a part of the pattern. For example, see this relatively ancient complaint:
what is Less Wrong? It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people. As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging". Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.
My feeling is that since this was written, things haven't improved; if anything, it has gone the other way round.
If this hypothesis is true, then if you classified the readers by how much of their plans they actually accomplish in real life, veganism would correlate with that positively, because, like everything else, it has two steps: (1) a decision to become a vegan, and (2) actually becoming a vegan; and people who are generally better at getting from step 1 to step 2 should also be better in this specific instance.
And then of course there are the rationalizations like:
"Do you care about reducing suffering, even the suffering of animals?"
"Sure I do."
"Are you a vegan?"
"No; but that's because I am a consequentialist -- I believe that creating a superintelligent Friendly AI will help everyone, including the animals, orders of magnitude more than me joining the ranks of those who don't eat meat."
"Then... I guess you are contributing somehow to the development of the Friendly AI?"
"Actually, I am not."
And then of course there are people like:
"Actually, I don't give a fuck about suffering or about Friendly AI; I am simply here to have fun."
Replies from: ChristianKl, Zarm↑ comment by ChristianKl · 2017-06-29T21:59:56.632Z · LW(p) · GW(p)
If this hypothesis is true, then if you classified the readers by how much of their plans they actually accomplish in real life, veganism would correlate with that positively,
We might actually have the numbers to see whether that claim is true. Our census does ask about veganism, and it likely also has some proxy for accomplishment.
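Purely as an illustration of how that check might look (the file name, the diet column, and the accomplishment proxy below are all invented, since I don't know the actual schema of the census export), something along these lines would do:

```python
# A minimal sketch, assuming a hypothetical CSV export of the census with a
# "diet" column and an "income" column used as a crude accomplishment proxy.
import pandas as pd

df = pd.read_csv("lw_census.csv")                    # hypothetical file name
df["vegan"] = (df["diet"] == "Vegan").astype(int)    # hypothetical column and label

# Point-biserial correlation between being vegan and the accomplishment proxy.
corr = df["vegan"].corr(df["income"])
print(f"correlation between veganism and the proxy: {corr:.3f}")
```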
↑ comment by Zarm · 2017-06-29T21:27:20.717Z · LW(p) · GW(p)
Honestly looking at the replies, this makes the most sense. I guess this is the answer I was looking for, thank you! I better understand the difference between thought and action.
Just curious, would you put yourself in this same category? If so, do you know why you or others are like this? What goes through your mind, since it obviously isn't rationalization? Is it lack of motivation? How do you live with it?
To be perfectly honest, I was hoping for better responses from the LessWrong community, many of whom would classify themselves as aspiring rationalists, but these seem only a bit better than the average reddit responses. There's so much motivated reasoning behind wanting to keep eating meat, and from what I've seen, veganism doesn't get brought up here that much because the community is so devoted to AI and related fields.
Would you happen to know of any places where there are rationalists debating applied ethics? Maybe that's mainly the type of thing effective altruism is doing.
Replies from: Viliam↑ comment by Viliam · 2017-06-30T09:46:36.954Z · LW(p) · GW(p)
would you put yourself in this same category?
I seriously suck at converting "what I would prefer to do" into "what I actually do" most of the time.
I have a far-mode preference for being a vegan. If you would choose healthy and tasty vegan food and cook it for me, I would eat it, and probably never look back. But I don't have time to do the research, and often I don't even have the time to cook. I generally respect the rule "the one who cooks is the one who makes the decisions", and my wife is not a vegan, and doesn't want to become one. (In a parallel Everett branch where I am a single guy, I probably eat vegan Joylent.)
do you know why you or others are like this? What goes through your mind, since it obviously isn't rationalization? Is it lack of motivation? How do you live with it?
Well, I can't be sure for myself what is truth and what is rationalization, because I suppose it looks the same from inside. But here are my guesses:
Humans are not automatically rational. Human brains sometimes do the most stupid things. For example, it is often difficult to think about unpleasant topics, because the brain mistakes "avoiding the danger" for "avoiding thinking about the danger". Like, there is a danger in the future you could actually avoid using some strategic thinking, but your brain prefers to simply not think about the coming danger (presumably because some deeper pre-human brain modules don't fully understand the difference between actual danger and imagined danger). So you can help people get new insights by simply telling them to sit down, calm down, and think about the topic for 5 minutes without getting distracted by anything else. And it really helps.
Then there is the issue of willpower being... yeah, I know that if I say "a limited resource", someone will reply with a link saying that the research is discredited. Still, my experience suggests that people who spent a pleasant day are more willing to do some unpleasant but important task in the evening, compared with people who had a busy and frustrating day and now finally have one hour of free time that they can spend either doing something unpleasant but important, or browsing the web. In the past I behaved irrationally for different reasons; now, having a small child, the usual reason is that I don't have time to actually sit down, think about things, and do them; or more precisely, I have an hour or two of such time in the late evening when the baby is sleeping, but... I am just too tired mentally, and prone to wasting time on stupid things without fully realizing what I am doing.
Motivation in my opinion often consists of feeling things in the "near mode", as opposed to merely knowing them in the "far mode". Like there is a difference between knowing that exercise can make you fit, and having a friend who regularly exercises and is fit. The latter is more likely to make you exercise, too; probably because the knowledge is no longer abstract. But there is also the social aspect that it is easier to do things that monkeys in your tribe are already doing. In the past, I guess my motivational problem was not having the right tribe. Also, members of your tribe can do the research for you; it's easier to e.g. copy vegan recipes from your neighbors, especially after you tasted the food and liked it. Then you can talk about where to buy the ingredients (connecting problem solving to social interaction) etc.
Summary:
- it's much easier in a group; but sometimes you simply don't have the right group;
- it's difficult to be strategic, in general, especially because "being strategic" is something you also need a strategy for.
So people often try alone, fail, give up, get depressed; or invent some rationalization to make the pain of failure go away.
Would you happen to know of any places where there are rationalists debating applied ethics? Maybe that's mainly the type of thing effective altruism is doing.
I am not a part of the EA community, so I don't have a first-hand experience how it functions.
Replies from: Zarm↑ comment by Zarm · 2017-06-30T13:54:00.724Z · LW(p) · GW(p)
people who spent a pleasant day are more willing to do some unpleasant but important task in the evening, compared with people who had a busy and frustrating day and now finally have one hour of free time that they can spend either doing something unpleasant but important, or browsing the web.
This makes a lot of sense. I'm a get-shit-done kinda guy and this could possibly be because I'm also very happy most of the time. I think that I've had a bunch of unintentional and intentional success spirals that I'm very grateful for.
The thing about "far mode" vs "near mode" makes a lot of sense when you were talking about exercise as well.
Just on a personal note and from personal experience (anecdotal, but there is probably non-anecdotal evidence as well somewhere), eating healthy and exercise are the types of things that give you more energy and motivation to do more things that give you even more energy. They are like their own success spirals.
I appreciate your in-depth response, thank you.
↑ comment by gilch · 2017-07-01T04:16:47.098Z · LW(p) · GW(p)
Veganism seems well-intentioned, but misguided. So then, your main reason for veganism is some sense of empathy for animal suffering? My best guess for vegans' motives is that they merely signal that empathy for social status, without any real concern for their real-world impact on animal welfare.
Empathy is a natural human tendency, at least for other members of the tribe. Extending that past the tribe, to humans in general, seems to be a relatively recent invention, historically. But it does at least seem like a useful trait in larger cities. Extending that to other animals seems unnatural. That doesn't mean you're wrong, per se, but it's not a great start. A lot of humans believe weird things. Animistic cultures may feel empathy for sacred objects, like boulders or trees, or dead ancestors, or even imaginary deities with no physical form. They may feel this so strongly that it outweighs concern for their fellow humans at times. Are you making the same mistake? Do mere rocks deserve moral consideration?
So there are things that are morally important and things that are not. Where do we draw that line? Is it only a matter of degree, not kind? How much uncertainty do we tolerate before changing the category? If you take the precautionary principle, so that something is morally important if there's even a small chance it could be, aren't you the same as the rock worshipers neglecting their fellow humans?
Why do you believe animals can suffer? No, we can't take this as a settled axiom. Many people do not believe this. But I'll try to steelman. My thoughts are that generally humans can suffer. Humans are a type of animal, thus there exists a type of animal that can suffer. We are related to other species in almost exactly the same sense that we are related to our grandparents (and thereby our cousins), just more generations back. Perhaps whatever makes us morally relevant evolved before we were human, or even appeared more than once through convergent evolution. Not every organism need have this. You are related to vegetables in the evolutionary sense. That's why they're biochemically similar enough to ourselves that we can eat them. You're willing to eat vegetables, so mere relation isn't enough for moral weight.
By what test can we distinguish these categories? Is it perhaps the mere fact of an aversive behavior to stimulus? Consider the Mimosa pudica, a plant that recoils to touch. Is it morally acceptable to farm and kill such a plant? That's just an obvious case. Many plants show aversive behaviors that are less obvious, like producing poisons after injury, even releasing pheromones that stimulate others nearby to do the same. But again, you're fine with eating plants. Consider that when you burn your finger, your own spinal cord produces a reflexive aversive behavior before the nerve impulse has time to reach your brain. Is your spine conscious? Does it have moral weight by itself? Without going into bizarre thought experiments about the moral treatment of disembodied spinal cords, I think we can agree a conscious mind is required to put something in the "morally relevant" category. I hope you're enough of a rationalist to be over the "soul" thing. (Why can't vegetables have souls? Why not rocks?) I think it is a near certainty that the simplest of animals (jellyfish, say) are no more conscious than vegetables. So merely being a member of the animal kingdom isn't enough either.
So which animals then? I think there's a small chance that animals as simple as honeybees might have some level of conscious awareness. I also think there's a significant chance that animals as advanced as gorillas are not conscious in any morally relevant way. Gorillas, notably, cannot pass the mirror test. Heck, I'm not even sure if Dan Dennet is conscious! So why are we so worried about cows and chickens? I am morally opposed to farming and eating animals that can pass the mirror test.
Gorillas to honeybees are pretty wide error bars. Can we push the line any farther down? Trying to steelman again. What about humans too young to pass the mirror test? Is it morally acceptable to kill them? Are vegans as a subculture generally pro life, or pro choice? On priors, I'd guess vegans tend Democratic Party, so pro choice, but correct me if I'm wrong. It seems so silly to me that I can predict answers to moral questions with such confidence based on cultural groups. But it goes back to my accusation of vegans merely signaling virtue without thinking. You're willing to kill humans that are not conscious enough. So that fails too.
Even if there's some degree of consciousness in lesser beings, is it morally relevant? Do they suffer? Humans have enlarged frontal lobes. This evolved very recently. It's what gives us our willpower. This brain system fights against the more primitive instincts for control of human behavior. For example, human sex drive is often strong enough to overcome that willpower. (STIs case in point.) But why did evolution choose that particular strength? Do you think humans would still be willing to reproduce if our sex drive was much weaker than it is? This goes for all of the other human instincts. It has to be strong enough to compete against human will. Most notably for our argument, this includes the strength of our pain response, but it also applies to other adverse experiences, like fear, hunger, loneliness, etc. What might take an overwhelming urgency in human consciousness to get a human to act, might only require a mild preference in lower animals, which have nothing better to do anyway. Lower animals may have some analogue to our pain response, but that doesn't mean it hurts.
I think giving any moral weight to the inner experience of cows and chickens is already on very shaky ground. But I'm not 100% certain of this. I'm not even 90% certain of this. It's within my error bars. So in the interest of steelmanning, lets grant you that for the sake of argument.
Is it wrong to hurt something that can suffer? Or is it just sometimes the lesser of evils? What if that thing is evil? What if it's merely indifferent? If agents of an alien mind indifferent to human values (like a paperclip maximizer) could suffer as much as humans, but have no more morality than a spider, would it be wrong to kill them? Would it be wrong to torture them for information? They would cause harm to humans by their very nature. I'd kill them with extreme prejudice. Most humans would even be willing to kill other humans in self-defense. Pacifistic cavemen didn't reproduce. Pacifistic cultures tend to get wiped out. Game theory is relevant to morality. If a wild animal attacks your human friend, you shoot it dead. If a dog attacks you while you're unarmed you pin it to the ground and gouge out its brains through its eye socket before it rips your guts out. It's the right thing to do.
If you had a pet cat, would you feed it a vegan diet? Even though it's an obligate carnivore, and would probably suffer terribly from malnutrition? Do carnivores get a pass? Do carnivores have a right to exist? Is it okay to eat them instead? Is it wrong to keep pets? Only if they're carnivores? Why such prejudice against omnivores, like humans? Meat is also a natural part of our diet. Despite your biased vegan friends telling you that meat is unhealthy, it's not. Most humans struggle getting adequate nutrition as it is. A strict prohibition on animal products makes that worse.
But maybe you think farm animals are more innocent than indifferent. They're more domesticated. Not to mention herbivores. Cows have certainly been known to kill humans though. Pigs even kill human children. Maybe cows are not very nice people. But if I'm steelmanning, I must admit that self-defense and factory farming are very different things. But why aren't you okay with hunting for food? What about happy livestock slaughtered humanely? If you are okay with that, then support that kind of farm with your meat purchases, have better health for the more natural human diet, and make a difference instead of this pointless virtue signalling. If you're not okay with that, then it's not just about suffering, is it? That was not your true objection.
Then what is?
Replies from: gilch↑ comment by gilch · 2017-07-01T04:17:00.466Z · LW(p) · GW(p)
Is it some deontological objection to killing living things? Vegetables are also alive. To killing animals in particular? I thought we were over this "soul" thing. Is it about cutting short future potential? These aren't humans we're talking about. They don't invent devices or write symphonies. Is it about cutting short future positive experiences? Then consciousness is still important.
You are not innocent.
Commercial vegetable farming kills animals! Pesticides kill insects with nerve gas. If they're conscious, that's a horrible way to die. But that wasn't your true objection. It cuts short future experiences. Or are bugs also below even your threshold for moral relevance? In that case, why not eat them? Even so, heavy farm equipment like combines kill small mammals, like mice and voles. That's why people occasionally find severed rodent heads in their canned green beans. The government has limits for this sort of impurity, but it's not zero. It simply wouldn't be practical economically.
So if farming mere vegetables also kills animals, why not become an ascetic? Just stop eating. You can reduce your future harm to zero, at the cost of one human. Your instincts say no? Ascetic cavemen did not reproduce. Game theory is relevant to morality.
Now you see it's a numbers game. You can't eliminate your harm to animals. You cannot live without killing. You still believe even bugs are morally relevant. You've even rejected suicide. So now what do you do? What can you do? It's a numbers game. You have to try to minimize the harm rather than eliminate it. (At least before the Singularity). Is veganism really the best way to do that?
No, it really is not. Forget about your own diet. It's not an effective use of your limited resources. Try to improve the system. Fund science to determine where the threshold of consciousness is, so you can target your interventions appropriately. Fund more humane pesticides, that work faster. Fund charities that change meat in school lunch from chicken to beef. Blasphemy, you say? You are not innocent! How many calories in one cow? How many chickens do you have to slaughter to feed as many children as one cow? Numbers game. Take this seriously or you're just signaling.
I think I've laid out a pretty good case for why Veganism makes no sense, but since virtue signaling is important to your social status, I'm sure you'll come up with some rationalization I haven't thought of in order to avoid changing your mind.
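To put a rough number on the "numbers game" point above, here is a back-of-envelope sketch; the per-animal yield and calorie figures are illustrative assumptions, not sourced data:

```python
# Back-of-envelope only: illustrative assumptions, not sourced figures.
cow_edible_kcal = 450_000      # assume roughly 200 kg of beef at ~2,250 kcal/kg
chicken_edible_kcal = 3_000    # assume roughly 1.5 kg of meat at ~2,000 kcal/kg

chickens_per_cow = cow_edible_kcal / chicken_edible_kcal
print(f"one cow supplies roughly as many calories as {chickens_per_cow:.0f} chickens")
```

On assumptions anywhere in that ballpark, one slaughtered cow replaces on the order of a hundred or more slaughtered chickens, which is the trade the comment is gesturing at.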
Replies from: Zarm, Zarm↑ comment by Zarm · 2017-07-01T15:51:44.197Z · LW(p) · GW(p)
Is it some deontological objection to killing living things?
Nope
Vegetables are also alive.
And?
To killing animals in particular?
Yes, all mammals, birds, and more are conscious. Many are also self-aware. Pigs are of similar intelligence to dogs, so it is quite plausible that they are self-aware just as dogs are.
I thought we were over this "soul" thing.
Stop being so condescending please.
Commercial vegetable farming kills animals! Pesticides kill insects with nerve gas. If they're conscious, that's a horrible way to die. But that wasn't your true objection. It cuts short future experiences.
Trophic levels.
In that case, why not eat them?
If they aren't conscious, I'm not against it. However, we don't know enough right now on insect consciousness; the subject is very hazy.
You can't eliminate your harm to animals.
Strawman. No vegan has ever said that. Vegans are constantly correcting people, saying that it's about minimizing suffering, not eliminating it.
Try to improve the system.
I'm doing both. I'm an effective altruist and anti-speciesist.
Fund science to determine where the threshold of consciousness is, so you can target your interventions appropriately.
There's more information on this than you suggest.
Fund charities that change meat in school lunch from chicken to beef.
I get what you're saying here, but that's silly. I'd push for plant foods.
Take this seriously or you're just signaling.
I do. This isn't signalling. I've thought about every single question you've asked here at prior times. These aren't new to me.
I think I've laid out a pretty good case for why Veganism makes no sense
Not really. You seem extremely opposed to it.
I'm sure you'll come up with some rationalization I haven't thought of in order to avoid changing your mind.
Because there's no possible way I could be right, right? It'd have to be rationalizing, lol.
Replies from: gilch↑ comment by gilch · 2017-07-01T18:10:56.929Z · LW(p) · GW(p)
Vegetables are also alive.
And?
That was only if you answered "yes" to the previous question. You didn't, so never mind.
Stop being so condescending please.
I'm doing both. I'm an effective altruist
Public posts are talking to the general audience, not just to you. Veganism seems more religious than rational (like politics), but I'll try to tone it down since you seem more reasonable and asked nicely. Assume good faith. Tone doesn't come through well in writing, and it's more on the reader than the writer.
If they aren't conscious, I'm not against it.
Then why not eat eggs? I don't mean the factory-farmed kind. If the hens were happy would it be okay? If yes, you should be funding the farms that treat their hens better with your food purchases, even if it's not perfect, to push the system in a good direction.
Vegans are constantly correcting people, saying that it's about minimizing suffering, not eliminating it.
Because there's no possible way I could be right, right? It'd have to be rationalizing, lol.
that's silly. I'd push for plant foods.
Even if that were the more effective intervention? Forget about the diet thing. It's not that effective. Do what actually makes a difference. Use your buying power to push things in a good direction, even if that means eating meat in the short term. See http://slatestarcodex.com/2015/09/23/vegetarianism-for-meat-eaters/
Trophic levels.
It's relevant in some cases, but I don't entirely buy that argument. Efficiency, yes, but morality? On marginal land not fertile enough for farming, you can still raise livestock. No pesticides. What about wild-caught fish? Those are predators higher up the food chain, but they have a more natural life before they're caught.
↑ comment by Zarm · 2017-07-01T15:42:42.811Z · LW(p) · GW(p)
Animistic cultures may feel empathy for sacred objects, like boulders or trees, or dead ancestors, or even imaginary deities with no physical form.
Yeah... I'm not buying this; that's a giant false equivalency. You're comparing inanimate objects to conscious beings; you're also comparing religion and spiritual cultures to scientific arguments.
Where do we draw that line? Is it only a matter of degree, not kind? How much uncertainty do we tolerate before changing the category? If you take the precautionary principle, so that something is morally important if there's even a small chance it could be, aren't you the same as the rock worshipers neglecting their fellow humans?
I'm not saying this is black and white, but it's not nearly as gray as you're making it out to be. You can pick solid, reasonably non-arbitrary places to put the line. To start, biocentrism is never actually followed; no one cares about bacteria. Anthropocentrism is based on a marginal-cases argument and speciesism. Sentiocentrism is based on the fact that these other beings feel things like ourselves and experience the world subjectively. That's why I pick sentiocentrism. Sentient beings can suffer, and I inherently think suffering is wrong. If you don't think suffering is wrong, there's nothing to say here.
Why do you believe animals can suffer? No, we can't take this as a settled axiom. Many people do not believe this. But I'll try to steelman. My thoughts are that generally humans can suffer. Humans are a type of animal, thus there exists a type of animal that can suffer. We are related to other species in almost exactly the same sense that we are related to our grandparents (and thereby our cousins), just more generations back. Perhaps whatever makes us morally relevant evolved before we were human, or even appeared more than once through convergent evolution. Not every organism need have this. You are related to vegetables in the evolutionary sense. That's why they're biochemically similar enough to ourselves that we can eat them. You're willing to eat vegetables, so mere relation isn't enough for moral weight.
http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf
You're really bringing up the plant argument? Even if plants were morally considerable, which they aren't because they can't suffer, it would be more altruistic to eat plants because of the way trophic levels work.
Consider the Mimosa pudica, a plant that recoils to touch. Is it morally acceptable to farm and kill such a plant? That's just an obvious case.
Aversive behavior is not indicative of suffering. Plants don't even have a central nervous system.
But again, you're fine with eating plants.
You say I'm fine with eating plants, as if this isn't a problem to you. If you care so much about plants, become a Jain.
I think it is a near certainty that the simplest of animals
Based off what evidence? I'm not saying something either way for animals like jellyfish, but you can't just say "near-certain" with no backing.
I also think there's a significant chance that animals as advanced as gorillas are not conscious in any morally relevant way.
Where are you getting this? You have nothing backing this. You can say you think this, but you can't randomly say there's a "significant chance."
I am morally opposed to farming and eating animals that can pass the mirror test.
See, this gets more nuanced than you probably originally thought. When it comes to the mirror test, it's not as black and white as you may think. Not all animals use sight as their primary sense. Just learning this piece of information calls for a revamp of the test; it's not as if this test was the most foolproof to begin with. I bring this up because dogs were recently found to be self-aware based on smell rather than sight; dogs primarily use sound and smell. Many other animals use other senses.
https://en.wikipedia.org/wiki/Mirror_test
Look at criticisms of the test^
What about humans too young to pass the mirror test? Is it morally acceptable to kill them?
That's a question you gotta ask of your own moral framework. You're the one who wants to kill non-self-aware beings.
Are vegans as a subculture generally pro life, or pro choice?
This is irrelevant. Most vegans are pro-choice, because pro-life is based on very little actual evidence while pro-choice is.
Lower animals may have some analogue to our pain response, but that doesn't mean it hurts.
What does "lower animals" mean? And sure it does; mammals and birds are conscious, so it isn't just a pain response. It isn't just nociception.
If a wild animal attacks your human friend, you shoot it dead. If a dog attacks you while you're unarmed you pin it to the ground and gouge out its brains through its eye socket before it rips your guts out. It's the right thing to do.
Rhetorical appeal haha?
if you had a pet cat, would you feed it a vegan diet? Even though it's an obligate carnivore, and would probably suffer terribly from malnutrition? Do carnivores get a pass? Do carnivores have a right to exist? Is it okay to eat them instead? Is it wrong to keep pets? Only if they're carnivores? Why such prejudice against omnivores, like humans? Meat is also a natural part of our diet. Despite your biased vegan friends telling you that meat is unhealthy, it's not. Most humans struggle getting adequate nutrition as it is. A strict prohibition on animal products makes that worse.
I have answers for all of these, but you asked so many loaded questions. I don't have a pet cat, but if I did, I would preferably feed said cat lab meat. Do carnivores get a pass? They don't get a pass, but for now they do require meat. I'm in favor of minimizing wild animal suffering, and there are different strategies for that, such as genetic engineering, lab meat for carnivores, etc. This is too far in the future because we don't have control of the biosphere, so the environment will have to do for now. Is it wrong to keep pets? No, as long as they aren't treated as property and are instead treated as family. Meat is also a natural part of our diet? Wait, so something being natural makes it right? Nope. Despite your biased vegan friends telling you that meat is unhealthy, it's not. Thanks for being condescending, but no, I've done the research myself. It can be healthier. High doses of processed meat are unhealthy; I can link sources.
But maybe you think farm animals are more innocent than indifferent. They're more domesticated. Not to mention herbivores. Cows have certainly been known to kill humans though. Pigs even kill human children. Maybe cows are not very nice people.
This is low, especially for a rationalist. We both know that had nothing to do with anything and was only a rhetorical appeal. It's not like humans are all nice people either: murder, wars, etc.
All I'm seeing here is a whole ton of sophisticated arguing and a complete lack of actual knowledge on the subject. You wrongly assume that vegans are hippies who act on their feelings with no factual basis behind it. You sure it's not that you just really want to keep eating your tasty bacon and steak?
You say you're against killing self-aware beings. If pigs were proven to be self-aware, would you quit eating them?
Replies from: ChristianKl, gilch, gilch↑ comment by ChristianKl · 2017-07-01T16:15:38.232Z · LW(p) · GW(p)
You're really bringing up the plant argument? Even if plants were morally considerable, which they aren't because they can't suffer, it would be more altruistic to eat plants because of the way trophic levels work.
It's not clear that plants can't suffer: http://reducing-suffering.org/bacteria-plants-and-graded-sentience/#Plants
↑ comment by gilch · 2017-07-01T19:06:10.307Z · LW(p) · GW(p)
you're also comparing religion and spiritual cultures to scientific arguments.
Because veganism seems more like religion than science. You give the benefit of the doubt to even bugs based on weak evidence.
Based off what evidence? I'm not saying something either way for animals like jellyfish, but you can't just say "near-certain" with no backing.
No backing? How about based on the scientific fact that jellyfish have no brain? They do have eyes and neurons, but even plants detect light and share information between organs. It's just slower. I find it bizarre that vegans are okay with eating vegetables, but are morally opposed to eating other brainless things like bivalves. It is possible to farm these commercially. https://sentientist.org/2013/05/20/the-ethical-case-for-eating-oysters-and-mussels/
Replies from: Zarm↑ comment by Zarm · 2017-07-01T19:11:01.056Z · LW(p) · GW(p)
You give the benefit of the doubt to even bugs based on weak evidence.
No I don't. I never said anything close to that. In fact, I don't even think there's enough evidence to warrant me giving up honey.
but are morally opposed to eating other brainless things like bivalves.
Again, not opposed to this. I never said anything about this either. Stop assuming positions.
Replies from: gilch↑ comment by gilch · 2017-07-01T21:04:50.256Z · LW(p) · GW(p)
Stop assuming
That's unreasonable. Humans have to assume a great deal to communicate at all. It takes a great deal of assumed background knowledge to even parse a typical English sentence. I said "vegans" are opposed to eating brainless bivalves, not that "Zarm" is. Again I'm talking to the audience and not only to you. You claim to be a vegan, so it is perfectly reasonable to assume on priors you take the majority vegan position of strict vegetarianism until you tell me otherwise (which you just did, noted). You sound more like a normal vegetarian than the stricter vegan. Some weaker vegetarian variants will still eat dairy, eggs, or even fish.
My understanding is the majority of vegans generally don't eat any animal-derived foods whatsoever, including honey, dairy, eggs, bivalves, insects, gelatin; and also don't wear animal products, like leather, furs, or silk. Or they at least profess to this position for signaling purposes, but have trouble maintaining it. Because it's too unhealthy to be sustainable long term.
↑ comment by gilch · 2017-07-01T17:33:12.298Z · LW(p) · GW(p)
You say you're against killing self-aware beings. If pigs were proven to be self-aware, would you quit eating them?
That's not exactly what I said, but it's pretty close. I established the mirror test as a bound above which I'd oppose eating animals. That is only a bound--it seems entirely plausible to me that other animals might deserve moral consideration, but the test is not simply self-awareness.
Absolute proof doesn't even exist in mathematics--you take the axioms on faith, but then you can deduce other things. At the level of pigs, logical deduction breaks down. We can only have a preponderance of the evidence. If that evidence were overwhelming (and my threshold seems different than yours), then yeah, I'd be morally opposed to eating pigs, other things being equal. In that case I'd take the consequentialist action that does the most good by the numbers. Like funding a charity to swap meats in school lunch (or better yet, donating to MIRI), rather than foregoing pork in all circumstances. That pigs in particular might be self aware already seems plausible on the evidence, and I've already reduced my pork intake, but at present, if I was offered a ham sandwich at a free lunch, I'd still eat it.
Replies from: Zarm↑ comment by Zarm · 2017-07-01T18:06:42.579Z · LW(p) · GW(p)
Me being vegan isn't my only course of action. I convince others (on a micro level and I plan to do it on a macro level), I plan to donate to things, and push for actions like the one you said, but not really focused on school. I'm just getting into effective altruism, so obviously I'm more into consequentialist actions.
Part of me being vegan is so that I can convince others, not just the physical amount of meat I forego. You can't really convince others on a micro, macro, or institutional level if you yourself aren't following it.
Replies from: username2, Elo↑ comment by Jiro · 2017-06-29T13:37:43.834Z · LW(p) · GW(p)
I'm extremely surprised that the percentage of vegans here is only slightly higher than the general public.
Actually, I'd suggest that that's evidence that your premise is false. In other words, if veganism is not as correct as you think, that explains away your troubling observation. There are things which rationalists believe at higher frequency than the general public, such as atheism. The fact that veganism is not one should tell you something.
I can literally play bingo with the responses I get from the average person, that's how reoccurring the rationalizations are.
The average person is not good at arguing anything, whether correct or not.
Furthermore, seeing "rationalizations" demonstrates exactly nothing. If you believe in X, and X is true, you'll see rationalizations from people who don't believe X. If you believe in X and X is false, you'll see good reasons from people who don't believe X, but those good reasons will look like rationalizations to you. No matter what the true state of affairs is, whether you are right or wrong, you'll "see rationalizations".
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:09:27.714Z · LW(p) · GW(p)
I get what you're saying. That's not evidence that I'm wrong, though. It's not really evidence towards anything. How about we actually discuss the issue rather than what other people think?
Replies from: username2↑ comment by username2 · 2017-06-30T12:58:12.929Z · LW(p) · GW(p)
Actually that's exactly what it is: pretty darn close to the textbook definition of Bayesian evidence. If you believe that rationalists do what they claim to do and modify their behavior in ways that are truth-seeking, and if objective studies show that rationalists have not significantly updated in the direction of veganism, that would in fact be Bayesian evidence against the truth of vegan arguments.
That doesn't mean veganism is wrong. Maybe all the meat eaters on LW are in fact wrong, and there's some sort of collective delusion preventing people from accepting and/or acting upon arguments for veganism. But an intellectually honest approach would be to recognize the possibility that you are wrong, as that is a result also consistent with the observed evidence.
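For concreteness, here is a minimal worked version of that point with entirely made-up numbers; the prior and both likelihoods are placeholders chosen only to show the direction of the update:

```python
# Hypothesis H: the vegan arguments are correct (and rationalists update on correct arguments).
# Observation E: LW vegan rates are only slightly above the general public's.
p_h = 0.5                # placeholder prior on H
p_e_given_h = 0.2        # under H, low uptake would be somewhat surprising
p_e_given_not_h = 0.6    # under not-H, low uptake is unsurprising

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"posterior P(H|E) = {p_h_given_e:.2f}")   # 0.25, down from the 0.5 prior
```

The posterior comes out below the prior for any choice of numbers where the observation is less likely under H than under not-H, which is all the comment is claiming.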
Replies from: Zarm↑ comment by Zarm · 2017-06-30T13:46:19.763Z · LW(p) · GW(p)
I see now that it could be evidence. I do think it's the second paragraph, though. I have questioned myself very deeply on this from many angles and I still think I'm correct. I think there's a collective delusion about acting upon arguments for veganism. The difference between veganism and a lot of other philosophies is that you actually have to do something on a day-to-day basis.
I am very open to being wrong. I know exactly what information would have to be presented to change my opinion.
Replies from: ChristianKl, username2↑ comment by ChristianKl · 2017-06-30T23:05:15.078Z · LW(p) · GW(p)
I know exactly what information would have to be presented to change my opinion.
Okay, I'm open to hearing what kind of information you would require.
Replies from: Zarm↑ comment by Zarm · 2017-07-01T16:11:41.170Z · LW(p) · GW(p)
I'd like to hear what information a lot of you would require for your minds to be changed as well!
So the crux of the matter for me is the consciousness of mammals and birds and some other nonhuman animals. As you go down the 'complexity scale', the consciousness of certain beings gets more debatable. It is less certain that fish are conscious than that mammals are, and less certain still for insects than for fish. However, there is a substantial amount of evidence supporting consciousness in all mammals, birds, and some others, based on evolution, physiology, and behavior.
http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf
One of the following would have to be proven for me to change my opinion:
1) That mammals and birds are not conscious, i.e., they do not feel subjectively nor can they suffer, or that their consciousness doesn't matter in any meaningful way. Or maybe convincing me that if N (N being how many animals are equal to a human) is an extremely large number, nonhuman animals should not be considered
2) That consciousness shouldn't be a determining factor over moral consideration (I doubt this will change as this is my baseline morality).
I would continue being vegan for other reasons, but I would not try to convince others, unless the following were proven:
1) That factory farming does not actually contribute a significant amount of greenhouse gases
2) That meat production could be more sustainable compared to plant farming to feed a growing population (I personally think food should be given to people, but that gets into a different political issue)
3) Along with 2), that meat consumption wasn't contributing so highly to water scarcity
4) That it was perfectly healthy in average portions (This may already be proven)
Replies from: ChristianKl↑ comment by ChristianKl · 2017-07-01T16:24:21.863Z · LW(p) · GW(p)
I don't think you are very open-minded if you require those criteria for a change of opinion. You are basically arguing that even if I would reduce the total amount of animal suffering by eating meat, it would still be wrong for me to eat meat (the way it's wrong to push the fat man onto the tracks).
Replies from: Zarm↑ comment by Zarm · 2017-07-01T18:01:08.519Z · LW(p) · GW(p)
Well yes, it would still be wrong. I'm talking about the act itself. You would be doing better than the majority of other people because you saved a bunch, but you're still doing something wrong.
For instance, if you saved 100 people, it's still wrong to kill one.
I think that's what you were saying? If not, could you rephrase, because I don't think I understood you perfectly.
Also, could you explain what information you have to get to change your mind?
Replies from: ChristianKl↑ comment by ChristianKl · 2017-07-01T18:03:52.317Z · LW(p) · GW(p)
Also, could you explain what information you have to get to change your mind?
I'm open to changing my mind based on unexpected arguments.
Replies from: Zarm↑ comment by entirelyuseless · 2017-06-27T13:41:19.618Z · LW(p) · GW(p)
The simple answer is that I care about humans more than about other animals, to an extremely large degree. So other things being equal, I would prefer that the other animals suffer less. But I do not prefer this when it means slightly less utility for a human being.
So my general response about "veganism" is "that's ridiculous," since it is very strongly opposed to my utility function.
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:25:41.179Z · LW(p) · GW(p)
The only way for your argument to work is if you think a human's brief minutes of taste outweigh the happiness of the entire animal's life, which is extremely ludicrous. This isn't a nonhuman animals vs humans thing anyway. You can be just as happy and have just as much great-tasting food as a vegan.
I addressed a similar argument above to a self-labeled pescetarian.
I think that starting off by claiming an opposing position is "ridiculous" is very counter productive. Also a debate like this probably doesn't have a "simple answer."
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-06-29T23:22:11.264Z · LW(p) · GW(p)
The only way for your argument to work is if you think a human's brief minutes of taste outweigh the happiness of the entire animal's life, which is extremely ludicrous.
A good first approximation of the value of a human compared to the value of a chicken would be that the human is 10,000,000 times as valuable, since it is possible to buy a chicken for a few dollars, while the life of a human is usually measured in millions of dollars. If this is the case, there is nothing ludicrous about supposing that those brief minutes outweigh the chicken's entire life.
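As a quick back-of-envelope check with placeholder figures (illustrative numbers only, not the commenter's): a chicken at roughly $5 against a human life valued at roughly $5,000,000 gives

\[ \frac{5{,}000{,}000}{5} = 1{,}000{,}000, \]

a factor of about a million, which is consistent with the later remark that an exact economic calculation would come out below 10,000,000.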
Replies from: gjm, Viliam↑ comment by gjm · 2017-06-30T00:00:00.401Z · LW(p) · GW(p)
This assumes that the cost of a chicken reflects the value of the chicken's life. Since the chicken has no influence to speak of over that cost, this in turn assumes that humans (collectively, making up The Market) have a good estimate of the value of the chicken's life.
But (1) it seems fairly clear that The Market doesn't give a damn about the aspect of the chicken's life we're concerned with here, and therefore there's no way for the price to reflect that value (as opposed to the value of the chicken's dead body to us, which it certainly will reflect), and relatedly (2) we already know what humans collectively think about the value of a chicken's life (namely, that it's extremely small). Anyone who's asking this question at all is already doubting the common wisdom about the value of chicken lives, and would therefore be ill advised to take The Market's word for it.
(#1 is perhaps an oversimplification. It's possible in principle for people who assign some moral value to chickens' lives to influence the price; e.g., if they buy chickens in order to set them free. But this will only have non-negligible impact on typical chicken prices if those people buy an enormous number of chickens, which requires them to be very numerous or very rich or both. Especially as a serious attempt to improve chicken welfare this way would be a lot more complicated than just buying chickens and letting them go; I doubt chickens do all that well in the wild. You'd have to provide them with somewhere to live. Actually, most likely the biggest effect of this on the welfare of chickens would probably be via whatever the change in price did to the already-existing chicken farming industry. Higher prices would presumably mean fewer chickens sold but more profit per chicken, which probably would end up improving the lives of farmed chickens and reducing their number. But, again, not going to happen until such time as there are a lot more people very keen to improve chickens' lives.)
Replies from: entirelyuseless, Zarm↑ comment by entirelyuseless · 2017-06-30T05:18:13.211Z · LW(p) · GW(p)
I don't disagree much with this. I am simply saying that I actually do care that little about chickens, and so do most people.
Replies from: gjm↑ comment by Viliam · 2017-06-30T09:53:53.060Z · LW(p) · GW(p)
the life of a human is usually measured in millions of dollars
I am sure there are markets where you can buy a human much cheaper.
(This is just a technical statement of fact, without any endorsement.)
Replies from: entirelyuseless, username2↑ comment by entirelyuseless · 2017-06-30T14:24:20.475Z · LW(p) · GW(p)
Sure. I wasn't trying to use an exact economic calculation. In fact if you tried to do that you would get a factor of less than 10,000,000, while I meant that as a minimum. I think a hundred million might end up being more accurate.
I don't see what people think is so unreasonable about this. I remember someone making this kind of vegan/vegetarian argument saying, "Let's assume a chicken has only 5% of the value of a human being..." and my response is "What?!?! Do you think that I'm going to hesitate more than a tenth of a second before choosing to kill 20 chickens rather than a human?"
Replies from: Viliam↑ comment by Viliam · 2017-07-03T08:55:26.123Z · LW(p) · GW(p)
I suspect people are doing the math wrong here. I agree that the value of a chicken's life is smaller than 5% of the value of a human's life. But that doesn't automatically imply that the value of a chicken's life is smaller than the amount of pleasure the human derives from eating the chicken for lunch, as opposed to eating some vegetables instead.
I suspect that the usual human scope insensitivity makes some people conclude "if it is less than 1% of the value of a human life, it means that it is for all practical purposes zero, which means I am morally free to waste it in any quantities, because zero multiplied by any number remains zero". Uhm, it doesn't work this way.
Making someone eat broccoli for lunch instead of chicken is not the same as literally killing them. So when we discuss veganism, we shouldn't compare "chicken life" vs "human life", because that is not what the debate is about; that is a red herring. We should compare "chicken life" vs "additional utility a human gets from eating a chicken as opposed to eating something else".
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-04T01:27:01.675Z · LW(p) · GW(p)
I agree that we should be comparing the chicken's life (or a percentage of its life) to the extra utility from eating the chicken. The point is that the small utility that the human gets is still larger than the value of the chicken life, because the entire value of the human life is so vastly greater than the value of the chicken life.
Replies from: Viliam↑ comment by Viliam · 2017-07-06T12:32:49.673Z · LW(p) · GW(p)
Some Fermi estimates; feel free to disagree with specific numbers and provide your own.
Let's take an average human life as a unit of value; i.e. the value of a human's life is 1.
How large a part of "the value of a human's life" is "having lunch, in general, as opposed to only having a breakfast and a dinner every day of your life"? Let's say it's somewhere between 1/10 and 1/100, because there are many other things humans value, such as not being in pain, or having sex, or having status, or whatever.
If we estimate an average human life to be about 10 000 or 20 000 days, then "having this specific lunch" is between 1/10 000 and 1/20 000 of "having lunch, in general".
But the choice is actually not between having a lunch and not having a lunch, but between having a chicken lunch or having a vegan lunch. Let's say the taste of chicken provides between 1/4 and 1/10 of the value of a lunch.
Putting these numbers together, the value of "having a chicken for a specific lunch" is about 1 / 1 000 000 of the value of a human life.
As a quick check, imagine that you live in a vegan country, where chickens are simply not available for lunch. Would you sell 1% of your remaining lifespan (less than 1 year) to the Devil in return for having a chicken for lunch each day of your life? I guess many people would, probably even more than 1%; and the revealed preferences (e.g. people dying as a result of salmonella) seem to match this.
So, it seems like ethically it is right to eat chicken if and only if the value of a human life is greater than the value of 1 000 000 chickens' lives. Which, according to many people, it is.
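To make the arithmetic explicit, here is a rough sketch of the same Fermi estimate as runnable code; the inputs are just the midpoints of the guessed ranges above, not measured quantities.

```python
# Fermi sketch: value of "chicken for this specific lunch" as a fraction of a human life.
# All inputs are rough guesses (roughly the midpoints of the ranges discussed above).

value_of_human_life = 1.0          # normalize an average human life to 1
lunches_share_of_life = 1.0 / 30   # "having lunch at all" is ~1/10 to 1/100 of a life
lunches_per_life = 15_000          # ~10 000 to 20 000 lunches in a life
chicken_share_of_lunch = 1.0 / 7   # chicken taste is ~1/4 to 1/10 of a lunch's value

value_of_one_chicken_lunch = (
    value_of_human_life * lunches_share_of_life / lunches_per_life * chicken_share_of_lunch
)
print(value_of_one_chicken_lunch)      # ~3e-7, i.e. on the order of 1 / 1 000 000
print(1 / value_of_one_chicken_lunch)  # ~3 000 000 chicken lunches per human life
```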
Possible methodological problems:
1) Scope insensitivity: maybe people say that 1 000 000 chickens are worth less than a human simply because they cannot imagine what "1 000 000" actually means; they only imagine about a dozen chickens when making the emotional judgement. On the other hand, there are people who kill large numbers of chickens as part of their profession, so they would have a near-mode idea of what it means. How many people would be willing to do such a profession, though?
2) How much is the desire to eat chicken a result of cultural brainwashing? Do people in countries where vegetarianism is normal agree that having a chicken instead would increase the value of their lunch by 10%? That is, how much is "wanting to eat a chicken" actually wanting to eat "a chicken", as opposed to simply wanting to eat "the same thing as yesterday".
Replies from: entirelyuseless, Good_Burning_Plastic, Good_Burning_Plastic↑ comment by entirelyuseless · 2017-07-06T13:25:08.465Z · LW(p) · GW(p)
I agree with this calculation, except that I think that the discrepancy between the value of a human life and the value of a chicken could be even greater; as I said, I think a human could be worth up to 100,000,000 times the value of the chicken.
How many people would be willing to do such a profession, though?
I don't think this question is relevant. I would not be willing to be a trash collector (without much higher pay than usual), but have no moral objection to it. In the same way I would not be willing to be an animal slaughterer (without much higher pay than usual), but have no moral objection to it. And the reasons are the same: disgust reaction, not morality.
Do people in countries where vegetarianism is normal agree that having a chicken instead would increase the value of their lunch by 10%?
Since I make fairly intense efforts to save money, occasionally even by not eating meat, I know perfectly well that in general my preference for eating meat and other animal products is strong enough to make me spend nearly double on food. So it is likely worth significantly more than 10% of that value. This is not likely to be "cultural brainwashing," since I eat more than is considered healthy etc. If anything I have to resist cultural pressures to persist in my behavior.
↑ comment by Good_Burning_Plastic · 2017-07-07T09:07:35.209Z · LW(p) · GW(p)
How large a part of "the value of a human's life" is "having lunch, in general, as opposed to only having a breakfast and a dinner every day of your life"? Let's say it's somewhere between 1/10 and 1/100,
I.e. you'd take a 1% chance of being killed straight away over a 100% chance of never being allowed to have lunch again, but you'd take the latter over a 10% chance of being killed straight away?
...Huh. Actually, rephrasing it this way made the numbers sound less implausible to me.
↑ comment by Good_Burning_Plastic · 2017-07-07T09:15:50.404Z · LW(p) · GW(p)
Putting these numbers together, the value of "having a chicken for a specific lunch" is about 1 / 1 000 000 of the value of a human life.
I'd estimate that as ((amount you're willing to pay for a chicken lunch) - (amount you're willing to pay for a vegan lunch))/(statistical value of life). But that's in the same ballpark.
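For a concrete (and entirely made-up) example of plugging numbers into that formula:

```python
# Hypothetical numbers, purely to illustrate the formula above.
chicken_lunch_price = 10.0        # what someone might be willing to pay for a chicken lunch
vegan_lunch_price = 8.0           # what they might be willing to pay for a vegan lunch instead
statistical_value_of_life = 9e6   # a commonly cited order of magnitude, in dollars

fraction_of_a_life = (chicken_lunch_price - vegan_lunch_price) / statistical_value_of_life
print(fraction_of_a_life)  # ~2e-7, the same ballpark as the 1 / 1 000 000 estimate
```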
↑ comment by phonypapercut · 2017-06-30T04:31:40.134Z · LW(p) · GW(p)
Now, I'm not addressing those that say morality is subjective and those that live solely for themselves.
I'd wager those not addressed are more numerous than you think, especially among lurkers.
I'm not confident that this better accounts for the disparity between your expectations and the survey numbers than confused altruists, but the thought occurs.
Replies from: Zarm↑ comment by Zarm · 2017-06-30T13:56:05.091Z · LW(p) · GW(p)
I didn't think they weren't numerous. There just isn't a point debating morality with someone who says "morality is subjective." I usually leave those people alone.
I don't think I consciously had this thought though, so thank you; that actually could be a different explanation.
↑ comment by Dagon · 2017-06-27T15:14:32.052Z · LW(p) · GW(p)
You're not addressing me, as I say morality is subjective. However, even within your stated moral framework, you haven't specified the value range of a marginal animal life. I'm extremely suspicious of arguments that someone else's lives (including factory-farmed animals') are of negative value. If you think that they're lower value than other possible lives, but still positive, then the equilibrium of creating many additional lives, even with suffering, is preferable to simply having fewer animals on earth.
So yes, suffering is worse than contentment. Is it worse than never having existed at all? I don't know, and suspect not.
Replies from: Jiro↑ comment by Jiro · 2017-06-28T20:45:48.565Z · LW(p) · GW(p)
Arguing that creating lives has positive value and that therefore it is good to create them quickly leads into Repugnant Conclusion territory.
Replies from: Dagon↑ comment by Dagon · 2017-06-28T20:53:13.478Z · LW(p) · GW(p)
Well, repugnant != incorrect, but even if I were to accept that end-state as undesirable (and I actually do), the limit doesn't describe the current margin. It's perfectly consistent to believe that we shouldn't sacrifice great lives to have more mediocre lives while believing we can correctly make additional mediocre lives while keeping the great ones.
"More animals than now" is a different position than "as many animals as possible, even if we have to crowd out humans and happier animals". I'll argue for the first, not for the second.
↑ comment by ChristianKl · 2017-06-27T09:39:28.971Z · LW(p) · GW(p)
A while ago, we did Double Crux on the topic at an LW meetup. We all agreed that we would prefer that animals suffer less. A key question is how becoming vegan compares to other possible interventions for reducing animal suffering. The discussion was a while ago, and at the time the numbers weren't high enough to argue that it's important to switch to being vegan.
↑ comment by entirelyuseless · 2017-07-01T00:20:01.704Z · LW(p) · GW(p)
I think I understand your basic mistake. You seem to think that morality is objective in a way which is unrelated to the particular kinds of beings that have that morality. So for example you might think that killing innocent persons is wrong regardless of the particular history of those persons and their species.
This is a mistake. For example, if people had evolved in such a way that reproduction required the killing of one of the persons, but it consistently resulted in the birth of twins, then killing a person in this context would not be wrong. This is not because morality is or would be subjective, but because it would be objectively right for that species existing in that context.
In the same way, people have evolved as meat eaters. They consequently evolved to assign extremely low value to the lives of other edible creatures, and this is objectively correct for the human species, even if it might not be correct for some other kind of being.
Replies from: gilch↑ comment by gilch · 2017-07-01T05:53:45.787Z · LW(p) · GW(p)
Nope, I still think that's wrong. It can't be helped until they develop better technology maybe, but it's wrong. The species in Greg Egan's Orthogonal series was like that. They eventually figured out how to reproduce without dying.
There are things about ourselves that evolution did to us that we ought to change. Like dying of old age, for example. Evolution is not moral. It is indifferent. The Sequences illustrate this very clearly.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-01T13:36:56.514Z · LW(p) · GW(p)
It can't be helped until they develop better technology maybe, but it's wrong.
That's not how morality works. If you say that something is wrong, it necessarily follows that the opposite is right. So if it's still wrong in that situation, they should go extinct, rather than waiting for better technology. If something "can't be helped," then that thing cannot be morally wrong, since you cannot blame someone for doing something that cannot be helped, while moral wrongness is something blameworthy.
The correct way to describe this situation is that it is right to kill for the sake of reproduction, but this is not an ideal situation, and it is true that they should hope they will be able to change it later.
In the same way, I already said that other things being equal, I prefer that other animals suffer less. So when we have technology that enables us to get equal or superior utility without eating other animals, we will stop eating them. Meanwhile, it is right to eat them, just as it is right for those people to kill.
There are things about ourselves that evolution did to us that we ought to change.
I agree, but the point is not relevant, since it means that unfortunate situations evolve, not wicked situations. I have a post on that here.
Evolution is not moral. It is indifferent. The Sequences illustrate this very clearly.
Eliezer was very strongly opposed to the idea that morality is an abstract truth that has nothing to do with what humans have actually evolved to do.
Replies from: gilch↑ comment by gilch · 2017-07-02T00:14:41.189Z · LW(p) · GW(p)
That's not how morality works. [...] moral wrongness is something blameworthy.
We might be arguing word definitions at this point, but if your definition is "blameworthiness", then I think I see what you mean.
If you say that something is wrong, it necessarily follows that the opposite is right.
What? No it doesn't! Reversed stupidity is not intelligence. Neither is reversed immorality morality. The foolhardy action in battle is wrong, therefore, the cowardly action is right? The right answer is not the opposite. The courageous action is somewhere in between, but probably closer to foolhardy than cowardly.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-02T00:22:56.133Z · LW(p) · GW(p)
The opposite of doing wrong is NOT doing wrong, and is also doing right. If it is wrong to kill to reproduce in the situation under discussion, it is right not to kill -- that is, it is right not to reproduce at all. But this is false, so it is right to kill in that situation.
We might be arguing word definitions at this point,
Indeed. If you say "such and such is morally wrong, but not blameworthy," then you are definitely not speaking of morally wrong as I or any normal person means it.
Replies from: gilch↑ comment by gilch · 2017-07-02T22:55:31.581Z · LW(p) · GW(p)
The opposite of doing wrong is NOT doing wrong, and is also doing right
You deny the existence of morally neutral acts? There's a difference between "not blameworthy" and "praiseworthy".
If you say "such and such is morally wrong, but not blameworthy
That's not exactly what I said. But I'm not so confident that normal persons entirely agree with each other on such definitions. If an insane person kills another person, we may not call that blameworthy (because the insane person is not a competent moral agent), but we still call the act itself "wrong", because it is unlawful, has predictably bad consequences, and would be blameworthy had a (counterfactually) competent person done it. I hear "normal person"s use this kind of definition all the time.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-04T01:22:57.826Z · LW(p) · GW(p)
You deny the existence of morally neutral acts?
There are acts which are neutral in the abstract, which can sometimes be good and sometimes bad. But particular acts are always one or the other. This is obvious, since if an act contributes to a good purpose, and there is nothing bad about it, it will be good. On the other hand, if it contributes to no good purpose at all, it will be bad, because it will be a waste of time and energy.
I think a normal person would be more likely to say that an act by an incompetent person "would" be wrong, if it were done by a competent person, rather than saying that it "is" wrong. But I don't think there is much disagreement there. I agree that in order to be blameworthy, a person has to be responsible for their actions. This makes no difference to the scenario under discussion, because people would be purposely reproducing. They could just not reproduce, if they wanted to; so if the act were wrong, they would be morally obliged not to reproduce, and this is false.
Replies from: gilch↑ comment by gilch · 2017-07-04T18:16:36.043Z · LW(p) · GW(p)
But particular acts are always one or the other. This is obvious, since if an act contributes to a good purpose, and there is nothing bad about it, it will be good. On the other hand, if it contributes to no good purpose at all, it will be bad, because it will be a waste of time and energy.
You can't have this both ways. You define the morality of an act not by its consequence, but by whether the agent should be blamed for the consequence. But then you also deny the existence of morally neutral acts based on consequence alone. Contradiction.
Moral agents in the real world are not omniscient, not even logically omniscient. Particular acts may always have perfect or suboptimal consequences, but real agents can't always predict this, and thus cannot be blamed for acting in a way that turns out to be suboptimal in hindsight (in cases where the prediction was mistaken).
It sounds like you're defining anything suboptimal as "bad", rather than a lesser good. If you do accept the existence of lesser goods and lesser evils, then replace "suboptimal" with "bad" and "perfect" with "good" in the above paragraph, and the argument still works.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-07-04T19:51:31.468Z · LW(p) · GW(p)
You can't have this both ways. You define the morality of an act not by its consequence, but by whether the agent should be blamed for the consequence. But then you also deny the existence of morally neutral acts based on consequence alone. Contradiction.
There is no contradiction. If you reasonably believe that no good will come of your act, you are blameworthy for performing it, and it is a bad act. If you reasonably believe good will come of your act, and that it is not a bad act, you are praiseworthy.
↑ comment by Alicorn · 2017-06-28T02:14:24.061Z · LW(p) · GW(p)
Animals are not zero important, but people are more important. I am a pescetarian because that is the threshold at which I can still enjoy an excellent quality of life, but I don't need to eat chicken fingers and salami to reach that point. Vegans are (hopefully) at a different point on this tradeoff curve than I am and meat-eaters (also hopefully) are at a different point in the other direction.
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:12:14.965Z · LW(p) · GW(p)
I see what you're saying but I have to disagree. I agree that humans are worth more. Here's the thing though. You have to compare the numbers. This isn't one animal to one human, where I WOULD pick the human. The fact is that 60+ billion animals are slaughtered each year. And as we both probably know, at that point it's just a statistic, but take a moment to think about how much that really is. There's no other number comparable to it.
While I applaud that you are pescetarian, I think more can be done and more should be changed. I think that if you pushed yourself to change, as that's all rationalism is about, it could be just as excellent a quality of life for yourself, if not better. There are so many foods out there.
I would honestly make the argument that the majority of meat-eaters are not on the curve, but rather willfully ignorant of how bad factory farming is.
Replies from: ChristianKl, Alicorn↑ comment by ChristianKl · 2017-06-29T21:33:43.379Z · LW(p) · GW(p)
There's no other number comparable to it.
What is this supposed to mean? That you don't know other big numbers?
I think that if you pushed yourself to change, as that's all rationalism is about
No, rationalism isn't centrally about pushing oneself to change. Where did you get that idea?
Replies from: Zarm↑ comment by Zarm · 2017-06-29T21:45:00.590Z · LW(p) · GW(p)
Oh come on. You know what I meant with the first part. There's no number of deaths in history comparable to this number.
Where did I get that idea? Frequently quoted on this site is, "Not every change is improvement, but improvement starts with a change" or something to that effect. This site is all about mitigating cognitive biases as well as related fields, so it IS about change. Learning about biases and mitigating them is all about change. Or maybe I was under the false assumption that the people wanted to mitigate the biases and in reality they just want to learn about them.
Replies from: Elo, ChristianKl↑ comment by Elo · 2017-06-29T22:03:15.479Z · LW(p) · GW(p)
I was under the false assumption that the people wanted to mitigate the biases and in reality they just want to learn about them.
Careful now. If you just try to kick up shit people will start ignoring you.
Try: "I am confused because..."
Replies from: Zarm↑ comment by Zarm · 2017-06-30T02:35:23.658Z · LW(p) · GW(p)
I thought I started out fine. I'm not trying to kick shit up. The other person said "That you don't know other big numbers?" so I responded with the same tone.
Isn't mitigating biases change?
Replies from: Elo↑ comment by Elo · 2017-06-30T02:58:34.158Z · LW(p) · GW(p)
It appears that you took ChristianKl's comment to have an inflammatory tone. C could have said better things, and that's up to him to be better in the future too.
As a side point - this is how a traditional flame war starts.
"There's no other number comparable to it." (hyperbole)
"What is this supposed to mean? That you don't know other big numbers?" (challenge)
"Oh come on." (objection)
A shortcut to better discussion here is to have very little of the hyperbole on either side. He might be giving you shit for the hyperbole, but he didn't escalate, whereas
"Or maybe I was under the false assumption that the people wanted to mitigate the biases and in reality they just want to learn about them."
is escalating.
↑ comment by ChristianKl · 2017-06-29T22:02:51.205Z · LW(p) · GW(p)
Oh come on. You know what I meant with the first part.
To me the expressed sentiment feels like talking to someone without a math background who's impressed by big numbers and who generally knows no numbers in that category. $60 billion for example is near the NIH budget.
If I want to focus on deaths, the number of bacteria that die within me in a year is likely higher than 60 billion.
This site is all about mitigating cognitive biases as well as related fields, so it IS about change.
It's interesting that in your reply you don't defend the idea that this website is supposed to be about pushing for change, but a more general one: that this website is about valuing change.
Creating internal alignment through a CFAR technique like internal double crux can lead to personal change but there's no pushing involved.
Replies from: gilch, Zarm↑ comment by gilch · 2017-07-02T23:02:10.711Z · LW(p) · GW(p)
Rationalists should win. We do care about instrumental rationality. Epistemic rationality is a means to this end. Doesn't that mean "change"?
↑ comment by Zarm · 2017-06-30T02:32:12.983Z · LW(p) · GW(p)
It's interesting that in your reply you don't defend the idea that this website is supposed to be about pushing for change, but a more general one: that this website is about valuing change.
I don't follow. I did defend that it's about change.
Replies from: username2, ChristianKl↑ comment by username2 · 2017-06-30T13:05:52.470Z · LW(p) · GW(p)
There's a difference between "here's an argument for veganism, take it or leave it" and "you guys aren't rationalists because you're not adopting my favored position."
Replies from: Zarm↑ comment by ChristianKl · 2017-06-30T12:43:20.028Z · LW(p) · GW(p)
You didn't defend that it's about "pushing to change".
↑ comment by Alicorn · 2017-07-03T04:32:18.779Z · LW(p) · GW(p)
I'm aware that you disagree, that being the premise of the thread, but your argument does not engage with my reasoning, to a degree that makes me concerned that you were not looking for perspectives but targets. Consider:
Trees are not zero important, but people are more important. (I think most people would agree with this?)
While I would not go around gratuitously killing trees for trivial reasons, as long as no qualitative negative effect on the ecosystem or somebody's property or something like that were on the line I would not hesitate to sacrifice arbitrary numbers of trees' lives for even one human's even non-mortal convenience. The trees don't matter in the right way. I still think it would be bad to kill sixty billion trees, but not that bad.
I said "hopefully" because I agree that most people are not consciously or even implicitly finding a place on the QoL/animal suffering tradeoff curve, but just using defaults. I agree that they should not mindlessly use defaults in this way and that most people should probably use fewer animal products than they do. I disagree with the rest of your position as I understand it.
Replies from: Zarm↑ comment by Zarm · 2017-07-03T13:44:44.707Z · LW(p) · GW(p)
I'd like to hear the argument for why trees' lives are worth anything. Sure, they have instrumental value, but that's not what we're talking about. I'm arguing that trees are worth 0 and that animals are comparable to humans. Trees aren't conscious. Many animals are.
Replies from: Alicorn↑ comment by Alicorn · 2017-07-03T23:51:09.855Z · LW(p) · GW(p)
I think if you want to have this conversation you should not start a thread by asking for "perspectives on veganism" from people who are not vegans. It would be more honest to announce "I'm a vegan and invite you to squabble with me!"
Replies from: Zarm↑ comment by Zarm · 2017-07-04T01:49:09.475Z · LW(p) · GW(p)
Being perfectly honest, I actually don't understand what's wrong with starting with those words. Maybe this is a failure of communication on my part. I do understand that I shouldn't have said 'so surprised' and some of the other stuff, but what's wrong with asking, "Can I get your guys' perspectives on veganism?"
"I'm a vegan and invite you to squabble with me!"
I'd rather debate things coherently as that's what rationalism is about. I think I'm done here at this point though because not much is getting through on either side. Some of the replies I'm getting are typical (plant, "natural," comparing animals to rocks), even fallacious, which is probably why I give off an irritated vibe, which doesn't help either party when trying to find the truth.
↑ comment by MrMind · 2017-06-27T07:00:58.421Z · LW(p) · GW(p)
I do think that unnecessary suffering is wrong, but I also lean toward the consequentialist side and determining what is "unnecessary" is very difficult. I do not consider myself particularly altruistic, although I tend to have a high degree of empathy towards other human beings. I'm also a Duster and not a Torturer, which means I don't believe in the additivity of suffering.
On one side, I literally cannot say how much my arguments are the products of backward rationalization from not wanting to abandon meat.
On the other side, I have many problems with standard veganism.
First, they usually make a set of assumptions that might seem obvious (and in there lies the danger), but are sometimes falsified. One such assumption is the connection between not consuming meat and reducing animal suffering. A simple, direct boycott for example is either too naive to work (it can even suffer from the Cobra effect) or morally wrong from a consequentialist point of view.
Second, the very notion that animals suffer more in intensive environments than in free-range farms is doubtful: for chickens this is hardly true, for example, and the notion is blurry for force-fed geese.
↑ comment by Zarm · 2017-06-29T21:33:32.769Z · LW(p) · GW(p)
I have to say I really think it's the backwards rationalization. Everyone here eating meat has a strong drive to want to keep eating it, so there is a lot of motivation to argue that way.
As for your first point, I see what you're saying, but obviously not all vegans think that. A lot of the point is getting the message across to other people, and you can't make the case if you yourself eat meat.
I'm against all animal farms, so I really don't know how to address that. I mean, it's a pretty easy argument to say they suffer more in the intensive environments. They're in cages and they're abused their whole lives. No room to move and only pain.
Replies from: MrMind, username2↑ comment by MrMind · 2017-07-03T07:46:25.417Z · LW(p) · GW(p)
I have to say I really think it's the backwards rationalization
Well, that's the obvious outside view. On the other side, even if I am rationalizing, that doesn't mean I haven't come up with some good rebuttal.
As for your first point, I see what you're saying, but obviously not all vegans think that.
Well, I've yet to encounter a vegan who claims to be doing more for animals than abstaining from doing evil. If reducing suffering is the real reason, I see a surprising lack of effective action.
I mean, it's a pretty easy argument to say they suffer more in the intensive environments.
It's pretty easy to disprove that, too. Take chickens, for example. When raised in a farming environment, they have access to better food and better health care, and they are not subjected to the pecking order. The only thing they have less of is space, but modern farms have ampler cages, and it's not clear to me that a chicken would prefer free roaming in the wild to the more comfortable existence on a chicken farm.
↑ comment by username2 · 2017-06-30T13:13:24.142Z · LW(p) · GW(p)
MrMind linked to the underlying moral argument you are making, the duster vs the torturer, which is by no means a settled position. This forum is full of people, such as myself and MrMind, who prefer 3^^^3 dust-in-the-eye events over the prolonged torture of one individual. As applied to this scenario, that means that we do not accept that there exists some N units of chicken happiness that equals 1 unit of human happiness, even for large values of N.
Replies from: Zarm↑ comment by Zarm · 2017-06-30T13:37:59.384Z · LW(p) · GW(p)
Ah, I see the argument! That's interesting; I hadn't heard it put like that, and I would understand if you thought the chicken's life was near worthless. However, I'm going to challenge you on saying that the chicken's life is worth that minimal amount. Chickens are conscious and feel pain and pleasure. Sure, you could rationalize and say "oh but it's different from humans," but on what grounds? There's nothing to make you think this.
I can move on to chickens, but let's talk about pigs for instance because they are easier. I find it very hard to believe that a pig's life is worth the minimal amount that you say. Pigs are around the intelligence of dogs. They can problem solve, they can form emotional bonds, and they have preferences. They experience life, and they are happy on a bright breezy day, or they suffer if they are abused, as they are. Going back to dogs, dogs have been shown to be empathetic. This shows up in how they understand how to deceive: they will lead humans to a lesser reward if they are in turn rewarded less. I may have to find this study. The other thing that was shown recently is that dogs are self-aware. The mirror test is flawed, and not all animals go primarily by sight; dogs, for instance, go by smell and hearing. There was a test on smell that showed that dogs are self-aware, as they understand their own scent versus that of another dog. This leads to the conclusion that we should do more tests of different kinds, because we were leaving out key information.
Also, "pigs" becomes some abstract entity; it's easy to label them. When "pigs" comes to mind you just think of the average pig, rather than the sum of a bunch of different individuals, which allows you to de-empathize with them. Referring to them as "it" also allows you to de-empathize.
As for chickens, they are probably less emotional than pigs, but they are not as brainless as you would guess. They have internal lives; they get joy from things and suffer as well. There was a study where chickens were shown to exhibit self-control in order to gain access to more food.
Replies from: username2↑ comment by username2 · 2017-06-30T22:34:21.561Z · LW(p) · GW(p)
Hrm, I think you're still not getting it. It's not that a chicken, pig, or cow's life is worth some minimal but comparable amount. Because even then there would be some threshold where N chickens, pigs, or cow happiness-meters (for some suitably large N) would be worth 1 human's. The position is that they are utterly incomparable for the purpose of moral statements. This is a non-utilitarian position. What is rejected is the additive property: you can't take two bad events, add their "badness" together, and argue that they are worse than some other single event that is by itself rated worse than one of the originals.
Some non-utilitarians say that certain utilities are of different classes and incomparable. Others say that utilities are comparable but don't add linearly. Others don't know but simply say that torturing to prevent dust in the eyes of any number of people just plain doesn't feel right and any moral framework that allows that must be suspect, without offering an alternative.
In any of those cases, establishing the moral value of an animal is not obviously relevant to whether it is moral for humans to eat them.
Replies from: Zarm↑ comment by Zarm · 2017-06-30T23:17:44.096Z · LW(p) · GW(p)
It's not that a chicken, pig, or cow's life is worth some minimal but comparable amount. Because even then there would be some threshold where N chickens, pigs, or cow happiness-meters (for some suitably large N) would be worth 1 human's.
I'm arguing they are comparable. See, I don't think the N is that large.
is not obviously relevant to whether it is moral for humans to eat them.
Sure it is. That's the crux of the matter.
Replies from: username2↑ comment by username2 · 2017-06-30T23:33:13.347Z · LW(p) · GW(p)
In non-utilitarian morality 1 + 1 =/= 2. Sometimes 1 + 1 = 1. Does that make sense?
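One toy way to make the non-additive idea concrete (a sketch of one possible formalization, not necessarily the position anyone here actually holds): aggregate small harms with a function that saturates at a cap, so that no number of dust-speck-sized harms ever adds up past that cap.

```python
import math

# Toy non-additive aggregator: total badness saturates at a cap, so arbitrarily
# many tiny harms can never sum past it. All numbers here are made up.
def aggregate_badness(harms, cap=1.0):
    # each harm is in [0, cap); the total approaches the cap but never reaches it
    return cap * (1 - math.prod(1 - h / cap for h in harms))

dust_speck = 1e-9
torture = 100.0  # rated far above the cap on "many small harms"

specks_total = aggregate_badness([dust_speck] * 10**6)  # a million dust specks
print(specks_total)            # still well below the cap of 1.0
print(specks_total < torture)  # True: no number of specks outweighs the torture here
```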
Replies from: Zarm
comment by username2 · 2017-06-26T20:54:17.958Z · LW(p) · GW(p)
Are you an avid reader of non-fiction books outside your field of work? If so, how do you choose which books to read?
Replies from: Screwtape, MaryCh, sen↑ comment by Screwtape · 2017-06-27T14:31:14.010Z · LW(p) · GW(p)
Assuming one or two a month counts as "avid" then I count.
Some combination of what's on sale, what my friends/family/favourite bloggers are reading, and what's in a field I feel like I could use more background in for the fiction I write. (Anything that's all three at once gets read first, then anything with two of the three, and then whatever fits at least one of those.)
↑ comment by MaryCh · 2017-06-28T17:53:05.517Z · LW(p) · GW(p)
I sometimes buy outdated books on subjects that I remember being weird (metal alloys, ancient history) or charming/grounding (history of languages, the biological aspect of forensics etc.) from the time when I attended high school. Of course, I am not an avid reader, and sometimes I put the book aside. OTOH, my parents' library has more books than I care to read, some of them nonfiction, so I can just pull out anything and trust it will be either interesting to read or at the very least a conversation starter.
↑ comment by sen · 2017-07-02T01:12:58.089Z · LW(p) · GW(p)
Yes. I follow authors, I ask avid readers similar to me for recommendations, I observe best-of-category polls, I scan through collections of categorized stories for topics that interest me, I click through "Also Liked" and "Similar" links for stories I like. My backlog of things to read is effectively infinite.
comment by ChristianKl · 2017-07-02T06:54:39.635Z · LW(p) · GW(p)
If you assume there's an FDA that makes yes/no decisions about which drugs to approve, and you hate the p-values that they use currently, what do you think the alternative statistical FDA standard should be?
Replies from: MrMind↑ comment by MrMind · 2017-07-03T07:22:11.546Z · LW(p) · GW(p)
Clearly pre-committed methodology and explicitly stated priors. Double blinds whenever possible. Bayesian model analysis. Posterior distributions, not conclusions.
Replies from: ChristianKl↑ comment by ChristianKl · 2017-07-03T15:52:38.845Z · LW(p) · GW(p)
Do you have a specific way they should do Bayesianism in mind? How do you get the explicitly stated priors?
Replies from: MrMind↑ comment by MrMind · 2017-07-03T16:25:58.546Z · LW(p) · GW(p)
Do you have a specific way they should do Bayesianism in mind?
A specific method? No, obviously. That depends on the problem. What I would love to see is:
"H0, H1 and H2 are our prior hypothesis. We assume them to be mutually exclusive and complete. H0, based on these empirical assesments, has probability P0. H1, base on those other considerations, has probability P1. H2 intercepts all the other possible explanations and has probability 1 - P0 - P1.
We are going to use this method to analyze the data.
These are the data.
Based on the calculations, the revised probabilities for the three hypotheses are P0', P1' and P2'."
How do you get the explicitly stated priors?
How about: we only know the mean of the effect, so we suppose an exponential prior distribution. Or: we value error quadratically, so we apply a normal distribution? Or: we know that the effect stays the same at every time scale, so we are going to start from a Poisson distribution?
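A minimal sketch of what such a pre-registered Bayesian analysis could look like in practice; the hypotheses, priors, and trial data below are invented purely for illustration.

```python
from math import comb

# Toy pre-registered Bayesian analysis for a yes/no drug decision.
# Hypotheses, priors, and data are hypothetical.

n, k = 100, 47                      # imagined trial: 100 patients, 47 responders

def binom_lik(p):                   # likelihood of k responders given response rate p
    return comb(n, k) * p**k * (1 - p)**(n - k)

priors = {"H0: no effect (rate 0.30)": 0.5,
          "H1: drug works (rate 0.50)": 0.3,
          "H2: anything else": 0.2}

likelihoods = {"H0: no effect (rate 0.30)": binom_lik(0.30),
               "H1: drug works (rate 0.50)": binom_lik(0.50),
               # catch-all: a uniform prior over the response rate integrates to 1/(n+1)
               "H2: anything else": 1.0 / (n + 1)}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
for h, p in posteriors.items():
    print(h, round(p, 3))   # report the posterior distribution, not a bare conclusion
```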
Replies from: ChristianKl↑ comment by ChristianKl · 2017-07-03T16:55:08.851Z · LW(p) · GW(p)
That depends on the problem.
When it comes to a test problem, what about an antidepressant drug?
our
Who's that in the case of an FDA approval process? The person who wants his drug approved, or the FDA? If it's the person who wants his drug approved, why don't they just go into it with strong priors?
Replies from: MrMind↑ comment by MrMind · 2017-07-04T12:33:11.839Z · LW(p) · GW(p)
When it comes to a test problem, what about an antidepressant drug?
You'll need to be a lot more specific if you want a specific answer.
Who's that in the case of an FDA approval process? The person who wants his drug approved, or the FDA?
It's whoever is doing the trial.
If it's the person who wants his drug approved, why don't they just go into it with strong priors?
They will surely go in with strong priors. However, it's already like this even with frequentist methods (it just takes a different form): math cannot force honesty out of anyone. The advantage of the Bayesian approach is that priors are explicit, and others can judge them more easily.
Replies from: ChristianKl↑ comment by ChristianKl · 2017-07-04T14:19:22.885Z · LW(p) · GW(p)
The basic idea of how the FDA process works is that it's extremely predefined and doesn't allow the person who wants the approval to cherry pick statistics.
It seems like your approach is to provide more flexibility. Did I get the wrong impression?
Replies from: MrMind↑ comment by MrMind · 2017-07-04T15:06:07.826Z · LW(p) · GW(p)
I have no idea how the FDA approval process works, so if you tell me that it doesn't allow any statistical variation, then sure, I can only agree and say that the Bayesian method outlined (which is not 'mine' by any stretch of the word) is more flexible.
comment by MaryCh · 2017-06-27T14:30:37.646Z · LW(p) · GW(p)
Isn't it odd how fanon dwarves [from 'The Hobbit'] are seen as 'fatally and irrationally enamoured' with the gold of the Lonely Mountain? I mean, any other place and any other time, put an enormous heap of money in front of a few poor travellers, tell them it's theirs by right, and they would get attached to it and nobody would find it odd in the least. But Tolkien's dwarves get the flak. Why?
Replies from: knb, Screwtape, Lumifer↑ comment by knb · 2017-06-28T02:21:27.138Z · LW(p) · GW(p)
What did you mean by "fanon dwarves"? Is that just a fan interpretation or do you think Tolkien intended it? In Tolkien's idealized world, all economic motivations are marginal and deprecated. The dwarves are motivated partially by a desire for gold, but mostly by loyalty to their king and a desire to see their ancestral homeland restored to them. To the extent the treasure itself motivates Thorin & co., it causes disaster (for example his unwillingness to share the loot almost causes a battle against local men & elves.)
Replies from: MaryCh↑ comment by Screwtape · 2017-06-27T15:26:54.261Z · LW(p) · GW(p)
If there was a briefcase full of hundred dollar bills over there that someone told me was mine by right, I'd be pretty attached to it. If they then added the caveat that there was a massive dragon squatting on the thing who also believed the briefcase was theirs, I do not think I would try and steal the briefcase. Would you?
Replies from: MaryCh↑ comment by MaryCh · 2017-06-28T06:01:41.108Z · LW(p) · GW(p)
But they don't get the flak for stealing it. They get the flak for claiming it afterwards "because of the curse". It's not avarice that is the main problem - it's the being enthralled. And I can't quite get it. Why not just say "boo greedy dwarves", instead of going the whole way to "boo greedy dwarves who up and got themselves enchanted"? What does the enchanted bit do?
↑ comment by Lumifer · 2017-06-27T15:24:03.906Z · LW(p) · GW(p)
put an enormous heap of money in front of a few poor travellers
Put an enormous heap of money with a big nasty dragon on top of it in front a few poor travelers...
Replies from: MaryCh
comment by cousin_it · 2017-06-27T11:53:13.207Z · LW(p) · GW(p)
If you were offered a package deal with X% chance of your best imaginable utopia and Y% chance of all life instantly going extinct, for which values of X and Y would you accept the deal?
Replies from: Screwtape, entirelyuseless, Dagon, MaryCh↑ comment by Screwtape · 2017-06-27T17:58:31.906Z · LW(p) · GW(p)
It seems to me like the most important piece of data is your prior on the default conditions. If, for example, there was a 99% chance of the universe winding up getting tiled in paperclips in the next year then I would accept very unlikely odds on X and likely odds on Y. Depending on how likely you think an "I Have No Mouth but I Must Scream" situation is, you might 'win' even if Y happens.
Hrm. A somewhat pedantic but important question: are these chances independent of each other? For example, let's say X and Y are both 50%. Does that mean I'm guaranteed to get either X or Y, or is there a ~25% chance that both come up 'tails' so to speak and nothing happens? If the latter, what happens if both come up 'heads' and I get my utopia but everyone dies? (Assuming that my utopia involves at least me being alive =P)
The second most important piece of data to me is that X is the odds of "my best imaginable utopia", which is very interesting wording. It seems to mean I won't get a utopia better than I can imagine, but also that I don't have to compromise if I don't want to. I can imagine having an Iron Man suit, so I assume the required technology would come to pass if I won, but HPMoR was way better than I imagined it to be before reading it. Let's say my utopia involves E.Y. writing a spiritual successor to HPMoR in cooperation with, I dunno, either Brandon Sanderson or Neal Stephenson. I think that would be awesome, but if I could imagine that book in sufficient fidelity I'd just write it myself.
My probability space looks something like 10% we arrive at a utopia close enough to my own that I'm still pretty happy, 20% we screw something up existentially and die, 40% things incrementally improve even if no utopia shows up, 20% things gradually decline and get somewhat worse but not awful, 9% no real change, 1% horrible outcomes worse than "everybody dies instantly." (Numbers pulled almost entirely out of my gut, as informed by some kind of gestalt impression of reading the news and looking at papers that come to my attention and the trend line of my own life.) If I were actually presented that choice, I'd want my numbers to be much firmer, but let's imagine I'm pretty confident in them.
I more or less automatically accept any Y value under 21%, since that's what I think the odds are of a bad outcome anyway. I'd actually be open to a Y that was slightly higher than 21%, since it limits how bad things can get. (I prefer existence in pain to non-existence, but I don't think that holds up against a situation that's maximizing suffering.) By the same logic, I'm very likely to refuse any X below 10%, since that's the odds I think I can get a utopia without the deal. (Though since that utopia is less likely to be my personal best imagined case, I could theoretically be persuaded by an X that was only a little under 10%.) X=11%, Y=20% seems acceptable if barely so?
On the one hand, I feel like leaving a chance of the normal outcome is risking a 1% really bad outcome, so I should prefer to fill as much of the possible outcome space as I can with X or Y; say, X=36% and Y=64%. On the other, 41% of the outcomes if I refuse are worse than right now, and 50% are better than right now, so if I refuse the deal I should expect things to turn out in my favour. I'm trading a 64% chance of bad things for a 41% chance of bad things. This looks dumb, but it's because I think things going a little well is pretty likely - my odds about good and bad outcomes change a lot depending on whether I'm looking at the tails or the hill. Since I'm actually alright (though not enthusiastic) with things getting a little worse, I'm going to push my X and Y halfway towards the values that include things changing a little bit. Let's say X=45% and Y=55%? That seems to square with my (admittedly loose) math, and it feels at first glance acceptable to my intuition. This seems opposed by my usual stance that I'd rather risk moderately bad outcomes if it means I have a bigger chance of living though, so either my math is wrong or I need to think about this some more.
TLDR, I'd answer X=45% and Y=55%, but my gut is pushing for a higher chance of surviving or taking the default outcome depending on how the probabilities work.
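One rough way to sanity-check that kind of reasoning is to write the expected value out directly; the utility numbers below are entirely made up and only illustrate the method.

```python
# Hypothetical utilities for the outcome buckets described above (made-up numbers).
status_quo = {
    "utopia-ish":        (0.10,  100),
    "extinction":        (0.20, -100),
    "incremental gains": (0.40,   10),
    "gradual decline":   (0.20,  -10),
    "no real change":    (0.09,    0),
    "worse than death":  (0.01, -500),
}

def expected_value(outcomes):
    return sum(p * u for p, u in outcomes.values())

ev_refuse = expected_value(status_quo)

# Accepting the deal: X chance of best-imaginable utopia, Y chance of instant
# extinction, and the remainder falls back to the status-quo distribution.
def ev_accept(x, y, utopia_utility=120, extinction_utility=-100):
    rest = 1 - x - y
    return x * utopia_utility + y * extinction_utility + rest * ev_refuse

print(ev_refuse)              # the baseline of refusing the deal
print(ev_accept(0.45, 0.55))  # the 45/55 split discussed above
print(ev_accept(0.11, 0.20))  # a smaller deal that leaves most of the probability at the status quo
```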
Replies from: cousin_it↑ comment by entirelyuseless · 2017-06-27T15:13:05.690Z · LW(p) · GW(p)
I feel like I would accept at approximately 90/10, and would be reluctant to accept it with worse terms. But this might be simply because a 90% chance for a human being is basically where things start to feel like "definitely so."
Replies from: cousin_it↑ comment by cousin_it · 2017-06-27T15:16:43.013Z · LW(p) · GW(p)
Would you accept at 40/10? (Remaining 50% goes to status quo.)
Replies from: entirelyuseless↑ comment by entirelyuseless · 2017-06-27T15:24:08.287Z · LW(p) · GW(p)
This feels like no. Or rather, it feels about the same as 80/20, which inclines me to say no.
↑ comment by Dagon · 2017-06-27T15:05:44.816Z · LW(p) · GW(p)
I think there are close to zero humans who make this tradeoff. Scope insensitivity hits too hard.
The first question, though, is "compared to what?" If I reject the deal, what's the chance intelligence will attain utopia at some point, and what's the chance of extinction? The second question is "why should I believe this offer?"
Replies from: cousin_it, tut
comment by cousin_it · 2017-06-26T11:00:24.076Z · LW(p) · GW(p)
I wrote a post on agentfoundations about some simple MIRI-ish math. Wonder if anyone on LW would be interested.
comment by madhatter · 2017-07-01T02:11:32.741Z · LW(p) · GW(p)
Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.
Replies from: whpearson↑ comment by whpearson · 2017-07-01T09:44:21.662Z · LW(p) · GW(p)
Did you see this thread on making an on-line course? It is probably a place to co-ordinate this sort of thing.
comment by cousin_it · 2017-06-30T20:57:04.158Z · LW(p) · GW(p)
I have a very dumb question about the thermodynamic arrow of time.
The usual story is that the evolution of microstates is time-symmetric, but usually leads to more populous macrostates, pretty much by definition. The problem is that the same is true in reverse: most possible pasts of any system come from more populous macrostates as well.
For example, let's say I have a glass of hot water with some ice cubes floating in it. The most likely future of that system is uniformly warm water. But then the most likely past of that system is also uniformly warm water. WTF?
comment by cousin_it · 2017-06-30T19:23:08.714Z · LW(p) · GW(p)
I came up with another fun piece of math about LWish decision theory. 15 minute read for people who enjoy self-referential puzzles and aren't afraid of the words "Peano arithmetic". Questions welcome, as always.
comment by blankcanvas · 2017-06-30T19:14:12.001Z · LW(p) · GW(p)
I'm a high school dropout with my IQ in the low 120s to 130. I want to do my part and build a safe AGI, but it will take 7 years to finish high school and a bachelor's and master's. I have no math or programming skills. What would you do in my situation? Should I forget about AGI and do what, exactly?
If I work on a high school curriculum it doesn't feel like I am getting closer to building an AGI, and neither do I think working on a bachelor's would. I'm questioning whether I really want to do AGI work, or am capable of it, compared to, let's say, if my IQ were in the 140-160s.
Replies from: Dagon, None, username2, ChristianKl↑ comment by Dagon · 2017-06-30T23:09:06.968Z · LW(p) · GW(p)
Honestly, the best contribution that the vast majority of people (even very smart like you, or extremely smart like the 140+) can make is to live the best life you can find and support more research through encouragement (now) and contributions (years from now, when you're established at something other than AGI-building).
If you limit yourself to direct work in the field, you greatly delay your success and seriously increase the risk that you won't succeed.
↑ comment by [deleted] · 2017-07-01T11:28:21.280Z · LW(p) · GW(p)
For almost all people, their comparative advantage won't be in AI research, and they'd do more good doing whatever they're best placed to do, and donating a portion of their income.
You don't give enough detail for us to give specific suggestions, but unless you have extraordinarily compelling reasons to think that you were born to be an AI researcher, I wouldn't recommend making major life changes for the sole purpose of maybe becoming a fairly average AI researcher in ~10 years.
↑ comment by username2 · 2017-06-30T22:42:23.263Z · LW(p) · GW(p)
I dropped out of high school, and was able to take a short, 3hr test to get a high school equivalent certificate from my state (the California High School Proficiency Test). I then went to community college and transferred to a 4-year university.
Some frank advice: if you "have no math or programming skills" then you don't even know enough to know whether AI x-risk is even something worth working on. There is a ridiculous amount of biased literature available on the Internet, LW in particular, targeted towards those without strong backgrounds in computer science or AI. You may not be getting the whole story here. If this is what you feel you want to do, then I suggest taking the community college route and getting a comp sci degree from an inexpensive local university. You'll be in a better position then to judge what you want to do.
Also, all forms of IQ measurement are worthless and without any value in this context. I would suggest doing your best to forget that number.
↑ comment by ChristianKl · 2017-06-30T19:55:22.986Z · LW(p) · GW(p)
What's your age?
It sounds like you think that having a high school degree, a bachelor's degree and a master's degree is a requirement for working on AI safety. That isn't true; Eliezer Yudkowsky, for example, has none of those.
Replies from: blankcanvas↑ comment by blankcanvas · 2017-06-30T20:10:24.864Z · LW(p) · GW(p)
21-22.
The government is willing to give me (stolen from taxpayers) around $1000/month in student welfare to finish my high school degree. And a student loan at ~1.5% interest of around $1000/month and ~$500 in welfare for university. However if I work part-time alongside finishing high school, it will pretty much be what I pay in taxes. ~50% tax on freelance, and 25% on goods I purchase in the store. But that means 60 h / week.
I don't think I want to work in an unskilled labor job. If I were certain that my IQ was around 100, then I would... If I don't go to school now, I will have to spend 1-2 years learning web development on my own to get a job that way and sustain working on AGI.
I know Yudkowsky doesn't, but how would you balance work, AGI, and life?
Replies from: morganism, ChristianKl↑ comment by morganism · 2017-07-01T21:47:41.273Z · LW(p) · GW(p)
If i were your age...
I would start learning CAD software, and try and get a job locally as a machinist. Then start learning some robotics, 3D printing, and at least some Python coding. Then head out to the Mojave Spaceport, and try and get a gig in the commercial space industry.
You're young enough to maybe get a gig later in life as an on-orbit or lunar machinist and 3-D printer tech. Might as well be building the robots that the AI will be operating. That would be helping humanity's future, and giving you a unique future to live in also.
↑ comment by ChristianKl · 2017-06-30T23:01:25.476Z · LW(p) · GW(p)
I think that it's very likely that the amount of information you provide about your present position is not enough to give you qualified advice.
My intuition suggests that spending ages 21 to 28 to get a high school diploma likely isn't worth it, but I don't think I have enough information to tell you what you should do. One thing that might be valuable is to broaden your view, look at different possible paths, and actually talk to different people at length in real life.
comment by Viliam · 2017-06-28T09:59:35.946Z · LW(p) · GW(p)
In future, could cryptocurrencies become an important contributor to global warming?
An important part of the common mechanisms is something called "proof of work", which roughly means "this number is valuable, because someone provably burned at least X resources to compute it". This is how "majority" is calculated in the anonymous distributed systems: you can easily create 10 sockpuppets, but can you also burn 10 times more resources? So it's a majority of burned resources that decides the outcome.
I can imagine some bad consequences as a result of this. Generally, if cryptocurrencies became more popular (in the extreme case, if they became the primary form of money used across the planet), that would create pressure to burn insane amounts of resources... simply because if we collectively decided to burn only a moderate amount of resources, a rogue actor could burn slightly more than a moderate amount and take over the whole planet's economy.
And in the long term, the universe will probably get tiled with specialized bitcoin-mining hardware.
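To make the "proof of work" mechanism above concrete, here is a toy Python sketch (simplified: real Bitcoin double-hashes a block header against a finer-grained target, but the principle is the same). A valid nonce is evidence that, on average, about 2^difficulty_bits hashes were computed and paid for.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits.

    Finding such a nonce takes ~2**difficulty_bits hash attempts on average,
    so presenting one proves that roughly that much computation was burned.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is cheap: one hash, regardless of how hard the search was."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = proof_of_work(b"example block", difficulty_bits=16)  # low difficulty, finishes quickly
print(nonce, verify(b"example block", nonce, 16))
```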
Replies from: Pimgd, username2↑ comment by Pimgd · 2017-06-28T14:27:39.141Z · LW(p) · GW(p)
Ethereum is working on proof of stake, which boils down to "I believe that this future is what really happened, and to guarantee so, here's $1000 that you may destroy if it's not true."
https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ
Key quote for me:
"in PoW, we are working directly with the laws of physics. In PoS, we are able to design the protocol in such a way that it has the precise properties that we want - in short, we can optimize the laws of physics in our favor. The "hidden trapdoor" that gives us (3) is the change in the security model, specifically the introduction of weak subjectivity."
Replies from: None↑ comment by [deleted] · 2017-06-28T21:50:29.823Z · LW(p) · GW(p)
Is a solution in which control of the network is secured through the same private keys that secure the stores of value an option?
Replies from: username2↑ comment by username2 · 2017-06-29T07:22:30.987Z · LW(p) · GW(p)
No, you've hit the nail on the head. In fact it is a generally abstractable property that proof of stake reduces to proof of work under adversarial conditions, as those with stake grind out possible recent block histories that lead to their own stake being the one selected to choose the next blocks.
↑ comment by username2 · 2017-06-28T10:41:49.619Z · LW(p) · GW(p)
The amount of wastage from bitcoin mining pales compared to the GDP spent on traditional forms of trust. Think banking isn't contributing to global warming? Well all those office buildings have lights and electricity and back-room servers, not to mention the opportunity costs.
If you want to reduce the need for bitcoin, then reduce the need for trustless solutions. This is an open-ended political and social problem, but not one that is likely to remain unsolved forever.
↑ comment by satt · 2017-06-28T21:51:28.742Z · LW(p) · GW(p)
The amount of wastage from bitcoin mining pales compared to the GDP spent on traditional forms of trust. Think banking isn't contributing to global warming? Well all those office buildings have lights and electricity and back-room servers, not to mention the opportunity costs.
That provoked me to do a Fermi estimate comparing banking's power consumption to Bitcoin's. Posting it in case anyone cares.
Estimated energy use of banking
The service sector uses 7% of global power and produces 68% of global GDP. Financial services make up about 17% of global GDP, hence about 25% of global services' contribution to GDP. If financial services have the same energy intensity as services in general, financial services use about 25% × 7% = 1.8% of global power. World energy consumption runs at roughly 15 TW, so financial services use about 260 GW. Rounding that down semi-arbitrarily (because financial services include things like insurance & pension services, as well as banking), the relevant power consumption number might be something like 200 GW.
Estimated energy use of Bitcoin
A March blog post estimates that the Bitcoin network uses 0.774 GW to do 3250 petahashes per second. Scaling the power estimate up to the network's current hash rate (5000 petahashes/s, give or take) makes it 1.19 GW. So Bitcoin is a couple of orders of magnitude short of overtaking banking.
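For anyone who wants to redo the arithmetic, here it is as a short Python script. All inputs are the rounded figures quoted above, so treat the outputs as order-of-magnitude only.

```python
# Banking-side estimate
world_power_W        = 15e12   # global energy consumption runs at ~15 TW
services_power_share = 0.07    # service sector uses ~7% of global power
services_gdp_share   = 0.68    # service sector produces ~68% of global GDP
finance_gdp_share    = 0.17    # financial services are ~17% of global GDP

finance_share_of_services = finance_gdp_share / services_gdp_share        # ~0.25
finance_power_W = world_power_W * services_power_share * finance_share_of_services
print(f"Financial services: ~{finance_power_W / 1e9:.0f} GW")             # ~260 GW

# Bitcoin-side estimate: scale the March figure to the current hash rate
btc_power_W = 0.774e9 * (5000 / 3250)
print(f"Bitcoin network:    ~{btc_power_W / 1e9:.2f} GW")                 # ~1.19 GW

print(f"Ratio: ~{finance_power_W / btc_power_W:.0f}x")                    # a couple hundred times
```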
Replies from: bogus↑ comment by bogus · 2017-06-28T22:12:34.184Z · LW(p) · GW(p)
So Bitcoin is a couple of orders of magnitude short of overtaking banking.
Of course, BTC is also many orders of magnitude short of banking in the volume of trusted transactions it enables - this is hardly an apples-to-apples comparison! A single BTC transaction is actually rather economically costly, and this will only become more fully apparent to BTC users over time, as the current block-creation subsidy keeps dwindling further.
Now don't get me wrong, BTC and other crypto-currencies are still interesting as a heroic effort to establish decentralized trust and enable transfers of value in problematic settings where a conventional financial system is unavailable. But the amount of hype that surrounds them is rather extreme and far from justified AFAICT.
(In the long run, there is some chance that we'll come up with forms of automated "proof of work" that have beneficial side-effects, such as solving instances of interesting NP-hard problems. If so, this might reduce the collective cost of using crypto-currencies significantly, maybe even make them competitive with the traditional banking system! In fact, a prime example of this exists already although the chosen problem is little more than a mere curiosity. Clearly, we need a better characterization of what problems make for a good crypto-currency target!)
Replies from: username2, ChristianKl, Viliam↑ comment by username2 · 2017-06-28T23:10:41.017Z · LW(p) · GW(p)
This is a bogus argument (hehehe, sorry).
Bitcoin is a settlement network, used for periodic netting of positions. The fact that it is primarily used for direct payment now is chiefly because that is easy to do -- the path of least resistance -- and transactions are still cheap. As bitcoin develops it will be used more by 2nd-layer protocols that use the blockchain only for reallocation of funds among settlement parties, and for this it is unclear how much larger the bitcoin volume needs to be above current levels.
Creating proof of work schemes that have resellable secondary value actually undermines the security provided by proof of work. The security provided by proof of work is the cost of the work minus the resale value of its outputs, so it is exactly the same as, or worse than, if you just had fewer miners doing useless work and a few specialized servers doing the useful portion. It is possible, though, that we could come up with a scheme whose secondary value is purely in the commons, and that would be strictly better. For example, a proof of work that unwinds a ticking time-lock encryption, so it can be used to reliably encrypt messages to future block heights.
In terms of expected utility payout, working on such a scheme would be of very high benefit, more than most any task in any other domain I can think of, and I encourage people to take up the problem.
Replies from: bogus↑ comment by bogus · 2017-06-29T01:28:31.666Z · LW(p) · GW(p)
Bitcoin is a settlement network, used for periodic netting of positions. The fact that settlement is primarily used for direct payment now is chiefly due to the fact it is easy to do
I'm not sure that there's any real distinction between "direct payment" and "settlement". For that matter, while BTC may in fact be strictly preferable to physical/paper-based settlement in resource use (though even then I'm not sure that the difference is that great!), that's rather small consolation given the extent to which electronic settlement is used today. (In fact, contrary to what's often asserted, there seems to be no real difference between this sort of electronic settlement and what some people call a "private, trusted blockchain"; the latter is therefore not a very meaningful notion!)
↑ comment by ChristianKl · 2017-06-29T12:32:32.385Z · LW(p) · GW(p)
Switching from proof of work to proof of stake for most transactions seems to be a more likely solution to the problem.
↑ comment by Viliam · 2017-06-29T10:09:06.926Z · LW(p) · GW(p)
forms of automated "proof of work" that have beneficial side-effects, such as solving instances of interesting NP-hard problems
Would breaking cryptography be a good example of this? Like, someone enters a bunch of public keys into the system, and your "proof of work" consists of finding the corresponding private keys. Then we could construct cryptocurrencies based on hacking competing cryptocurrencies; that could be fun!
(Yeah, I guess the important obstacle is that you want the "proof of work" to scale depending on the needs of the network. Too difficult: the process is slow. Too simple: there are too many outcomes to handle. Must adjust automatically. But you can't provide enough real private keys with an arbitrary difficulty.)
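For what it's worth, the "must adjust automatically" part is the well-solved piece: difficulty retargeting. A toy sketch in the spirit of Bitcoin's rule (using Bitcoin's 2016-block / 10-minute constants and its 4x clamp; the numbers in the example call are made up):

```python
def retarget(old_difficulty: float,
             actual_seconds: float,
             target_seconds: float = 2016 * 600) -> float:
    """Raise difficulty if the last period's blocks came too fast, lower it if too slow."""
    ratio = target_seconds / actual_seconds
    # Clamp the adjustment, as Bitcoin does, so one period can't swing difficulty too far.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Blocks arrived twice as fast as intended, so difficulty doubles.
print(retarget(1_000_000, actual_seconds=2016 * 300))   # -> 2000000.0
```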
Replies from: username2↑ comment by username2 · 2017-06-30T13:41:18.619Z · LW(p) · GW(p)
Breaking cryptosystems? That exact construct would be useful for so much more than that. It'd let you have a ticking time-lock encryption service -- I encrypt a message using the keys from each block up to some block number N in the future. You now have a message that will decrypt at a specified time in the future, automatically and without intervention. That is a tremendous public resource, to say nothing of its use within the system as a smart contracting primitive.
Unfortunately, known methods of achieving this (e.g. breaking low-bit EC keys using Pollard's rho algorithm) don't meet the basic requirements of a proof-of-work system, the chief problem being that they are not progress-free.
comment by halcyon · 2017-06-28T07:41:41.939Z · LW(p) · GW(p)
Edit: A close reading of Shramko 2012 has resolved my confusion. Thanks, everyone.
I can't shake the idea that maps should be represented classically and territories should be represented intuitionistically. I'm looking for logical but critical comments on this idea. Here's my argument:
Territories have entities that are not compared to anything else. If an entity exists in the territory, then it is what it is. Territorial entities, as long as they are consistently defined, are never wrong by definition. By comparison, maps can represent any entity. Being parts of a map, these represented entities are intended to be compared to the territory of which it is a map. If the territory does not have a corresponding entity, then that mapped entity is false insofar as it is intended as a map.
This means that territories are repositories of pure truth with no speck of falsehood lurking in any corner, whereas maps represent entities that can be true or false depending on the state of the territory. This corresponds to the notion that intuitionism captures the concept of truth. If you add the concept of falsehood or contradiction, then you end up with classical logic or mathematics respectively. First source I can think of: https://www.youtube.com/playlist?list=PLt7hcIEdZLAlY0oUz4VCQnF14C6VPtewG
Furthermore, the distinction between maps and territories seems to be a transcendental one in the Kantian sense of being a synthetic a priori. That is to say, it is an idea that must be universally imposed on the world by any mind that seeks to understand it. Intuitionism has been associated with Kantian philosophy since its inception. If The Map is included in The Territory in some ultimate sense, that neatly dovetails with the idea of intuitionists who argue that classical mathematics is a proper subset of intuitionistic mathematics.
In summary, my thesis states that classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system. In contrast, intuitionistic logic is the logic of describing a territory without seeking to compare it to something else. Intuitionistic type theory turns up type errors, for example, when such a description turns out to be inconsistent in itself.
Where did I take a wrong turn?
Replies from: g_pepper, g_pepper, gjm↑ comment by g_pepper · 2017-07-01T01:23:58.076Z · LW(p) · GW(p)
Also possibly problematic is the dichotomy described by the summary:
classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system. In contrast, intuitionistic logic is the logic of describing a territory without seeking to compare it to something else. Intuitionistic type theory turns up type errors, for example, when such a description turns out to be inconsistent in itself.
seems more appropriate as a contrast between scientific/Bayesian reasoning, which strives to confirm or refute a model based on how well it conforms to observed reality, and deductive (a priori) reasoning, which looks only at what follows from a set of axioms. However, one can reason deductively using either classical or intuitionistic logic, so it is not clear that intuitionistic logic is better suited than classical logic for "describing a territory without seeking to compare it to something else".
Replies from: halcyon↑ comment by halcyon · 2017-07-01T15:33:05.375Z · LW(p) · GW(p)
I don't see how distinguishing between deductive and inductive reasoning is mutually exclusive with the map/description distinction. That is to say, you could have each of the following combinations: deductive map, deductive description, inductive map, and inductive description.
Edit: On second thought, I see what you were saying. Thanks, I will think about it.
↑ comment by g_pepper · 2017-07-01T01:09:51.303Z · LW(p) · GW(p)
I can't shake the idea that maps should be represented classically and territories should be represented intuitionistically.
But, it seems to me that a map is a representation of a territory. So, your statement “maps should be represented classically and territories should be represented intuitionistically” reduces to “representations of the territory should be intuitionistic, and representations of those intuitionistic representations should be classical”. Is this what you intended, or am I missing something?
Also, I’m not an expert in intuitionistic logic, but this statement from the summary sounds problematic:
classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system
But, the concept of falsehood is integral to both classical and intuitionistic logic. Intuitionistic logic got rid of the principle of the excluded middle but did not get rid of the concept of falsity.
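To make that concrete, a minimal Lean 4 sketch (using Classical.em as the stand-in for the classical axiom): negation and falsity work fine constructively, and even the double negation of excluded middle is constructively provable, but excluded middle itself is not.

```lean
-- Falsity and negation are available intuitionistically: ¬p is just p → False.
example (p : Prop) (hp : p) (hnp : ¬p) : False := hnp hp

-- The double negation of excluded middle is provable constructively.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))

-- Excluded middle itself requires the classical axiom.
example (p : Prop) : p ∨ ¬p := Classical.em p
```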
Replies from: halcyon↑ comment by halcyon · 2017-07-01T12:05:09.547Z · LW(p) · GW(p)
Thanks.
Regarding falsehood: I would say that intuitionistic logic ejects falsehood from its formal system in the specific sense mentioned in my link. I could dig up more references if you want me to. I agree that there are many reasonable interpretations in which it does not do so, but I don't think those interpretations are relevant to my point. I only intended to argue that proof by contradiction is the strategy of correcting a map as opposed to describing a territory.
Regarding mapping versus description: I agree that my motivations were semantic rather than syntactic. I just wanted to know whether the idea I had made sense to others who know something of intuitionistic logic. I guess I have my answer, but for the sake of clarifying the sense I was going for, here's the example I posted below:
Suppose you have a proposition like, "There is a red cube." Next, you learn that this proposition leads to a contradiction. You could say one of two things:
1. This proves there is no red cube.
2. This means the context in which that proposition occurs is erroneous.
Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
Replies from: g_pepper↑ comment by g_pepper · 2017-07-02T20:38:31.087Z · LW(p) · GW(p)
Regarding mapping versus description: I agree that my motivations were semantic rather than syntactic. I just wanted to know whether the idea I had made sense to others who know something of intuitionistic logic.
Understood. But, the point that I raised is not merely syntactic. On a fundamental level, a description of the territory is a map, so when you attempt to contrast correcting a map vs rejecting a description of a territory, you are really talking about correcting vs. rejecting a map.
Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
Yes, in the case of number 1 you have proved via contradiction that there is no red cube, and in #2 you have concluded that one or more of your assumptions is incorrect (i.e. that your map is incorrect). However, this is not a map vs. territory distinction; in both cases you are really dealing with a map. To make this clear, I would restate as:
1 is the strategy of correcting the map and 2 is the strategy of rejecting the map as inaccurate without seeking to correct it.
So, I guess I don't really have anything additional to add about intuitionistic logic - my point is that when you talk about a description of the territory vs. a map, you are really talking about the same thing.
Replies from: halcyon↑ comment by halcyon · 2017-07-02T23:05:35.847Z · LW(p) · GW(p)
Thanks. The next thing I was going to say is that the intuitionistic strategy of neutrality with regard to affirming or negating propositions in worlds until proof comes along roughly (i.e. in a sense to be argued for later) differentiates the classical and intuitionistic approaches like so:
The classical approach is good for having one "world" description that is almost certainly inaccurate. This can be gradually updated, making it represent one map.
The intuitionistic approach is good for having multiple world descriptions that are almost certainly incomplete. Their contours are filled in as more information becomes available, and they are rejected as inaccurate when they lead to contradictions, making each one a holistic representation of a possible territory. (Shoehorning the same approach into classical logic is possible, but you have to create a set of conventions to do so. These conventions are not universal, making the approach less natural.)
Something like that anyway, but Shramko 2012 has put a lot more thought into this than I have: http://kdpu.edu.ua/shramko/files/2012_Logic_and_Logical_Philosophy_What_is_a_Genueny_Intuitionistic_Notion_of_Falsity.pdf I defer to expert opinion here.
↑ comment by gjm · 2017-06-28T12:49:31.395Z · LW(p) · GW(p)
I can't tell you where you took a wrong turn, because I don't know whether you did. But I can tell you where you lost me -- i.e., where I stopped seeing how each statement was an inference drawn from its predecessors plus uncontroversial things.
The first place was when you said "This corresponds to the notion that intuitionism captures the concept of truth." How does it correspond to that? "This" is the idea that the territory has no errors in it, whereas the map has errors, and I don't see how you get from that to anything involving intuitionism.
... Oh, wait, maybe I do? Are you thinking of intuitionism as somehow lacking negation, so that you can only ever say things are true and never say they're false? Your "summary" paragraph seems to suggest this. That doesn't seem like it agrees with my understanding of intuitionism, but I may be missing something.
The second time you lost me was when you said "If The Map is included in The Territory [...] that neatly dovetails with the idea [...] that classical mathematics is a proper subset of intuitionistic mathematics". Isn't that exactly backwards? Intuitionistic mathematics is the subset of classical mathematics you can reach without appealing to the law of the excluded middle.
Finally, your "summary" paragraph asserts once again the correspondence you're describing, but I don't really see where you've argued for it. (This may be best viewed as just a restatement of my earlier puzzlements.)
Replies from: halcyon↑ comment by halcyon · 2017-06-29T00:23:15.892Z · LW(p) · GW(p)
Thank you for the response.
Regarding errors: It's not that intuitionism never turns up errors. It's that the classical approach incorporates the concept of error within the formal system itself. This is mentioned in the link I gave. There are two senses here:
1. Falsehood is more tightly interwoven in the formal system when following the classical approach.
2. Errors are more integral to the process of comparing maps to territories than to the description of territories in themselves.
It is possible that these two senses are not directly comparable. My question is: How meaningful is the difference between these two senses?
Regarding subsets: It is true that intuitionism is often regarded as the constructive subset of classical mathematics, but intuitionists argue that classical mathematics is the proper subset of intuitionistic mathematics where proof by contradiction is valid. I'm basically paraphrasing intuitionistic mathematicians here.
This (i.e. the subsets thing) is not intended as an irrefutable argument. It is only intended to extend the correspondence. After all, if either the classical or the intuitionistic approach can be used as a foundation for all of mathematics, then it stands to reason that, from the foundational perspective of each, the other will appear as a proper subset.
Edit: This doesn't add any new information, but let me give an example for the sake of vividness. Suppose you have a proposition like, "There is a red cube." Next, you learn that this proposition leads to a contradiction. You could say one of two things:
1. This proves there is no red cube.
2. This means the context in which that proposition occurs is erroneous.
Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
comment by erratio · 2017-06-28T03:11:26.605Z · LW(p) · GW(p)
Request: A friend of mine would like to get better at breaking down big vague goals into more actionable subgoals, preferably in a work/programming context. Does anyone know where I could find a source of practice problems and/or help generate some scenarios to practice on? Alternatively, any ideas on a better way to train that skill?
Replies from: username2
comment by blankcanvas · 2017-06-27T17:41:10.284Z · LW(p) · GW(p)
I don't know what goal I should have to be a guide for instrumental rationality in the present moment. I want to take this fully seriously, but for instrumental rationality in and of itself, with presence.
"More specifically, instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences.
Why, my, preferences? Have we not evolved rational thought further than simply anything one's self cares about? If there even is such a thing as a self? I understand, it's how our language has evolved, but still.
Said preferences are not limited to 'selfish' preferences or unshared values; they include anything one cares about."
Not limited to selfish preferences or unshared values: what audience is rationality for?
https://wiki.lesswrong.com/wiki/Rationality
Replies from: Erfeyah, entirelyuseless, Screwtape, Lumifer↑ comment by Erfeyah · 2017-06-28T17:25:03.633Z · LW(p) · GW(p)
This is the question isn't it? Rationality can help you create and implement strategies but the choice of your goal is another matter. Of course as humans we have some built-in goals that are common to everyone:
- Basic Survival (food, water, warmth, shelter etc.). In our era that means a job.
- Social Interaction
- Intimate Relationship and Family
If you have these sorted then you can decide what is important for you as a next level goal. Attempts have been made in psychology to organise goals (an example). I personally have found that there are much deeper value systems that can be experientially confirmed. A large part of the LW community seems (to my current personal assessment) to be behind in their understanding due to a kind of undetected prejudice. You see, the most advanced value systems actually stem from stories: myth, religion, mysticism, drama, literature etc. And there is a growing amount of evidence that they have evolved to accord with reality and not to be socially constructed (and definitely not rationally arrived at).
If your new age alert was activated by my last statement, consider the possibility of this undetected prejudice I was talking about. Check out the work of Jordan Peterson.
Replies from: blankcanvas↑ comment by blankcanvas · 2017-06-30T18:51:14.220Z · LW(p) · GW(p)
Unfortunately, I'm not a man who has this undetected prejudice, I've personally been delving into mysticism in the past through meditation. I am familiar with Jordan Peterson and have watched many of his lectures thanks to the YouTube algorithm, but I'm unsure what I have learned. Do you have any suggestion in what order to watch and learn from his lectures? I'm thinking Maps of Meaning 2017 - Personality 2017 - Biblical Series.
I've also tried reading his book recommendations, like Brave New World, rated at #1, but it doesn't really seem to captivate my attention; it feels more like a chore than anything. I suppose that's how I viewed the books after all: "I need to download this information into my brain so the AGI system I might help create won't wipe us out".
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-30T19:53:09.033Z · LW(p) · GW(p)
Yes, Maps of Meaning 2017 and Personality 2017 are the most advanced ones as they are actual university courses. Maps of Meaning in particular provides a great answer to your question regarding goals. I've found that people find them hard to understand, but you are on LW so I would assume you like an intellectual challenge ;-)
The bible series is interesting in itself and a bit easier than the lectures, but I found Maps of Meaning to be deeper (though harder to get through). I think people used to a more systematic, sequential type of thinking can get confused by his style, as he rapidly presents patterns through examples on multiple levels.
If you have any objections or discussion points, I have created a thread regarding his view of religion, myths etc. I am trying to engage the LW community because, after studying the material for a few months, my assessment is that the points he is making are really strong, and I would love some strong counterarguments.
Replies from: blankcanvas↑ comment by blankcanvas · 2017-06-30T20:38:51.250Z · LW(p) · GW(p)
I want to go into this full-time, but I'm unfortunately looking at part-time work plus full-time studies (60 h / week), which annoys me deeply, and I've never managed to do even 10 hours a week (conscientiousness in the 2nd percentile, meaning only 2% of people score lower, and, uhm, neuroticism in the 80th percentile). I'm thinking about skipping my studies at the government-funded school, which bribes me very well, and instead working 20 h a week and doing Maps of Meaning etc. 40 h a week. I wrote about it more here: http://lesswrong.com/lw/p6l/open_thread_june_26_july_2_2017/dusd
I'm not your ordinary LWer, this is not my only account. If you are looking to make people buy into this who are hyperrational and IQ's in the 140's, I wasn't the targeted audience :).
Thanks for the advice by the way.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-30T21:37:38.034Z · LW(p) · GW(p)
I'm not your ordinary LWer, this is not my only account. If you are looking to make people buy into this who are hyperrational and IQ's in the 140's, I wasn't the targeted audience :).
I am not selling anything :) I enjoy discussing ideas and would not discriminate based on intelligence (at least beyond a sufficient level). IQ is a useful measure of the capacity to manipulate abstractions, but there is no evidence that it correlates with wisdom. Indeed, in my experience, I feel it can have adverse effects. As Peterson would put it, "the intellect has the tendency to fall in love with its own creations".
Regarding your life dilemmas, I do not know your circumstances, but one thing I can tell you is that you are very young and there is absolutely no reason to feel so much pressure (if I read your state correctly). Your goal is clear: you need to complete your education. You have the funding for high school, so take it seriously and go for it. Other goals, such as the AGI one, you can leave for later. Retain your flexibility, as in 5 years you are not going to be the same person you are today. Identify an area of interest while you complete high school; be honest, patient, and exercise humility for flexibility of mind. This is your time to learn, so take the next years for learning and then decide what your next step is. Don't put the cart before the horse.
↑ comment by entirelyuseless · 2017-06-27T22:45:52.141Z · LW(p) · GW(p)
The problem seems to be that you think that you need to choose a goal. Your goal is what you are tending towards. It is a question of fact, not a choice.
Replies from: Erfeyah↑ comment by Erfeyah · 2017-06-28T17:33:41.032Z · LW(p) · GW(p)
Well, we have language, so our discussion of goals feeds back into the goal-choosing mechanism. So apart from our 'needs', which we could consider base and inescapable goals, we have the ability to create imagined goals. Indeed, people often die for these imagined goals. So one question among many is "How do we create and/or choose our imagined goals?".
↑ comment by Screwtape · 2017-06-27T18:01:30.189Z · LW(p) · GW(p)
Your preferences can include other people's well being. I have a strong preference that my brother be happy, for example.
Replies from: blankcanvas↑ comment by blankcanvas · 2017-06-27T18:20:41.176Z · LW(p) · GW(p)
So all of your actions in the present moment are guided towards your brother's happiness? I didn't mean switching between goals as situations change, only one goal.
Replies from: Screwtape↑ comment by Screwtape · 2017-06-27T19:45:19.558Z · LW(p) · GW(p)
My terminal goal is that I exist and be happy.
Lots of things make me happy, from new books to walks in the woods to eating a ripe pear to handing my brother an ice cream cone.
Sometimes trying to achieve the terminal goal involves trading off which things I like more against each other, or even doing something I don't like in order to be able to do something I like a lot in the future. Sometimes it means trying new things in order to figure out if there's anything I need to add to the list of things I like. Sometimes it means trying to improve my general ability to chart a path of actions that lead to me being happy.
One goal that is arguably selfish but also includes others' values as input, and that gets followed regardless of the situation. Does that make more sense?
↑ comment by Lumifer · 2017-06-27T17:58:16.782Z · LW(p) · GW(p)
Why, my, preferences?
What are your other options?
Replies from: blankcanvas↑ comment by blankcanvas · 2017-06-27T18:19:29.576Z · LW(p) · GW(p)
That's why I am asking here. What goal should I have? I use goal and preference interchangeably. I'm also not expecting the goal/preference to change in my lifetime, or multiple lifetimes either.
Replies from: Lumifer↑ comment by Lumifer · 2017-06-27T18:33:26.588Z · LW(p) · GW(p)
What goal should I have?
First, goals, multiple. Second, internally generated (for obvious reasons). Rationality might help you with keeping your goals more or less coherent, but it will not help you create them -- just like Bayes will not help you generate the hypotheses.
Oh, and you should definitely expect your goals and preferences to change with time.
Replies from: blankcanvas↑ comment by blankcanvas · 2017-06-27T18:43:53.023Z · LW(p) · GW(p)
It doesn't make sense to have internally generated goals, as any goal I make up seems wrong and does not motivate me in the present moment to take action. If a goal made sense, then I could pursue it with instrumental rationality in the present moment, without procrastination as a means of resistance. Because it seems as if procrastination is simply resistance to enslavement by forces beyond my control. Not literally, but you know, conditioning in the schooling system etc.
So what I would like, is a goal which is universally shared among you, me and every other Homo Sapiens, which lasts through time. Preferences which are shared.
Replies from: MrMind, Lumifer↑ comment by Lumifer · 2017-06-27T19:00:37.554Z · LW(p) · GW(p)
any goal I make up seems wrong and does not motivate me in the present moment to take action
You are not supposed to "make up" goals, you're supposed to discover them and make them explicit. By and large your consciousness doesn't create terminal goals, only instrumental ones. The terminal ones are big dark shadows swimming in your subconscious.
Besides, it's much more likely that your motivational system is somewhat broken; that's common on LW.
a goal which is universally shared among you, me and every other Homo Sapiens, which lasts through time
Some goal, any goal? Sure: survival. Nice terminal goal, universally shared with most living things, lasts through time, allows for a refreshing variety of instrumental goals, from terminating a threat to subscribing to cryo.
comment by cousin_it · 2017-06-26T07:23:48.651Z · LW(p) · GW(p)
I've been mining Eliezer's Arbital stuff for problems to think about. The first result was this LW post, the second was this IAFF post, and I'll probably do more. It seems fruitful and fun. I've been also mining Wikipedia's list of unsolved philosophy problems, but it was much less fruitful. So it seems like Eliezer is doing valuable work by formulating philosophy problems relevant to FAI that people like me can pick up. Is anyone else doing that kind of work?
comment by halcyon · 2017-06-26T06:55:11.474Z · LW(p) · GW(p)
I found an interesting paper on a Game-theoretic Model of Computation: https://arxiv.org/abs/1702.05073
I can't think of any practical applications yet. (I mean, do silly ideas like a game-theoretic "programming language" count as practical?)
comment by Thomas · 2017-06-26T06:14:09.180Z · LW(p) · GW(p)
Replies from: gilch, cousin_it, Manfred↑ comment by gilch · 2017-07-02T23:08:16.039Z · LW(p) · GW(p)
Possibly relevant (crazy idea about extracting angular momentum from the Earth)
↑ comment by cousin_it · 2017-06-26T21:26:30.317Z · LW(p) · GW(p)
Well, tidal power plants exist. If you want a smaller device that will work in principle, hang a ball on a spring from the ceiling. When the moon passes overhead, the ball will go up. Then it's up to you to translate that to rotary motion. The effect will be tiny though, so you'll need a really lossless spring, a really stable building, and maybe a vacuum filled room :-)
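A rough back-of-the-envelope check of "the effect will be tiny", in Python (standard physical constants; the 1 kg ball and 10 N/m spring constant are assumed purely for illustration):

```python
# Lunar tidal acceleration at the Earth's surface, and the resulting
# static stretch of a ball hanging on a spring.
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_moon  = 7.35e22     # mass of the Moon, kg
R_earth = 6.371e6     # radius of the Earth, m
d       = 3.844e8     # mean Earth-Moon distance, m

a_tidal = 2 * G * M_moon * R_earth / d**3
print(f"Tidal acceleration: ~{a_tidal:.1e} m/s^2")   # ~1.1e-6 m/s^2, about 1e-7 g

m, k = 1.0, 10.0      # 1 kg ball on a 10 N/m spring (illustrative)
stretch_um = m * a_tidal / k * 1e6
print(f"Spring stretch as the Moon passes overhead: ~{stretch_um:.2f} micrometres")
```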
As for Foucault's pendulum, which works based on rotation of the Earth (not tides), I don't think you can extract energy from such things even in principle.