Are these arguments valid?
post by Erfeyah · 2018-01-10T14:06:42.301Z · LW · GW · 20 comments
I have lately found myself using two particular strategies quite often during discussions and want to make sure that their logical structure is valid. So, I thought what better place to have them dismantled than LW :)
[1] The first strategy involves sending a hypothetical example's equivalent back in time and using our present knowledge of the outcome to judge whether the argument is valid. The last time I used this was when someone tried to convince me that IQ is the main factor in human value by asking me which is superior: a technologically developed, high-IQ culture or an under-developed, mid-IQ one.
I responded that I cannot rationally decide based on this information alone. When pushed on why, and on making a choice, I replied that if you were asking me this question about pre-war Germany in place of the highly developed country, then by your own logic you would have to choose Germany as superior; but we now know that the 'superior' country was morally inferior (I assumed, correctly, that they accept similar definitions of good and evil, and that German actions in the war were evil). With the benefit of hindsight we know that this would have been the wrong decision, so their argument is demonstrably wrong.
Now, I don't want to get into this argument here. I just want to know if the strategy I used is logically valid, as they did not accept it and instead, more or less, accused me of sophistry.
[2] The second strategy is more suspect, in my estimation, but I am not sure why. In this method I demonstrate humanity's minuscule understanding of reality (when put in proper perspective) and use this as a basis for a kind of attitude. Here is an example:
When discussing whether life has meaning or not, one answer I use is a pragmatic one. The issue at hand is deciding how to act; in other words, which belief to use as a motivation for action. There are two epistemic possibilities:
- [2.1] life has meaning
- [2.2] life does not have meaning.
First of all, we do not know whether life has meaning, nor can we estimate it with any reasonable confidence. We can estimate based on current data, but our data is tiny compared to the whole of reality. Therefore, we should always act as if [2.1] is true, on the basis that, if it is true, we (personally or humanity as a whole) might understand and even contribute towards it. If [2.2] is true, on the other hand, the things to be lost (like effort, comfort etc.) are nothing in comparison.
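To make the structure of the wager explicit, here is a minimal sketch as a toy decision matrix. The payoff numbers are purely hypothetical placeholders I chose to encode the asymmetry I am claiming; they are not estimates of anything.

```python
# A toy decision matrix for the wager-style argument above.
# The payoffs are made-up placeholders encoding the qualitative claims:
# a large upside if meaning exists and we act on it, a small cost
# (effort, comfort) if it does not.

payoffs = {
    ("meaning", "act as if meaning"): 100,    # we might understand / contribute to it
    ("meaning", "act as if no meaning"): 0,   # the opportunity is missed
    ("no meaning", "act as if meaning"): -1,  # some wasted effort and comfort
    ("no meaning", "act as if no meaning"): 0,
}

def expected_payoff(action, p_meaning):
    """Expected payoff of an action, given a probability that life has meaning."""
    return (p_meaning * payoffs[("meaning", action)]
            + (1 - p_meaning) * payoffs[("no meaning", action)])

# Even with a small (and admittedly unknowable) probability that [2.1] is true,
# the asymmetry in the stakes favours acting as if it is.
for p in (0.01, 0.1, 0.5):
    print(f"p(meaning)={p}: "
          f"act-as-if-meaning={expected_payoff('act as if meaning', p):.2f}, "
          f"act-as-if-no-meaning={expected_payoff('act as if no meaning', p):.2f}")
```

Of course, the conclusion only follows if you grant the assumed asymmetry in the stakes, which I expect is exactly the premise people will want to challenge.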
(Whoops, I just casually introduced a discussion starter about the meaning of life... - sorry about that :P - Feel free to respond on whether the presented argument is sound, but please do it in a separate comment from the one discussing whether it is logically valid.)
20 comments
comment by AndHisHorse · 2018-01-10T14:47:34.036Z · LW(p) · GW(p)
1) Historical counter-examples are valid. Counter-examples of the form "if you had followed this premise at that time, with the information available in that circumstance, you would have come to a conclusion we now recognize as incorrect" are valid and, in my opinion, quite good. Alternatively, this other person has a very stupid argument; just ask about other things which tend to be correlated with what we consider "advanced", such as low infant mortality rates (does that mean human value lies entirely in surviving to age five?) or taller buildings (is the United Arab Emirates the objectively best country?).
2) "Does life have meaning" is a confused question. Define what "meaning" means in whatever context it is being used before engaging in any further debate, otherwise you will be arguing over definitions indefinitely and never know it. Your argument does sound suspiciously similar to Pascal's Wager, which I suspect other commenters are more qualified to dissect than I am.
Replies from: Erfeyah
↑ comment by Erfeyah · 2018-01-10T18:53:36.237Z · LW(p) · GW(p)
Thank you for your comment.
I too think that the logic of [1] is valid. I am going to ask Dagon in the other comment why he thinks that it is not even near a logical structure. As for [2], I was interested in finding out whether, in the case where we agree on the terms, the conclusion follows from the premise. But I think you are right; it is probably impossible to judge this in the abstract.
In terms of the argument itself, it is kind of like Pascal's Wager, with the difference of framing it as a moral duty towards 'meaning' itself (since if meaning exists it, in my formulation, grounds the moral duty) instead of self-interest as in Pascal's Wager.
P.S.: Interesting to see downvotes for a question that invites criticism... If you downvoted the post yourself, any constructive feedback on the reason why would be appreciated :)
comment by gjm · 2018-01-15T16:51:20.097Z · LW(p) · GW(p)
The first argument is valid in principle but liable to mislead in practice. That is: yes, "here is a historical example where you could have followed this approach, and we can see that the result would have been bad" is indeed an argument against the thing under consideration; but in historical examples it's very often true that the people we're talking about were (from today's perspective) terribly underinformed or misinformed about many things, and in that situation it's perfectly possible for good things to lead to bad results.
For instance, it is arguable that the heliocentric models of Galileo's time were not initially much better than the geocentric ones (if indeed they were better at all) and that a reasonable person then would not necessarily have been on Galileo's side. Richard Dawkins has famously said that Darwin made it possible to be an intellectually fulfilled atheist, and that before Darwin design arguments made theism hard to avoid. For those of us who are heliocentrists and atheists, does this mean that there's something wrong with rationality, since it would have led to wrong answers in those cases? No, it means that there was something wrong with the information available in those historical situations.
Replies from: Erfeyah
↑ comment by Erfeyah · 2018-01-15T18:26:03.522Z · LW(p) · GW(p)
These are great points. I think the strategy is particularly useful against one-sided arguments. In the case of my example it was someone suggesting that high IQ is the sole measure of value, and I can thus use the strategy with confidence to point to the existence of other parameters.
But you are making another point that I am very interested in and have touched upon in the past:
For those of us who are heliocentrists and atheists, does this mean that there's something wrong with rationality, since it would have led to wrong answers in those cases? No, it means that there was something wrong with the information available in those historical situations.
Since rationality is dependent on available information, it can be said there was something wrong with that 'rational assessment', though not with rationality itself. But we should then attend to the fact that our information is still incomplete (to say the least). I touched on a related point in my post Too Much Effort | Too Little Evidence.
I have been attempting to compose a more formal exploration of this issue for some time, but it is quite difficult to formulate properly (and also a bit intimidating to present it, of all places, to the rationalist community, haha).
comment by TheMajor · 2018-01-15T15:23:19.081Z · LW(p) · GW(p)
There is a technique in the card game Bridge that is similar to your point [2], so I wanted to mention it briefly (I believe this specific example has been mentioned on LessWrong before but I can't seem to find it).
The idea is that you have to assume that your partner holds cards such that your decisions influence the outcome of the game (i.e. that the part of the universe outside your control is structured in such a way that your choices matter). Following this rule will let you play better in games where it is true, and has no impact on the other games. Is this similar to what you are trying to say?
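To make the structure I have in mind concrete, here is a toy sketch; the card situations and outcomes are entirely made up, and the only point is that the two strategies differ solely in the worlds where the assumption holds.

```python
# Toy illustration of the Bridge heuristic above, with made-up outcomes.
# "play_for_it": play on the assumption that your partner holds the needed cards.
# "give_up": play as if they do not.

outcomes = {
    ("partner has the cards", "play_for_it"): "contract made",
    ("partner has the cards", "give_up"):     "contract lost",
    ("partner lacks the cards", "play_for_it"): "contract lost",
    ("partner lacks the cards", "give_up"):     "contract lost",
}

# The assuming strategy never does worse and sometimes does better,
# because the outcomes only differ when the assumption is true.
for world in ("partner has the cards", "partner lacks the cards"):
    print(world, "->", {a: outcomes[(world, a)] for a in ("play_for_it", "give_up")})
```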
Replies from: Erfeyah
comment by Hazard · 2018-01-10T19:34:28.550Z · LW(p) · GW(p)
You correctly identified that the disguised query is "How should I act?", though I don't think you completely sidestepped the "Does life have meaning?" trap. By saying "We should believe life has meaning because it will make us more effective" you implicitly assert that:
- "Life has meaning" is a coherent proposition that everyone is on the same page about.
- The above proposition is the true state of reality.
- Whether or not we should "give up" is an immediate consequence of whether "Life has meaning" is true or false.
An argument or discussion that avoids addressing those implicit propositions is unlikely to be one of great epistemic virtue. If "Does life have meaning?" feels like a real question, try dissolving the question. If you're interested in a useful, well-put-together model of meaning (which dodges the moral "and therefore you should XYZ"), look here.
Meta:
I feel conflicted about this having been downvoted. Yes, it's not a super high value post, but it seems sincere and I feel that there should be a space for people to ask questions like this.
Replies from: Zvi, habryka4
↑ comment by Zvi · 2018-01-10T20:43:38.409Z · LW(p) · GW(p)
I say system working as designed. Erfeyah spends a few karma, gets answers and engagement. If the questions are valuable that's a great trade.
Replies from: Hazard, habryka4
↑ comment by habryka (habryka4) · 2018-01-10T21:08:41.374Z · LW(p) · GW(p)
Actually, good point. This seems like a sensible equilibrium to me.
↑ comment by habryka (habryka4) · 2018-01-10T19:55:10.894Z · LW(p) · GW(p)
I do not feel like the author has displayed familiarity with any significant chunk of the core writing on the page (i.e. the sequences and associated writing), but is asking the community to answer questions for him. I think it is important that people become familiar with the core writing of LessWrong before they ask a lot of questions of the community; otherwise you just get stuck in an eternal September and answer the same basic epistemological questions over and over again.
Replies from: Erfeyah
↑ comment by Erfeyah · 2018-01-10T20:39:28.578Z · LW(p) · GW(p)
With all due respect, active engagement and feedback is a great way to learn in addition to reading the sequences.
Hazard above has pointed me to specific articles that I can apply directly to the analysis of arguments that I relate to, accelerating my learning. If Hazard and others are willing to help and I display the correct attitude towards learning I can only think of two problems this may cause:
- You feel that posts such as this one are claiming your attention and you would prefer to avoid them completely.
- You feel that they are cluttering the site itself.
These are valid concerns, but I would suggest that if your concerns are shared by a majority of users the problem can be addressed at the level of site design.
I am also curious if you could specify exactly what you mean by the phrase "the same basic epistemological questions" in relation to this post.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2018-01-10T21:13:46.302Z · LW(p) · GW(p)
Strongly agree that engagement and feedback is a great way to learn! Though it's not trivial for users on the page to distinguish someone who is using the questions to further their understanding and comes from a perspective of scholarship, from someone who feels entitled to the community answering questions for them (which is a common occurrence on the page). For example, I would have been strongly supportive of a post that directly referenced one or two things on LessWrong that you read that you felt were kind of trying to answer your questions, but not really making sense to you, or that relied on the assumptions of your questions but seemed unjustified. In the current form of your post, it's not clear whether you were someone who had any interest in trying to engage with the material on the page in a serious way, and I do apologize for making you feel less welcome if you are.
I care strongly about people having a good scholarship experience on LessWrong, and also wrote the above comment on my phone while waking up, so I was probably a bit less nice than I should have been. Sorry for that.
I do indeed think that the problem is mostly addressed on the level of site design, by you paying a small tax in karma for writing the post (as Zvi said above), and then people answering the questions for you. And if the questions are shared by others and seem insightful to others, then you get a bunch of karma instead. That seems to create a pretty sensible incentive for the site.
Replies from: Erfeyah
comment by Eugen · 2018-01-10T17:16:16.803Z · LW(p) · GW(p)
2) That depends entirely on the definition of meaning, just as AndHisHorse points out. It's not clear to me what the most accepted definition of meaning is, not even among scientists, let alone laymen.
One could even define meaning as loosely as "a map/representation of something that is dependent on or entangled with reality". Most people seem to use meaning and purpose interchangeably. In this case I like purpose better, because then we can ask "what is the purpose of this hammer" and there is a reasonable answer to it that we all know. And if you further ask "why does it have purpose" you can say because humans made it to fulfill a certain function, which is therefore its purpose.
But be careful; "what is the purpose of a wing" (or alternatively insert any other biologically evolved feature here) may be a deeply confused question. In the case of the hammer, "purpose" is a future-directed function/utility because an agent shaped it. In the case of a wing, there is no future-directed function, but rather a past-directed reason for its existence. Therefore, "in order to fly" is not the correct answer to the question "why do wings exist"; the correct answer has to be past-directed (something like "because it enabled many generations before to do X, Y, and Z and thus became selected for by the environment"). So questions about the purpose of hammers and wings aren't necessarily well-defined at all.
Humans, being biologically evolved beings, don't have purpose in the sense of a hammer, but only in the sense of a wing. The difference, however, may be that we can exert more agency than a wing or a bird and can actually create things with purpose, and can thus possibly give ourselves or our lives purpose.
My answer then would be that we don't have future-directed purpose apart from whatever purpose(s) we choose to give ourselves. Sure, we may be in a simulation, but there is little evidence that this simulation is in any way about us; we may just be a complete by-product of whatever purpose the simulation may have.
Replies from: Erfeyah
↑ comment by Erfeyah · 2018-01-10T20:54:08.893Z · LW(p) · GW(p)
Yes, I think you and AndHisHorse are right in your criticism of [2].
I also really loved the past-directed/future-directed distinction you are making! It kind of corners me towards making a teleological argument as a response, which I have to support against the evolutionary evidence of a past-directed purpose! There is another answer I can attempt that is based on the pragmatic view of truth but phew… I don't think I am ready for that at the moment :)
Thanks!
comment by Eugen · 2018-01-10T22:26:56.976Z · LW(p) · GW(p)
There is a very obvious problem with [1] as well:
" The first strategy involves sending a hypothetical example's equivalent back in time and using the present knowledge of the outcome as a justification for the validity or not of the argument."
It has basically the same problem as any "reasoning by analogy"-type argument. Reality is built from relatively simple components from the ground up and becomes complex quickly the "further up" you go. What you do is take a slice from the middle, compare it to some other slice from the middle, and say: X is like Y; Z applies to Y; therefore Z also applies to X.
In a perfect world you'd never even have to rely on reasoning by analogy, because instead of comparing a slice of reality to some other slice you'd just explain something from the ground up. Often we can't do that with sufficient detail, let alone with enough time, so reasoning by analogy is not always the wrong way to go, but the two cases you picked are too far apart and too different, I think.
Here's an example of reasoning by analogy I put once to a friend:
Your brain is like a vast field of wheat and the paths that lead through the field are like your neural connections. The more often you walk down the same path, the deeper that path becomes ingrained, and eventually habits form. Doing something you've never done before is like leaving the path and going through a thick patch of wheat - it requires a lot more energy from you, and it will for some time. But what you need to trust is that - as you know - eventually there will be a new path in the field if you just walk it often enough. And in exactly the same fashion your brain will develop new connections; you just have to trust that it will actually happen, just as you completely trust your intuition that eventually you'll walk a deep path into the field.
And he replied: "So what if I took a tractor and just mowed the whole field down?" WOW, I never saw that one coming; I never even expected anyone could be missing the point so completely...
Obviously I wasn't claiming a brain IS LIKE a field in every way you can possibly think of. It just shares a few abstract features with fields, and those were the features I was interested in, so it seemed like a reasonable analogy.
Coming back to your story: the strength of an argument by analogy relies on how well you can actually connect the two and make a persuasive case that the two things work similarly in those features / structural similarities that you actually try to compare. It's not clear to me how your analogy helps your case. A superintelligent AI is the most intelligent thing you can('t) imagine, but it could turn the universe into paperclips, which I don't care much for, so I for one do not value intelligence above literally ALL else.
If your friend says feature X is the most important thing we should value about humans, the obvious counterargument would be "perhaps it could be, yet there are many features that we also care about in humans apart from their intelligence, and dealing with someone that is only intelligent but cannot/does not do Y, Z and W would not be good company for any normal human, so these other things must matter too."
Alternatively, you could try to transcend the whole argument and point out how meaningless it is. To explain how, here's a more mathematical approach: if "human value" is the outcome variable of a function, your friend rips out a particular variable X and says this variable contributes most to the outcome. For him that may be true, or at least he may genuinely believe it, but the whole concept seems ridiculous; we all know we care about a lot of things in other humans, like loyalty and friendship and reciprocity and humor and whatnot.
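A minimal sketch of what I mean; the traits, weights and numbers are entirely made up for illustration:

```python
# Toy illustration: "human value" as the output of a function of many inputs.

def human_value(traits, weights):
    """A crude weighted sum over many traits we plausibly care about."""
    return sum(weights[t] * traits.get(t, 0) for t in weights)

weights = {"intelligence": 1.0, "loyalty": 1.0, "friendship": 1.0,
           "reciprocity": 1.0, "humor": 1.0}

alice = {"intelligence": 9, "loyalty": 2, "friendship": 1, "reciprocity": 1, "humor": 1}
bob   = {"intelligence": 5, "loyalty": 8, "friendship": 8, "reciprocity": 7, "humor": 6}

# Ranking people by the single variable "intelligence" reverses the ranking
# given by the full function: ripping out one input and calling it "the"
# measure silently assumes the weights on everything else are zero.
print(human_value(alice, weights), human_value(bob, weights))  # 14.0 34.0
print(alice["intelligence"], bob["intelligence"])              # 9 5
```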
To make this argument meaningful, he's really the one who has to argue precisely how it helps to focus on only one of the very many parts that we clearly all value. What purpose does he think it would accomplish if he managed to make others believe it?
Now, I don't think he or most people actually care; it seems like a classic case of "here's random crap I believe, and you should believe it too for some clever reason I'm making up on the fly - both so I can dominate you by making you give in and so we can better relate".
Replies from: Erfeyah
comment by Dagon · 2018-01-10T17:21:07.189Z · LW(p) · GW(p)
Ugh. There is no valid logic anywhere near these topics. If you're not defining terms and making concrete measurable propositions, you can't use logic as part of your reasoning.
Replies from: Erfeyah
↑ comment by Erfeyah · 2018-01-10T18:59:47.877Z · LW(p) · GW(p)
The validity of a logical argument can be judged independently of whether it is sound. I think for [2] this seems quite difficult, as per AndHisHorse's comment. Could you elaborate on why this is the case for [1] as well? It seems to me that the logic can be abstracted quite easily.