False Dilemmas w/ exercises
post by Logan Riggs (elriggs) · 2019-09-17T22:35:33.882Z · LW · GW · 9 comments
This is the third post in the Arguing Well [LW · GW] sequence, but it can be understood on its own. This post is influenced by False Dilemma, The Third alternative [LW · GW].
A false dilemma is of the form “It’s either this, or that. Pick one!” It tries to make you choose from a limited set of options, when, in reality, more options are available. With that in mind, what’s wrong with the following examples?
Ex. 1: You either love the guy or hate him
Counterargument 1: “Only a Sith deals in absolutes!”
Counterargument 2: I can feel neutral towards the guy
Ex. 2: You can only add and subtract in mathematics.
Uh, division and multiplication, right?
Ex. 3: Either you get me that new car, or you don’t love me!
I can care about your well-being and happiness and not get you that specific car. I could buy a used car, so you wouldn’t freak out if it got damaged or [3 different examples based on what the (father?) thinks is best].
Ex. 4: You didn’t donate to the food drive, so you don’t care about starving children!
I do care about starving children, but my family and I are just barely scraping by and I value feeding them first.
Generalization
How would you generalize the above examples? Of course, each presents a certain set of options as the only available ones (I said as much in the intro!), but what’s the relationship between those options? Are there different types of options that are similar or different across the examples?
One way to frame it is to separate options into two types: values and actions
With that frame, false dilemmas can be categorized into 4 varieties:
1. Only these values are compatible with an agent
2. Only these actions are compatible with a system/environment
3. Only these actions are compatible with this value
4. Only these values are compatible with this action
These 4 varieties correspond, in order, to the four examples above (this is not a coincidence). Note: you could have answered Ex. 3 as (4) and Ex. 4 as (3); it really just depends on the truth of the situation. I’ll leave the details as an exercise for the reader.
[Also, I’m sticking my neck out here and claiming that these are the only 4 categories a false dilemma can ever fit. Prove me wrong in the comments and I’ll update the post]
So how about trying this new frame out on the following:
Ex. 5: You can either get up and work out every day or stay on that couch and stay an unhealthy slob!
Ouch. Rephrasing: “Only working out every day is compatible with desiring health”. There’s also couch-to-5k, high-intensity training 1-3 days/week, or playing sports together, all of which could improve health and might fit better long-term, or during a transition, or with whatever else you value.
Ex. 6: Pencils are for writing and erasing
I could use one to make a beat, play miniature football, as a flagpole in a diorama, as a gift, or to poke holes in a page, or …
Ex. 7: You upset your friend Alice with an insensitive joke. Bob tells you, “I can’t believe you said such a terrible thing. You don’t care about her at all!”
Rephrasing: “Only never hurting someone’s feelings is compatible with caring about them”. I care about Alice very much, I just really suck at showing it. (I think it’s interesting that this covers failed goals/good intentions. This makes sense since agents, like us, aren’t logically omniscient)
Ex. 8: You either care for animal life or your own temporary comfort!
I can care about both to different degrees.
Algorithm
What might an ideal general algorithm for dissolving false dilemmas look like?
1. What are the possible values and actions?
2. What’s arbitrarily constrained?
3. Yoda Timers: [LW · GW] brainstorm more options
[Note: Some of these examples are limited because we don’t know the context or the purpose, so it’s hard to enumerate very useful actions/accurate values. That’s okay because in reality, you can introspect and ask questions]
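The three steps above can be sketched as a toy procedure. This is just an illustration, not a serious implementation; the function and variable names are all hypothetical, and "brainstormed options" stands in for whatever a Yoda Timer actually surfaces:

```python
# Toy sketch of the algorithm above (all names are hypothetical).
# A "dilemma" is modeled as the set of options someone presents as exhaustive.

def is_false_dilemma(presented_options, brainstormed_options):
    """Steps 1-3: enumerate the options, then check whether the presented
    set was an arbitrary constraint, i.e. whether brainstorming surfaced
    options outside the presented set."""
    extra = set(brainstormed_options) - set(presented_options)
    return len(extra) > 0, sorted(extra)

# Ex. 1: "You either love the guy or hate him"
presented = ["love him", "hate him"]
brainstormed = ["love him", "hate him", "feel neutral", "mixed feelings"]

false_dilemma, alternatives = is_false_dilemma(presented, brainstormed)
print(false_dilemma)   # True: the presented pair was an arbitrary constraint
print(alternatives)    # the third (and fourth) alternatives
```

The real work, of course, is in the brainstorming step; the set difference only makes explicit that a false dilemma is exactly the case where that step turns up something new.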
Useful Constraints:
Constraints can be useful when used purposefully. False Dilemmas are generally bad because they are arbitrary constraints and no one is even aware that a constraint is being used! In the following prompts, what’s a useful constraint to use and why is it useful?
Ex. 9: “Write a 500 word essay on the United States”
“Narrow it down to the front of one building on the main street of Bozeman. The Opera House. Start with the upper left-hand brick.” [This is stolen from Zen and the Art of Motorcycle Maintenance]. Narrowing the topic is useful for breaking writer’s block.
Ex. 10: Prove that given two Natural numbers (0,1,2,3,...) n and m, n*m = 0 if and only if one of them is 0.
I could argue about all natural numbers n and m at once; however, it may be easier to divide this into 4 cases: n = 0 & m = 0, n > 0 & m = 0, n = 0 & m > 0, and n > 0 & m > 0. This is useful because each case is trivial to prove, and it’s hard (for me!) to prove this in a way that doesn’t rely on cases.
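Written out, the case split might look like this (a sketch, with the easy direction included for completeness):

```latex
\textbf{Claim.} For $n, m \in \mathbb{N}$: \quad $nm = 0 \iff n = 0 \text{ or } m = 0$.

($\Leftarrow$) If $n = 0$ then $nm = 0 \cdot m = 0$; symmetrically if $m = 0$.

($\Rightarrow$) Suppose $nm = 0$, and split into the four cases:
\begin{itemize}
  \item $n = 0$ and $m = 0$: then $n = 0$, done.
  \item $n > 0$ and $m = 0$: then $m = 0$, done.
  \item $n = 0$ and $m > 0$: then $n = 0$, done.
  \item $n > 0$ and $m > 0$: then $n \ge 1$ and $m \ge 1$, so
        $nm \ge 1 > 0$, contradicting $nm = 0$; this case cannot occur.
\end{itemize}
In every possible case, $n = 0$ or $m = 0$. $\square$
```

Here the constraint (only four mutually exclusive, jointly exhaustive cases exist) is doing the useful work: it is not arbitrary, since it covers all of $\mathbb{N} \times \mathbb{N}$.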
Ex. 11: You listed them out, and it appears you have 10 problems in your life and you can’t even do anything about some of them!
Circle which ones you can do something about, and constrain your thoughts/actions to only those problems. Constraining in this way helps combat feeling overwhelmed/helpless.
Ex. 12: Your friend asks, “Where do you want to go eat?”. You say, “Oh, wherever, I’m not picky”. They say, “Oh, me too!”. (Normally it takes 5-10 minutes after this to figure out where to go.)
I could say anything like “Place A and B are very close and I like both, which one is good for you?”, and that normally speeds up the process. Same applies for picking a time and place for hanging out. “How about this coffeehouse at this time? Is that good?”
Ex. 13: You always seem to have trouble deciding what food to order when you go to a new restaurant.
There are several constraints that may serve you well. One is “Pick what is popular in the restaurant”; another is “Pick the first thing that sounds good” (got this from someone here on LW).
Introspective Problem Set
Very often we arbitrarily constrain what we can do.
Ex. 14: How do identities (I am a Parent/Good Student/Good Friend/Smart Person/etc.) constrain possible actions? What’s a specific example of an identity you’ve held (or are holding) that has constrained what you thought you could do?
By identifying as X, we feel we must act like someone who is X, whether that means mimicking how we’ve seen another X act or acting out our own model of what being X looks like.
I’ve personally identified as a “Good Student”. What I mean is that as long as I really put in the time to study, then I have done my duty. This has constrained me away from other possible actions, such as asking what specifically confused me and googling other explanations.
Ex. 15: Are there any times when it’s good to constrain who you are/ what you can do?
Tons! The search space of a problem is sometimes very big. Permanently holding one identity/role is usually bad because it confines you to one specific chunk of the search space. But temporarily playing different roles helps you search through different chunks more efficiently.
For example, playing as the devil’s advocate with yourself can help search for the very best reasons why your idea is good, and find the very best reasons why it’s bad. When a friend tells me a huge problem they’re dealing with, it’s time to switch to “Very concerned friend who listens”. When I meet someone very shy, it’s time to switch to “Talkative friend who asks easy questions”.
This is also related to the Intelligent Social Web [LW · GW].
Ex. 16: What about emotions/mental states? What’s a time they’ve constrained you to do something bad? What about something good?
Sometimes when something frustrating happens, I feel like I’m only constrained to the action “Eat something sweet and distract yourself”. Realizing that, I have now produced several alternatives: take a walk, take care of any present needs (hunger, sleep, etc), play piano, call a loved one, etc. I would say this is category 2, “Only these actions are compatible in this specific environment”
Being in a meditative state helps me love those who are late (it’s true because it rhymes). I am consistently far more patient in a meditative state, and it’s my go-to move when someone is late.
Conclusion
Category Qualifications was about seeing words as a set of qualifications and how to wield qualifications to effectively communicate. This post is about seeing constraints in planning/agents/environments and how to wield those constraints effectively to achieve your goals.
When arguing well, it’s useful to know exactly which constraints are being used and why they are being used. People fall victim to False Dilemmas when they’re not aware of the implied/assumed constraints.
In the next post [LW · GW], we’ll be investigating how to find cruxes and how this is useful when arguing well.
As a final exercise (which I haven’t worked through myself), how does this apply to conflict vs. mistake culture?
[This is an iterative post/sequence. I don’t think any of us on LW claim we have the 100% truth/final word on a topic. If you strongly downvote a post, please also leave a comment saying why it’s wrong/not useful so the iterative/improving process can happen]
9 comments
Comments sorted by top scores.
comment by Raemon · 2019-09-18T01:17:39.603Z · LW(p) · GW(p)
Note: we fixed spoiler tags right....
around the same time
that
you
published this
But the changes aren't retroactive, alas. If you'd like someone on the LW team to fix your spoilertags for you we'd be happy to do that.
comment by Logan Riggs (elriggs) · 2019-09-18T01:49:03.429Z · LW(p) · GW(p)
I would very much appreciate someone doing that for me.
comment by habryka (habryka4) · 2019-09-18T05:37:24.905Z · LW(p) · GW(p)
Done!
comment by Logan Riggs (elriggs) · 2019-09-18T14:30:51.269Z · LW(p) · GW(p)
Underneath the Algorithm heading, everything below it is a spoiler and has a scroll bar. Is this what you see?
comment by Logan Riggs (elriggs) · 2019-09-18T15:16:26.969Z · LW(p) · GW(p)
I fixed the issue on mine. I created and shared a draft with you reproducing the error.
comment by habryka (habryka4) · 2019-09-18T16:35:55.905Z · LW(p) · GW(p)
Yeah, I also found that yesterday. Will fix that today.
comment by Donald Hobson (donald-hobson) · 2019-09-18T13:59:52.744Z · LW(p) · GW(p)
“Either my curtains are red, or they are blue” would be a false dilemma that doesn’t fit any category. You can make a false dilemma out of any pair of non-mutually-exclusive predicates; there is no need for them to refer to values or actions.
comment by Logan Riggs (elriggs) · 2019-09-18T14:50:43.400Z · LW(p) · GW(p)
This is good, thanks! I want to get this right though, so the general form would be:
X is only compatible with Y
And your example is "My curtain is only compatible with being red or blue". Which could generalize to "This object is only compatible with these properties".
Would this work as a 5/6th category? I think it's useful to have categories, but it might be better to just give the above general form, and then give possible general categories (like actions, values, properties, etc.) Thoughts?
comment by jmh · 2019-09-19T13:40:07.816Z · LW(p) · GW(p)
This post is about seeing constraints in planning/agents/environments and how to wield those constraints effectively to achieve your goals.
There is a huge literature on this type of agenda setting. I think, for the most part, one person achieving their goals will depend a lot on how well others, with competing and possibly incompatible goals, recognize the situation and formulate their own strategies.