[Hammertime Final Exam] Quantum Walk, Oracles and Sunk Meaning

post by silentbob · 2019-09-03T11:59:45.696Z · LW · GW · 4 comments

Contents

  Technique: Quantum Walk
  Framework: Ask the Oracle
  Bias: Sunk Meaning
4 comments

One and a half years ago, alkjash [LW · GW] published his Hammertime [? · GW] sequence. Admittedly a bit late to the party, I recently went through the 30 days together with a few other aspiring rationalists, who may also post their "final exams [? · GW]" in the coming weeks.

The task was the following:

  1. Design an instrumental rationality technique.
  2. Introduce a rationality principle or framework.
  3. Describe a cognitive defect, bias, or blindspot.

For each of these, we're supposed to spend 5 minutes brainstorming plus 5 minutes writing the text. That worked out for the brainstorming in my case, but the writing took somewhat longer, especially turning the whole thing into a semi-readable post (I couldn't figure out how to format things properly in here; it took me 15 minutes to create proper headlines, since the heading style was never limited to my selection but turned my whole text into a headline, and creating those fancy separators kept removing big chunks of text; so, sorry about that).


Technique: Quantum Walk

Murphyjitsu [? · GW] is a CFAR technique that helps you bulletproof plans that might feel secure but in reality probably aren't. It makes you focus on all the things that could go wrong which you haven't taken into account yet.

I propose a similar yet inverse method: picking goals that you're hesitant to set because they seem unreachable for whatever reason. Given such a goal, imagine that x months from now you have reached it, and then think about what course of action may have brought you from where you are now to that point.
So here we are not looking for unforeseen failure modes, but for a hidden path to success, one that may make the difference between shying away from that goal and making tangible progress toward it.

The reason I'm calling it "quantum walk" is twofold. Firstly, I couldn't think of any better name, as "anti-Murphyjitsu" sounds suboptimal. Secondly, I recently read Life on the Edge and am now primed to see quantum mechanics everywhere. One thing the book explained was how photosynthesis contains a step that only works due to quantum mechanics: some particle has to find a certain very specific spot to settle in. If that particle behaved in a classical way, finding that spot would be incredibly unlikely and the whole process of photosynthesis wouldn't actually work. Yet, utilizing quantum laws, the particle finds its way to its destination.
The technique's approach is similar in that you're not searching for a way to the goal from where you are now. Instead, you simply accept that the goal has been reached in some hypothetical future situation and backtrack from there. The analogy is weak, but at least the name has a certain ring to it.

For those interested, I suggest the following challenge: Pick any goal you'd like to have but are afraid to actually accept as a goal. In case you can't think of any, use the technique of picking any existing goal and doubling its scale until it seems utterly ridiculous [? · GW]. Now try to answer the question "assuming x months from now I've reached that goal, what course of action has led me there?", and feel free to share whether or not this has led you to any new insights.
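To make the backtracking step concrete, here is a minimal sketch in Python. It is entirely my own illustration rather than part of the original technique; the example goal and the twelve-month horizon are placeholders you'd replace with your own.

    # A minimal sketch of the "quantum walk" exercise: start from the
    # imagined future in which the goal is already reached and walk
    # backwards, one step at a time, until you arrive at today.
    # The goal and time frame passed in below are placeholders.

    def quantum_walk(goal: str, months: int) -> list[str]:
        steps = [f"{months} months from now: {goal}"]
        while True:
            answer = input(f"What happened just before '{steps[-1]}'? "
                           "(empty answer = we've arrived at today) ")
            if not answer.strip():
                break
            steps.append(answer.strip())
        # The story was collected backwards, so reverse it to get a
        # forward plan leading from today to the goal.
        return list(reversed(steps))

    if __name__ == "__main__":
        for step in quantum_walk("I have published a book", months=12):
            print("->", step)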

(Disclaimer: my explanation of that photosynthesis phenomenon may be somewhat (or very) off, but as it's not really the point of the text, I'll take that risk.)

Framework: Ask the Oracle

There are times when we're trying to learn or understand something but aren't really getting anywhere, whether it's learning some esoteric concept in physics class, assembling a piece of furniture, setting up a computer or debugging a piece of code. Sometimes these cases seem really obscure and highly confusing: we don't even know where to start, let alone how to solve the issue. As aspiring rationalists we're aware that this confusion stems not from the territory itself but from the map we're using, yet we feel unable to adjust our map accordingly: it's so distant from the territory that banging our head against the wall may seem just as promising a step as any other we could possibly think of.

There is one thing we have control over, however: becoming an expert on our own confusion. Only once we know exactly what we don't understand, and why, can we take the necessary steps to fill in the gaps and slowly get to a higher vantage point. One way to approach this is to ask yourself which question, if answered by a hypothetical oracle that truthfully answers yes or no to any question that has a clear answer, would reduce your confusion the most.
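As a toy illustration of why the choice of question matters (this example is entirely my own, with made-up hypotheses about a piece of furniture, not something from the original framework): a well-chosen yes/no question can eliminate half of the remaining hypotheses, so a handful of questions is enough to pin down even a fairly large space of possible confusions.

    import math

    def ask_the_oracle(hypotheses, oracle):
        # `oracle(subset)` must truthfully answer True iff the correct
        # hypothesis lies somewhere in `subset`, i.e. it answers the
        # yes/no question "is the true cause in this half?".
        candidates = list(hypotheses)
        n, questions = len(candidates), 0
        while len(candidates) > 1:
            half = candidates[:len(candidates) // 2]
            questions += 1
            if oracle(half):
                candidates = half
            else:
                candidates = candidates[len(candidates) // 2:]
        print(f"Resolved after {questions} questions (log2 of {n} is {math.log2(n):.1f})")
        return candidates[0]

    # Eight made-up reasons why a wardrobe door won't close.
    hypotheses = ["a screw is missing", "a panel is upside down",
                  "the hinge is on the wrong side", "the frame isn't square",
                  "a dowel isn't fully inserted", "wrong screws were used",
                  "the door is warped", "the instructions skipped a step"]
    print(ask_the_oracle(hypotheses, lambda half: "the frame isn't square" in half))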

Tutoring often works like this: the student struggles to understand a certain concept, so the teacher tries to explain it to them. The teacher is the more active party, while the student, in a comparatively passive role, takes in the information and tries to add it to their mental model wherever it sticks. This framework turns things around and puts the student in the active role, so that any progress emerges from them instead of from the teacher. The hypothetical teacher in this case, the oracle, is completely reactive and does nothing other than passively answer the student's inquiries.

In reality, of course, there is no oracle. At best there's another person whose understanding exceeds yours. At worst there are still the internet and textbooks.

I personally often find myself in the situation that I'm struggling with something, say an API I want to use in a web application that doesn't behave as expected, and I get frustrated and blame outside sources such as the API's developer or the bad documentation. This framework forces me to take full responsibility and focus on what actually matters: what exactly is currently limiting my understanding, and what I could do to change that. Once I have a set of questions whose answers would allow me to progress, finding those answers is often easier than expected. I tend to focus so much on wanting to find answers that I forget the crucial part is coming up with the right questions. And the latter doesn't depend on the territory to which I'm lacking access, but entirely on my personal map.
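In the API case, the framework amounts to replacing "it doesn't work" with a short list of questions that each have a definite answer. A rough sketch of what that can look like in practice; the URL and the expected field are hypothetical placeholders, not a real API.

    import requests

    URL = "https://api.example.com/v1/items"  # hypothetical placeholder

    # Question 1: does my request reach a server at all?
    try:
        resp = requests.get(URL, timeout=10)
    except requests.RequestException as error:
        print("No - the problem sits below the API itself:", error)
    else:
        # Question 2: does the server accept my request?
        # (A 4xx status points at my request, a 5xx at their side.)
        print("Status code:", resp.status_code)

        # Question 3: does the response have the shape I assumed it has?
        data = resp.json()
        print("Contains the 'items' field I expected:", "items" in data)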


Bias: Sunk Meaning

We all know the concept of "sunk cost", usually in the context of money, time, effort or some other limited resource. I'd like to discuss a related idea which could be called "sunk meaning".

Imagine a cow is slaughtered because somebody wants to eat its meat. For whatever reason, that meat turns out to be unusable and cannot be consumed. The only thing that remains which could be utilized in any way is the cow's skin.

You are now called in as the expert to decide whether or not to skin the cow in order to produce some leather, and you know with certainty that doing so would cost exactly X resources (including time, money, material, etc.). You also know there's an innovative new method to synthesize perfect leather, indistinguishable from animal leather but made without using an animal, which also costs exactly X resources. You also happen to own a waste dematerializer, so getting rid of the cow's remains is not an issue whichever action you choose.
You have these two options, each exactly equally expensive and with exactly the same outcome. Which option would you prefer, if any?

I'd assume more than half of all people would feel like using the cow is the reasonable option here, for some reason along the lines of "otherwise the cow would have died for nothing". I recently witnessed a member of the rationalist community use that exact line of reasoning.

To this I counter the following: causality never moves back in time.

A present action can have no causal effect on something that happened in the past. Using pieces of the cow today does not affect the death of the cow in any real way. Saying "otherwise the cow would have died for nothing" says nothing about the cow or its death, only about our current personal interpretation. The thing that may or may not improve if we use the cow instead of the synthetic method has nothing to do with what the argument states; it has everything to do with how we personally feel.
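One way to make this vivid is to write the decision down as a purely forward-looking comparison, in which nothing about the cow's past appears as an input. The sketch below is my own illustration and the numbers are of course made up.

    # Sketch: the leather decision as a forward-looking comparison.
    # The cow's death never enters the calculation - a past event cannot
    # be changed by either option, so it cannot break the tie.
    # All numbers are made up for illustration.

    def preferred_options(options):
        # Pick whichever option(s) maximize value minus cost;
        # more than one result means the calculation is indifferent.
        best = max(o["value"] - o["cost"] for o in options.values())
        return [name for name, o in options.items()
                if o["value"] - o["cost"] == best]

    X = 100  # "exactly X resources", whatever X happens to be

    options = {
        "skin the cow":       {"cost": X, "value": 500},  # one batch of leather
        "synthesize leather": {"cost": X, "value": 500},  # an indistinguishable batch
    }

    print(preferred_options(options))  # both options tie; any preference comes from elsewhere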

The thing I'm hinting at is that every time we feel like something gives meaning to the past, we should be aware that this is 100% imaginary. It may certainly affect how people feel and behave, and as such it has to be taken seriously, but the same is true for sunk cost, and we should at the very least be honest about that when communicating the issue. Falling for sunk meaning is just as (ir)rational as falling for sunk cost.
We're not using the cow's skin because it gives meaning to the cow's death or because it makes it less of a waste. We're using the cow's skin to feel better.

4 comments


comment by alkjash · 2019-09-10T20:46:17.817Z · LW(p) · GW(p)

I just stumbled back into LessWrong after many months, very pleased to see that you completed this journey with such seriousness! Looking forward to your thoughts and comments in the postmortem post.

Re: Sunk Meaning, immediately after reading this, my brain sent strong signals that *something else is going on here* with regard to making the cow's death mean something, and that there's some Chesterton's fence that you shouldn't break. I think the error you're making is basically the same error that causal decision theorists make by two-boxing in Newcomb's problem. You do not get to choose whether or not to honor a dead cow in this specific instance. You only get to choose (for all time) whether or not your source code implements an algorithm which honors the dead. Being able to do this is really really useful for all sorts of Newcomblike situations that occur in real life.

comment by Shmi (shminux) · 2019-09-04T01:34:24.584Z · LW(p) · GW(p)

Trying to understand your points... Possibly incorrectly.

Technique: Quantum Walk: set a stretch goal and work backwards from it.

Ask the Oracle: seems like you suggest working on deconfusion.

Bias: Sunk Meaning: Meaning is in the map.

Replies from: silentbob
comment by silentbob · 2019-09-05T11:17:56.028Z · LW(p) · GW(p)

Quantum Walk: That's pretty much it.

Oracle: Possibly; I haven't gotten around to reading it all so far. As far as I can tell from just skimming, I guess a difference may be that the term deconfusion is used with regard to a domain where people are at risk of thinking they understand and aren't aware of any remaining confusion. I was referring more to situations where the presence of confusion is clear, but one struggles to identify a strategy to reduce it. In that case it may be helpful to focus first on the origin of one's own confusion, as opposed to the thing one is trying to understand.

Sunk Meaning: Yes, plus there may be times when we are talking about meaning / interpretation without realizing it, falsely assuming we're referring to actual properties of the real world. In the above example, people may feel like "we should use the cow's skin as otherwise it is wasted" is a real argument, that reality would in some way be better if we acted that way, because "wasting things = bad". That's a (usually useful, but still at times flawed) heuristic, though. I wonder if there are more ways in which we intuitively think we're talking about properties of the real world, when in actuality we're only referring to states of our own mind.

comment by assignvaluetothisb · 2019-09-06T11:39:12.356Z · LW(p) · GW(p)

You have these two options, each exactly equally expensive and with exactly the same outcome. Which option would you prefer, if any?

This is obviously untrue. It's the same outcome for one entity, but an entirely different outcome for a different entity.

And replace the word 'expensive' with 'cost', since we are pretending (funny) that money is something that is in abundance and shareholders will be making more regardless.