My Failed Situation/Action Belief System

post by MrHen · 2010-02-02T18:56:32.011Z · LW · GW · Legacy · 36 comments

Note: This is a description pieced together many, many years after my younger self subconsciously created it. This is part of my explanation of how I ended up me. I highly doubt all of this was as neatly defined as I present it to you here. Just know: The me in this post is me between the age of self-awareness and 17 years old. I am currently 25.

An action-based belief system asks what to do when given a specific scenario. The input is Perceived Reality and the output is an Action. Most of my old belief system was built out of such beliefs. A quick example: if the stop light is red, stop before the intersection.

These beliefs form a network of really complicated chains of conditionals. They can keep getting bigger as I find more clauses to throw into the system, and each node can be broken into more specific instructions if need be.
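
To make the chain concrete, here is a minimal sketch in Python. Every situation, action, and refinement in it is invented for this example; the real system was never anything so explicit.

    # A situation/action matrix: perceived situation in, action out.
    RULES = {
        "the stop light is red": "stop before the intersection",
        "the stop light is yellow": "slow down",
        "the stop light is green": "keep driving",
    }

    # Each node can be broken into more specific instructions:
    REFINEMENTS = {
        "stop before the intersection": [
            "ease off the accelerator",
            "brake smoothly",
            "stop behind the line",
        ],
    }

    def act(situation):
        """Look up the action for a perceived situation, refining if needed."""
        action = RULES.get(situation)
        if action is None:
            return None  # unfamiliar situation; abstraction handles this later
        return REFINEMENTS.get(action, [action])

Note that nothing in this structure ever asks whether a rule is true; adding a clause is just adding another key.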

I did not sit down and decide that this was an optimal way to build a belief system. It just happened. My current best guess is that I spent most of my childhood trying to optimize my behavior to match my environment. And I did a fantastic job: I didn't get in trouble; didn't do drugs, smoke, drink, have sex, disobey my parents, or blaspheme God. Put a situation into my matrix and an action came out.

The underlying motivation was a set of things I liked and things I didn't like. The belief system adapted over time to accommodate enough scenarios to provide me with a relatively stress free childhood. (I do not take all the credit for that; my parents are great.)

The next level of the system was the ability to abstract scenarios so I could apply the matrix to scenarios I had never encountered. New intersections would not break the system. I could traverse unfamiliar environments and learn how to act quickly. The more I learned, the quicker I learned. It was great!

The problem with this belief system is that it has nothing to do with reality. Essentially, this system is the universal extrapolation of guessing the teacher's password. If a problem was presented, I knew the answer. Because I could abstract these question/answer pairs, I knew all of the answers. "Reality" was a keyword that dropped into a particular area of the matrix. An action would appear with the right password and I would get my gold star.

That being said, this was a powerful system. It could simulate passwords for teachers I hadn't even met. I would allow myself to daydream about hypothetical teachers asking questions that I expected around the corner. This implies that my predictor beliefs were driving the whole engine. The Action beliefs were telling me how to act, but the Predictors were creating the actual situation/action matrix. Abstracting and extending my experiences relied on my ability to see the future accurately. When I was surprised, I would run simulations until I found something that worked within my given experiences.

This worked wonders during childhood, but it left me with an entire belief system built out of correctly anticipating what other people expected of me. Oops. The day I pondered reality, the whole system came crashing down. But that is a story for another day.

36 comments

comment by thomblake · 2010-02-02T20:34:00.586Z · LW(p) · GW(p)

In form this post seems similar to many of Eliezer's backwards-looking posts. However, it seems to be missing an interesting insight, or at least something that makes it relevant to the reader.

Answer the question: "So what?"

Replies from: wedrifid, CarlShulman, MrHen
comment by wedrifid · 2010-02-03T01:30:45.430Z · LW(p) · GW(p)

Answer the question: "So what?"

He is going through an ad-hoc ritualistic process of conversion to a new psychological tribe. He is publicly shaming his past self for its folly, which serves as a mild form of hazing. This is the most effective way of changing identity-related beliefs that I am aware of.

comment by CarlShulman · 2010-02-03T03:27:06.822Z · LW(p) · GW(p)

Agreed, this post doesn't add value on its own. If this is going somewhere, that destination should have been combined with this.

comment by MrHen · 2010-02-03T04:12:22.789Z · LW(p) · GW(p)

The "so what" is that this style of belief system isn't hinged on Reality. It basically follows the popular beliefs of the environment around. It can detect inconsistencies in popular beliefs but it really doesn't have a way to get more True.

I suppose I didn't actually say this... and I suppose the reason is that I didn't want to list all of the details needed to fill in the gaps between what I have been reading in the sequences and the "Ah ha!" moment that allowed me to see the core problem in a Situation/Action belief system. This is pure laziness on my part. Sorry (but not enough to fix it).

Replies from: Blueberry
comment by Blueberry · 2010-02-07T00:07:04.725Z · LW(p) · GW(p)

Sorry (but not enough to fix it).

Would you please fix it? Listing at least some of the details filling in the gaps would be very helpful, more so than this post by itself.

Replies from: MrHen
comment by MrHen · 2010-02-07T14:27:45.838Z · LW(p) · GW(p)

Sure. It will take a while though.

comment by LauraABJ · 2010-02-02T21:07:02.940Z · LW(p) · GW(p)

Having a functional model of what will be approved by other people is very useful. I would hardly say that it "has nothing to do with reality." I think much of the trauma of my own childhood would have been completely avoided if I had been able to pull that off. Alas! Pity my 9-year-old self, trying to convince the other children they were wrong.

Replies from: MrHen
comment by MrHen · 2010-02-03T04:19:41.882Z · LW(p) · GW(p)

Sure, the functional model of predicting other people's approval is great. The problem with what I did was organizing all of the beliefs by situation. These things aren't tied to Reality; they are tied to perceptions. It would be the equivalent of claiming your belief system should be a Map of other people's Maps of the Territory. When none of the people around you are terribly concerned about mapping the Territory, your map won't be either.

Building a worldview based on other people's approval results in a worldview with all of the problems of those people. It makes a child's life easier because a child doesn't need to understand reality. At least, not the way a non-child does.

comment by byrnema · 2010-02-03T03:50:01.109Z · LW(p) · GW(p)

What's especially odd about this post is that MrHen hasn't replied yet. He always replies to all his comments, and has written comments elsewhere since posting this.

Do you think this is some kind of experiment? If so, what's the experiment?

Replies from: MrHen, wedrifid
comment by MrHen · 2010-02-03T04:07:00.631Z · LW(p) · GW(p)

I bided my time on this topic for two reasons:

First, I have noticed that my posts go up in karma very quickly and then drop slowly. This post spiked at +7 and is now at +3. This happened on my last post as well, but there the drop coincided with me commenting on the post. I decided to wait longer this time and see if the same pattern happened. It did. My theory is that I generally post in the afternoon and the afternoon readers vote me up; the evening readers then come through and vote me down. I personally wouldn't consider this an experiment, but I guess I cannot deny that it is strange behavior.

Second, I spent significantly less time editing and tuning this post. I have very little invested in this post and don't really care what happens to it. I didn't know how to start this topic or say what I wanted to say without writing at least twice what I did here. As such, I didn't really expect much in the comments. I am about to go through and reply to everything but I doubt the conversation will be that interesting.

FYI, I never posted a comment in Easy Predictor Tests.

Replies from: Liron
comment by Liron · 2010-02-03T18:15:39.054Z · LW(p) · GW(p)

I have noticed that my posts go up in karma very quickly and then drop slowly.

I see you've located the situation in your matrix.

comment by wedrifid · 2010-02-03T03:56:35.067Z · LW(p) · GW(p)

I will make the prediction "if this post is some kind of experiment then the rating of this thread will reduce by at least 4 points within one day of the moment MrHen announces it".

Replies from: MrHen
comment by MrHen · 2010-02-03T04:15:44.273Z · LW(p) · GW(p)

Eh, no conspiracies here. It's just a lazy post.

EDIT: For the record, when I started commenting the post was at +3.

comment by Blueberry · 2010-02-03T00:38:57.314Z · LW(p) · GW(p)

I'm also confused and possibly missing the point. You've described the development of an apparently useful, functional algorithm for how to act in the world. I don't see the problem with such a system; don't we all have one?

I also don't see what this has to do with beliefs. This is about how to act.

Replies from: MrHen
comment by MrHen · 2010-02-03T04:27:30.565Z · LW(p) · GW(p)

I also don't see what this has to do with beliefs. This is about how to act.

The system was defining situation/action pairs as beliefs. As in, "Given X, I should Y." "Should," in this case, holds all of the weight of believing in gravity. This wordage is great, but when you start applying the pattern to mundane tasks such as "I should pour milk after cereal," you can spin off into a world that has nothing to do with Reality. "I should blork" is just as valid, because nothing requires that these beliefs satisfy some code of "proper beliefs." If I can convince myself that blorking is going to make me happy, I will firmly believe that I should blork.

This idea of beliefs flies completely against the concepts promoted in The Simple Truth.

Replies from: LauraABJ, AndyWood
comment by LauraABJ · 2010-02-03T15:51:19.832Z · LW(p) · GW(p)

I think an important point missing from your post is that this is how many (most?) people model the world. 'Causality' doesn't necessarily enter into most people's computation of true and false. It would be nice to see this idea expanded with examples of how other people are using this model, why it gives them the opinions (output) that it does, and how we can begin to approach reasoning with people who model the world in this way.

Replies from: MrHen
comment by MrHen · 2010-02-03T16:05:33.234Z · LW(p) · GW(p)

I think an important point missing from your post is that this is how many (most?) people model the world.

Why do you think this? I am not disagreeing, I am just wondering if you had any information I don't. :)

Replies from: LauraABJ
comment by LauraABJ · 2010-02-03T17:05:25.902Z · LW(p) · GW(p)

The model you present seems to explain a lot of human behavior, though I admit it might just be broad enough to explain anything (which is why I was interested to see it applied and tested). There have been comments referencing the idea that many people don't reason or think but just do, and the world appears magical to them. Your model does seem to explain how these people can get by in the world without much need for thinking: just green-go, red-stop. If you really just meant to model yourself, that is fine, but not as interesting to me as the more general idea.

Replies from: AdeleneDawner, MrHen
comment by AdeleneDawner · 2010-02-03T21:40:15.838Z · LW(p) · GW(p)

The model you present seems to explain a lot of human behavior.

I agree. This seems to give much more accurate predictions of most peoples' actual actions than modeling them as consequentialists or deontologists. (The latter is close to this, but fails to account for how people fail to generalize rules across contexts.)

comment by MrHen · 2010-02-03T22:51:36.608Z · LW(p) · GW(p)

The model you present seems to explain a lot of human behavior, though I admit it might just be broad enough to explain anything (which is why I was interested to see it applied and tested).

This model works extremely well for predicting other people's actions. Your point about it being broad is true. People probably shortcut decisions into behavior patterns and habits after a while. I doubt a large number of them do it consciously.

If you really just meant to model yourself, that is fine, but not as interesting to me as the more general idea.

I think the model is applicable to more than me. The underlying point was that some people (such as myself) use this as their belief system. I don't know how often people do that or if it is common.

In other words, this model can explain and predict people's actions well but I don't know how often it ends up absorbing the role of those people's belief system.

comment by AndyWood · 2010-02-03T16:25:14.721Z · LW(p) · GW(p)

I agree with Blueberry. This reads like a reflective account of how I (and many others, I'd bet) have always learned and navigated the regularities in my life. Why would you have fused this kind of procedural knowledge with belief? Did you focus on it so hard that you forgot to think about truth? This is the part where I feel like I'm missing something. In my case, I developed efficient action systems in order to free up mental cycles, precisely so that I would have as many free cycles as possible to think about computer programming, reality, and truth.

Replies from: MrHen
comment by MrHen · 2010-02-03T16:57:31.059Z · LW(p) · GW(p)

Did you focus on it so hard that you forgot to think about truth?

No. The problem is that when I thought about truth, an action popped out. Truth only mattered when a scenario called for The Truth. Then I entered the matrix looking for actions and passwords relating to The Truth. The Truth was a statement valid only relative to a scenario or question. The idea is that "the sky is blue" was true in the scenario of "being asked for the color of the sky."

This was abstracted to allow the color that I saw in the sky to apply to other objects I saw in life. I could look at the sky, see the color, associate the Action "Label the color blue" with the Situation "I need to label the color of the sky" and reuse the association for the Situation "I need to label the color of the ocean."

This has nothing to do with Reality. If I grew up in a world where the sky was never visible I would still be happy as a clam calling the sky blue (or green) because this was the correct action. If you phrased the question in terms of a prediction ("What do you predict for the color of the sky?") it would be internally translated into the Situation "I need a prediction for the color of the sky." I would look up the right answer relative to your expectations and return the result. The answer would have nothing to do with me predicting the color of the sky. It had everything to do with my expectation of you predicting the color of the sky.

Replies from: AndyWood
comment by AndyWood · 2010-02-03T17:01:34.833Z · LW(p) · GW(p)

Would you say this behavior was primarily driven by other-approval-seeking (as opposed to achievement for achievement's sake)?

Replies from: MrHen
comment by MrHen · 2010-02-03T22:57:34.101Z · LW(p) · GW(p)

I don't know. It is hard for me to remember the driving reasons why. I don't think approval was really the target so much as low stress was. I would rather be left alone than praised a whole bunch.

"Achievement" really doesn't seem to describe my younger self well either. "Achievement" is an action without a matching scenario. As a description, it would be too vague to be of much use. Specifically, the action "Achieve a goal" is impossible to perform without more information.

comment by cousin_it · 2010-02-02T20:51:43.818Z · LW(p) · GW(p)

I like this idea of the situation/action matrix. You seem to be complaining that it doesn't support goal-directed behavior - or am I reading too much into your post?

Replies from: MrHen, wedrifid
comment by MrHen · 2010-02-03T04:22:26.276Z · LW(p) · GW(p)

The matrix is useful, but its entries aren't valid predictors. You cannot test them; they only go one way. The predictors are needed to adjust the matrix, which raises the question, "Why the matrix?" If the answer is "useful," that's fine, but "useful" doesn't make the best system of beliefs.

comment by wedrifid · 2010-02-03T01:24:47.325Z · LW(p) · GW(p)

I think he is complaining that it doesn't support epistemically rational beliefs which are assumed to be intrinsically valued for ideological reasons. "Doesn't support goal-directed behavior" would seem to be the answer to the next question, "Why should I, for practical purposes, care whether my beliefs match reality?"

comment by John_Maxwell (John_Maxwell_IV) · 2010-02-03T00:03:35.345Z · LW(p) · GW(p)

The next level of the system was the ability to abstract scenarios so I could apply the matrix to scenarios I had never encountered. New intersections would not break the system. I could traverse unfamiliar environments and learn how to act quickly. The more I learned, the quicker I learned. It was great!

Could you give an example of an abstract matrix?

It sounds like you were essentially codifying common sense as it applied to situations you encountered frequently, which makes sense from a willpower-conservation point of view because doing things you've planned in advance requires less willpower.

Replies from: MrHen
comment by MrHen · 2010-02-03T04:29:32.916Z · LW(p) · GW(p)

Could you give an example of an abstract matrix?

An abstract scenario would be the difference between "my teacher is yelling at me" and "someone is yelling at me." Further abstractions would include "someone is upset with me" and "someone has an unfulfilled expectation of me."
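
One way to picture this is as a fallback lookup that climbs the abstraction ladder until a known rule matches. A minimal sketch in Python, with the ladder taken from the examples above and the rule itself invented for illustration:

    # Each scenario points to a more abstract version of itself.
    ABSTRACTIONS = {
        "my teacher is yelling at me": "someone is yelling at me",
        "someone is yelling at me": "someone is upset with me",
        "someone is upset with me": "someone has an unfulfilled expectation of me",
    }

    # A remembered action at the most abstract level (invented for this sketch).
    RULES = {
        "someone has an unfulfilled expectation of me": "find and meet the expectation",
    }

    def act(situation):
        # Climb the ladder until some known rule matches.
        while situation is not None:
            if situation in RULES:
                return RULES[situation]
            situation = ABSTRACTIONS.get(situation)
        return None  # no rule at any level: a genuine surprise

The further the lookup has to climb, the more abstract the response gets, which is why new scenarios rarely break the system.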

It sounds like you were essentially codifying common sense as it applied to situations you encountered frequently, which makes sense from a willpower-conservation point of view because doing things you've planned in advance requires less willpower.

Sure. It doesn't make sense as a belief system.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-02-03T04:52:04.057Z · LW(p) · GW(p)

By abstract matrix I think I meant a matrix of abstract scenarios. Also, I think maybe "decision tree", "flowchart", or "rule set" is a more commonly used term for what you describe than "matrix"; am I understanding you correctly?

How did your matrix become a belief system?

Replies from: MrHen
comment by MrHen · 2010-02-03T05:22:32.136Z · LW(p) · GW(p)

By abstract matrix I think I meant a matrix of abstract scenarios.

Ohh...

Also, I think maybe "decision tree", "flowchart", or "rule set" is a more commonly used term for what you describe than "matrix"; am I understanding you correctly?

Yeah, those terms work. "Matrix" fits better with how I visualize it in my head. I think "linked list" or "web" would be the most accurate. The problem I have with flowchart is that a flowchart is too organized. The actual process for taking a scenario and providing an action is much more organic. When a scenario presents itself, I run the high-level, abstract scenario through the system and respond appropriately. If the scenario doesn't change or gets worse, I need more detail and drop into a different layer. While you could describe the whole thing as a flowchart, it probably wouldn't be the most efficient translation of what I am talking about.

In addition, an actual real-life event is going to have dozens of active scenarios. I need to be able to access the matrix in multiple places at once. The process that actually acts is separate from this system and merely accesses it.
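
A rough sketch of that "web" reading, assuming nothing beyond what is described here (all names invented): scenario nodes carry remembered actions, soft relations link them, and the lookup can start from several active scenarios at once.

    from collections import defaultdict

    actions = {}                  # scenario -> remembered action
    relations = defaultdict(set)  # scenario -> softly related scenarios

    def link(a, b):
        """Add a soft, two-way relation between scenarios."""
        relations[a].add(b)
        relations[b].add(a)

    def consult(active_scenarios):
        """Read the web from several places at once. The process that
        actually acts is separate and merely uses these results."""
        results = {}
        for scenario in active_scenarios:
            results[scenario] = actions.get(scenario)
            for neighbor in relations[scenario]:
                results.setdefault(neighbor, actions.get(neighbor))
        return results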

How did your matrix become a belief system?

I don't know. My guess is that it started purely as a way to remember feedback loops that increased happiness and reduced stress. As I grew older and was taught about beliefs, ethics, decisions, and responsibilities, I just translated those terms into what I was already using to govern my actions. When it came time to organize my beliefs and thoughts, I started categorizing things by scenario, keeping track of the actions, and drawing relationships between the various parts.

If someone asked me, "What do you believe about gravity?" I would look up scenarios and actions labeled "Gravity". This would return facts about gravity in the form of answers and there would be soft relations in the matrix to scenarios dealing with falling, balance, and various areas of physics.

These relationships would be another way to describe what I was calling an abstract scenario. The relationships themselves could be abstracted with more relationships, labels, and commentary.

comment by MrHen · 2010-02-08T20:14:15.494Z · LW(p) · GW(p)

I thought of an example that may help clarify what I was trying to say. Someone who is completely blind cannot recognize the difference between a $20 bill and a $5 bill. They will fold the bills in a pattern to remind themselves which bill is which, but they need someone to tell them which is which in the first place. It is impossible for them to use their tactile senses to understand Reality.

Likewise, if someone who is blind knows that the grass is green, they can tell you what color the grass is. They will never, ever be able to verify it in any other way than just listening to people talk about the grass being green. They can only repeat what people expect from them and call that the right answer. It is the right answer, but it isn't a belief attached to Reality.

The Situation/Action belief system is blind in a similar way. Once something tells it what Reality is it can regurgitate the answer forever. It can take a $20 bill and remember it. But if the person is lied to, a $5 bill will be folded as if it were a $20 bill. There is no way to verify the belief through Reality. The connection just isn't being made. If there is an entire society of people who think the grass is blue, the situation/action belief system will think the grass is blue and never once wonder what went wrong. It keeps getting the passwords right; what more is there to want?

comment by Zachary_Kurtz · 2010-02-02T19:54:15.231Z · LW(p) · GW(p)

So do you think there's a human system which includes a closer approximation of reality? (whatever that means)

Replies from: byrnema, MrHen
comment by byrnema · 2010-02-02T21:42:21.925Z · LW(p) · GW(p)

I think my question is related to the above:

what's wrong with stopping at red lights? I do it, and it keeps me alive.

(It literally just occurred to me that you might have been using 'red lights' as a metaphor for reactions from people you don't like. So that you mean that you've learned to stop or change direction if you get these signals. Was this what you meant?)

Replies from: MrHen
comment by MrHen · 2010-02-03T04:35:46.873Z · LW(p) · GW(p)

what's wrong with stopping at red lights? I do it, and it keeps me alive.

There is nothing wrong with stopping at red lights. There is something wrong with believing you should stop at red lights simply because you think you should. The belief should be anchored somewhere.

A major clarification that may help: the matrix does not provide a reason to act in any given scenario. It just remembers how to act in a given scenario. There is no "updating" in the sense of a belief being accurate or inaccurate. A belief can change or grow, but it isn't correct or wrong. Yet even though these beliefs are neither accurate, inaccurate, right, nor wrong, the system still considers them beliefs.

(It literally just occurred to me that you might have been using 'red lights' as a metaphor for reactions from people you don't like. So that you mean that you've learned to stop or change direction if you get these signals. Was this what you meant?)

No. I meant traffic signals.

comment by MrHen · 2010-02-03T04:30:17.503Z · LW(p) · GW(p)

What do you mean by human system? I think The Simple Truth provides a much better system for beliefs.