Conjecture on Addiction to Meta-level Solutions

post by Gram_Stone · 2016-03-18T04:13:06.877Z · LW · GW · Legacy · 10 comments

Related: Meta Addiction

Says Eliezer in LessWrong Q&A (16/30):

In one sense, a whole chunk of LessWrong is more or less my meta-thinking skills.

When we become more rational, it's usually because we invent a new cognitive rule that:

  1. Explains why certain beliefs and actions lead to winning in a set of previously observed situations that all share some property; and,
  2. Leads to winning in some, if not all, heretofore unforeseen situations that also share this property.

When you learn the general rule of not-arguing-over-definitions, you gain a general understanding of why humans on a desert island will, if necessary, draw lines in the sand to communicate, rather than, say, futilely drawing lines naively intended to communicate that they are dissatisfied with their companions' line-drawing methods. You will also foresee future instances of the general failure mode.

You might say that one possible way of stating the problem of human rationality is this: obtain a complete understanding of the algorithm, implicit in the physical structure of our brains, that allows us to generate such new and improved rules.

Because there is some such algorithm. Your new cognitive rules are output, and the question is: "What algorithm generates them?" If you explicitly understood that algorithm, then many, if not all, other insights about human rationality would simply fall out of it as consequences.

You know, there exists a science of metacognition that has scarcely been mentioned in seven years of LessWrong.

And when it was mentioned, it was almost always in reference to the relationship between meditation and metacognition. It seems like there should be more to say than just that.

But enough about that, let's get back to the far more interesting matter of the rationalist movement's addiction to meta-level solutions.

Abstract of Spada, Zandvoort, & Wells (2006):

The present study examined metacognitions in problem drinkers and a community sample. A sample of 60 problem drinkers and 84 individuals from the general population were compared on the following measures: Hospital Anxiety and Depression Scale, Meta-Cognitions Questionnaire 30, Quantity Frequency Scale and Alcohol Use Disorders Identification Test. Mann–Whitney U-tests, logistic regression analysis and hierarchical regression analyses were performed on the data. Mann–Whitney U-tests revealed that metacognitions, anxiety, depression and drinking scores were significantly higher for problem drinkers than for the general population. The logistic regression analysis indicated that beliefs about cognitive confidence and beliefs about the need to control thoughts were independent predictors of a classification as a problem drinker over and above negative emotions. Finally, hierarchical regression analyses on the combined samples showed that beliefs about cognitive confidence, and beliefs about the need to control thoughts, independently predicted both alcohol use and problem drinking scores. These results add to the argument that metacognitive theory is relevant in understanding excessive and problematic alcohol use.
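
To make the study's design concrete, here is a minimal sketch of the kind of two-step logistic regression the abstract describes. This is simulated data, not the study's code; every variable name and coefficient below is made up:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 144  # 60 problem drinkers + 84 community participants, as in the study

# Hypothetical scores: negative emotions (anxiety/depression) and
# metacognitive beliefs (cognitive confidence, need to control thoughts).
negative_emotion = rng.normal(size=n)
metacognition = 0.5 * negative_emotion + rng.normal(size=n)
logit = -0.5 + 0.6 * negative_emotion + 0.9 * metacognition
problem_drinker = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Step 1: negative emotions alone.
base = sm.Logit(problem_drinker, sm.add_constant(negative_emotion)).fit(disp=0)

# Step 2: add metacognition. A significant coefficient here, with negative
# emotions already in the model, is what "over and above negative emotions"
# means in the abstract.
X = sm.add_constant(np.column_stack([negative_emotion, metacognition]))
full = sm.Logit(problem_drinker, X).fit(disp=0)
print(base.params, full.params)
```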

It might be that problem drinkers aren't avoiding punishment signals by drinking, as one might initially think, and that they don't start and continue drinking because they're anxious. It might be that they are rewarded for using a strategy that allows them to regulate their cognition. They return to alcohol over and over because the need for a cognitive self-regulation solution led them to try drinking in the first place, and because, in a limited and hardly sustainable sense, it has consistently solved that problem before.

Problem drinkers stop being problem drinkers when they find a better reward; i.e., when they find a more rewarding cognitive self-regulation solution than drinking. This is rare because it takes time to obtain the feedback necessary for anything other than drinking to become the more rewarding solution, and because it's more immediately rewarding to directly maximize the reward signal (find ways to keep drinking rather than stop drinking) than to maximize the external thing in the world that the reward signal correlates with (cognitive self-regulation).
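
Here's a toy sketch of that last claim (my framing, with arbitrary numbers, not anything from the paper): a policy that directly maximizes the immediate reward signal dominates over short horizons, even though a slow-feedback policy aimed at the true objective wins eventually:

```python
def reward(action, skill):
    """Toy reward: 'drink' pays off immediately and reliably; 'practice'
    (a real self-regulation skill) pays off only after enough feedback
    has accumulated. All numbers are arbitrary."""
    if action == "drink":
        return 1.0
    return 2.0 if skill > 10 else 0.1

for horizon in (5, 50):
    for policy in ("drink", "practice"):
        skill, total = 0, 0.0
        for _ in range(horizon):
            total += reward(policy, skill)
            if policy == "practice":
                skill += 1  # slow feedback on the true objective
        print(f"horizon={horizon:2d}  {policy:8s}  total reward={total:.1f}")
# Over 5 steps drinking wins 5.0 to 0.5; over 50 steps practice wins
# 79.1 to 50.0. A learner driven by early feedback never gets there.
```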

Going meta works sometimes, and probably more often than you think, considering that you've been taught that meta is dangerous. And when it works and you know it works, it's highly rewarding.

I don't have evidence, but I nevertheless predict that intelligent humans are more likely to develop high metacognitive ability independently; that is, without being primed into doing so.

You'd imagine then that many LessWrong users would have started being rewarded very early in their lives for choosing meta-level solutions over object-level ones. How would you even make your way across the Internet all the way to LessWrong unless you were already far along the path of looking for meta-solutions?

(One way is that you happened upon an object-level solution that was mentioned here. But you know, not all LessWrong users are addicts.)

I also predict that the sort of process described in the abstract above is the same thing that separates rationalists who stave off their addiction to meta-solutions from rationalists who relapse or never get unhooked in the first place.

The opposite error is overvaluing object-level solutions. It's also possible to straddle the line between the two types of solutions in the wrong way; otherwise there would be an old LessWrong post with the same content as this one.

10 comments

Comments sorted by top scores.

comment by RomeoStevens · 2016-03-19T00:28:24.583Z · LW(p) · GW(p)

Tom Chi presented on some related topics at EAG, part of which I like referring to as the Tom Chi question: Can this process, in principle, lead to a robust solution to the given problem domain? So kind of a handle for the concept of checking your heuristic-problem mapping.

Edit: I remember now that this is commonly referred to as the Emperor of China's Nose problem.

comment by Elo · 2016-03-18T05:12:02.393Z · LW(p) · GW(p)

When we become more rational, it's usually because we invent a new cognitive rule that:

  1. Explains why certain beliefs and actions lead to winning in a set of previously observed situations that all share some property; and,
  2. Leads to winning in some, if not all, heretofore unforeseen situations that also share this property.

When you learn the general rule of not-arguing-over-definitions, then, in hindsight, you understand in a very general sense why humans on a desert island will draw lines in the sand to communicate if necessary instead of, with futility, mutually drawing lines that are naively intended to communicate the fact that they are dissatisfied with their respective companions' line-drawing methods. You will foresee future instances of the general failure mode as well.

When we become more rational, it's usually because we invent a new cognitive rule that:

  1. Explains why certain beliefs and actions lead to winning in a set of previously observed situations that all share some property; and,
  2. Leads to winning in some, if not all, unforeseen situations that also share this property.

When you learn the rule of not-arguing-over-definitions, in hindsight you understand - in a sense - why humans on a desert island will draw lines in the sand to communicate if necessary, instead of, with futility, mutually drawing lines that are intended to communicate the fact that they are dissatisfied with their respective companions' line-drawing methods. You will expect future instances of the failure mode as well.


You might say that one possible problem statement of solving human rationality is obtaining a complete understanding of the algorithm implicit in the physical structure of our brains that allows us to generate such new and improved rules.

Because there is some algorithm. Your new cognitive rules are output, and the question is what algorithm generates them. If you explicitly understood that algorithm, then all other insights about human rationality would simply fall out of it as consequences.

You might say that one problem of solving human rationality is obtaining an understanding of the algorithm in the physical structure of our brains that allows us to generate new rules.

There is some algorithm. Your new cognitive rules are output, and the question is: "What algorithm generates them?" If you explicitly understood that algorithm, then all other insights about human rationality would simply fall out of it as consequences.


Trying to modify to make this more readable. Small changes, but your original is quite hard to parse.

Replies from: Elo, Gram_Stone
comment by Elo · 2016-03-18T05:13:40.697Z · LW(p) · GW(p)

Meta: not sure why I find this hard to read when your past post(s) read fine. If something changed in this iteration, I would encourage you to go back to the more readable method. Keep up the writing!

comment by Gram_Stone · 2016-03-18T05:19:30.220Z · LW(p) · GW(p)

You might say that one problem of solving human rationality is obtaining an understanding of the algorithm in the physical structure of our brains that allows us to generate new rules.

Well, that's terrifying. I didn't mean that that's a subproblem, I meant that that's one possible way of stating most, if not all, of the problem. Thank you for speaking up.

comment by Viliam · 2016-03-18T09:05:08.332Z · LW(p) · GW(p)

Editorial note: The introduction before the quote was quite confusing to me. I would prefer the article to start with the quote about alcoholics, so that I know what the main topic is. Also, perhaps a summary at the end?

This is how I understood the article:

Research suggests that alcoholics use alcohol as a tool for regulating their emotions. This could be of interest to LW users, because it is a part of metacognition, which we consider important, but probably wouldn't look at from this specific angle. Could we also somehow be victims of our metacognitive strategies, successful (rewarded) in the short term but harmful in the long term? We may be addicted to "going meta", which results in procrastination, because we habitually escape from the object level to the meta level, just like the alcoholics escape from their daily problems to alcohol. The solution for the alcoholics (easier said than done) is to find a better way to regulate their emotions. Those of us who are addicted to "going meta", i.e. those of us who procrastinate a lot, probably need something similar.

It reminds me of a friend (a LW lurker) who said: After you have gained all the necessary insights and formulated the best strategy, the remaining component is the ability to shut up and suffer. You can (and should) reduce the unpleasant work, but you often cannot reduce it to zero, and you have to accept it and endure the necessary suffering, otherwise you never get things done.

Replies from: Gram_Stone
comment by Gram_Stone · 2016-03-18T14:14:36.827Z · LW(p) · GW(p)

This is mostly an excellent description of the article as I meant to convey it. But I also believe that I've failed to make myself entirely clear.

Consider these particular sentences:

The logistic regression analysis indicated that beliefs about cognitive confidence and beliefs about the need to control thoughts were independent predictors of a classification as a problem drinker over and above negative emotions. Finally, hierarchical regression analyses on the combined samples showed that beliefs about cognitive confidence, and beliefs about the need to control thoughts, independently predicted both alcohol use and problem drinking scores.

Alcohol can certainly affect your emotions directly, but I'm saying that it does not look like this is primarily the reason that problem drinkers become problem drinkers. A failure to regulate one's beliefs about one's own beliefs is the proximate issue, and it is merely by consequence that this results in alleviation of anxiety. The idea is that the causal graph may not look like 'Drink alcohol --> Be less anxious', but rather more like 'Drink alcohol --> By some currently unknown mechanism, get better at controlling thoughts --> Become less anxious'. And it may be that there is something of a feedback mechanism, such that anxiety makes one less able to control thoughts, and thus more anxious, and thus less able to control thoughts, etc.
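
Here is a minimal numeric sketch of that hypothesized loop. The coefficients are entirely made up; it's only meant to show the runaway dynamics:

```python
# Hypothetical dynamics: anxiety erodes thought control, and poor
# thought control feeds anxiety. Coefficients are illustrative only.
anxiety, control = 0.5, 0.5
for t in range(8):
    control = max(0.0, control - 0.2 * anxiety)
    anxiety = min(1.0, anxiety + 0.3 * (1.0 - control))
    print(f"t={t}: anxiety={anxiety:.2f}, control={control:.2f}")
# Within a few steps anxiety saturates and control collapses: a
# self-reinforcing loop of exactly the shape described above.
```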

comment by SquirrelInHell · 2016-03-18T10:10:58.499Z · LW(p) · GW(p)

Some of what you are saying feels slightly off, but overall I understand very well what you are getting at, so I'm just going to give you credit and understand it in the best way I can.

When we become more rational, it's usually because we invent a new cognitive rule that:

  1. Explains why certain beliefs and actions lead to winning in a set of previously observed situations that all share some property; and,
  2. Leads to winning in some, if not all, heretofore unforeseen situations that also share this property.

This is different from my subjective experience. I started describing it here, but it grew long and I thought it was interesting enough to write a separate post: http://lesswrong.com/r/discussion/lw/ney/how_it_feels_to_improve_my_rationality/

comment by [deleted] · 2016-03-19T06:08:11.684Z · LW(p) · GW(p)

I am thinking about optimal value salvage from foregone opportunities. To illustrate the application: I recently turned down an interview offer for a poorly paying (<10k AUD annually) full-time job in India. The pay was the decisive factor in my decision. In the future, should I screen jobs by salary in advance to save on application time? What is my acceptable minimum salary for full-time work? Should I disclose my error to the employer, friends, and/or the public (e.g., by blogging? Commenting on existing material by others? Social media?)? And meta questions, like: is this task formulated at the optimal level of abstraction, and what is its relative importance? I have not systematically answered these questions. Writing this out gives me the feeling that the cumulative reward of systematically generated answers (which I expect will be highly uncertain and full of non-dominated options that will be hard to regress to my preferences) is dominated by the prospect of trusting my gut :) And I reckon it's more convenient and braggable to read up on the most proximate problem domain than to reinvent the wheel. Maybe I should read about options trading in finance...

Replies from: Viliam
comment by Viliam · 2016-03-20T10:01:05.814Z · LW(p) · GW(p)

I think that "addiction to the meta level" in your situation may manifest as trying to find the best method for dealing with the problem, and then even jumping to finding the best method for dealing with problems in general... and at the end, you have a lot of questions and maybe a new topic to study... when you should have a good, well-paying new job instead.

In the future should I screen jobs by salary in advance to save on application time?

If you know that you will refuse any job paying less than X, and if the information is easily available, you obviously should.

What is my acceptable minimum salary for full time work?

Start with "your previous salary + 50%". If at the following 10 interviews you get feedback that you are asking too much, adjust downwards.

But the important thing is to actually do the job interviews, at least once in a while, instead of just sitting at home and trying to develop the Perfect Algorithm for Job Choice.

comment by [deleted] · 2016-03-21T07:53:50.717Z · LW(p) · GW(p)

“Ultimately, it is the desire, not the desired, that we love.”

  • Friedrich Nietzsche