Controlling your inner control circuits

post by Kaj_Sotala · 2009-06-26T17:57:56.343Z · LW · GW · Legacy · 159 comments

On the topic of: Control theory

Yesterday, PJ Eby sent the subscribers of his mailing list a link to an article describing a control theory/mindhacking insight he'd had. With his permission, here's a summary of that article. I found it potentially life-changing. The article seeks to answer the question, "why is it that people often stumble upon great self-help techniques or productivity tips, find that they work great, and then after a short while the techniques either become ineffectual or the people just plain stop using them anyway?", but I found it to have far greater applicability than just that.

Richard Kennaway already mentioned the case of driving a car as an example where the human brain uses control systems, and Eby mentioned another: ask a friend to hold their arm out straight, and tell them that when you push down on their hand, they should lower their arm. And what you’ll generally find is that when you push down on their hand, the arm will spring back up before they lower it... and the harder you push down on the hand, the harder the arm will pop back up! That's because the control system in charge of maintaining the arm's position will try to hold the old position, until one consciously realizes that the arm has been pushed and changes the setting.

Control circuits aren't used just for guiding physical sequences of actions, they also regulate the workings of our mind. A few hours before typing out a previous version of this post, I was starting to feel restless because I hadn't accomplished any work that morning. This has often happened to me in the past - if, at some point during the day, I haven't yet gotten started on doing anything, I begin to feel anxious and restless. In other words, in my brain there's a control circuit monitoring some estimate of "accomplishments today". If that value isn't high enough, it starts sending an error signal - creating a feeling of anxiety - in an attempt to bring that value into the desired range.
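
As a minimal sketch of what such a circuit amounts to (illustrative Python with made-up names and numbers, not anything from Eby's article): a controller compares a perceived value against a reference setting and emits an error signal proportional to the gap.

```python
# Illustrative sketch only: a single control circuit compares a perceived
# value (e.g. "accomplishments today") against a reference setting and
# emits an error signal that pushes behavior toward closing the gap.

def control_step(perception, reference, gain=1.0):
    """Return the error signal of a simple proportional controller."""
    return gain * (reference - perception)

accomplishments_today = 0.0
desired_accomplishments = 3.0

anxiety_signal = control_step(accomplishments_today, desired_accomplishments)
print(anxiety_signal)  # 3.0 -- a large error, experienced as restlessness
```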

The problem with this is that more often than not, that anxiety doesn't push me into action. Instead I become paralyzed and incapable of getting anything started. Eby proposes that this is because of two things: one, the control circuits are dumb and don't actually realize what they're doing, so they may actually take counter-productive action. Two, there may be several control circuits in the brain which are actually opposed to each other.

Here we come to the part about productivity techniques often not working. We also have higher-level controllers - control circuits influencing other control circuits. Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do. When they notice that we've found a method to actually accomplish something we've been struggling with for a long time, they start sending an error signal... causing neural reorganization, eventually ending up at a stage where we don't use those productivity techniques anymore and solving the "crisis" of us actually accomplishing things. Moreover, these circuits are to a certain degree predictive, and they can start firing when they pick up on a behavior that merely might lead to success - that's when we hear about a great-sounding technique and for some reason never even try it. A higher-level circuit, or a lower-level one set up by the higher-level circuit, actively suppresses the "let's try that out" signals sent by the other circuits.
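
Purely as an illustration of that suppression idea (a toy sketch with invented numbers, not anything from Eby's article), one circuit's error signal can act on another circuit's settings rather than on the outside world:

```python
# Toy model: a higher-level circuit "controls for" keeping productivity low.
# Its output suppresses the gain of a lower-level "use the new technique"
# circuit whenever perceived productivity rises. All values are invented.

def proportional(perception, reference, gain=1.0):
    return gain * (reference - perception)

productivity = 0.0
technique_enthusiasm_gain = 1.0

for day in range(5):
    # Lower-level circuit: drives us to apply the productivity technique.
    drive = proportional(productivity, reference=5.0,
                         gain=technique_enthusiasm_gain)
    productivity += 0.3 * drive

    # Higher-level circuit: its reference is "stay unproductive" (0.0).
    # Rising productivity is an error it "corrects" by turning down
    # the lower circuit's gain.
    suppression = proportional(productivity, reference=0.0, gain=0.2)
    technique_enthusiasm_gain = max(0.0, technique_enthusiasm_gain + suppression)

    print(day, round(productivity, 2), round(technique_enthusiasm_gain, 2))

# Output: the technique raises productivity at first, then the gain is
# driven to zero and progress stalls well short of the goal of 5.0.
```
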
But why would we have such self-sabotaging circuits? This ties into Eby's more general theory of the hazards of some kinds of self-motivation. He uses the example of a predator who's chased a human up to a tree. The human, sitting on a tree branch, is in a safe position now, so circuits developed to protect his life send signals telling him to stay there and not to move until the danger is gone. Only if the predator actually starts climbing the tree does the danger become more urgent and the human is pushed to actively flee.

Eby then extends this example into a social environment. In a primitive, tribal culture, being seen as useless to the tribe could easily be a death sentence, so we evolved mechanisms to avoid giving the impression of being useless. A good way to avoid showing your incompetence is to simply not do the things you're incompetent at, or things which you suspect you might be incompetent at and that have a great associated cost for failure. If it's important for your image within the tribe that you do not fail at something, then you attempt to avoid doing that.

You might already be seeing where this is leading. The things many of us procrastinate on are exactly the kinds of things that are important to us. We're deathly afraid of the consequences of what might happen if we fail at them, so there are powerful forces in play trying to make us not work on them at all. Unfortunately, for beings living in modern society, this behavior is maladaptive and buggy. It leads to us having control circuits which try to keep us unproductive, and when they pick up on things that might make us more productive, they start suppressing our use of those techniques.

Furthermore, the control circuits are stupid. They are occasionally capable of being somewhat predictive, but they are fundamentally just doing some simple pattern-matching, oblivious to deeper subtleties. They may end up reacting to wholly wrong inputs. Consider the example of developing a phobia of a particular place, or a particular kind of environment. Something very bad happens to you in that place once, and as a result, a circuit is formed in your brain that's designed to keep you out of such situations in the future. Whenever it detects that you are in a place resembling the one where the incident happened, it starts sending error signals to get you away from there. But this is a very crude and suboptimal way of keeping you out of trouble - if a car hit you while you were crossing the road, you might develop a phobia of crossing the road. Needless to say, this is more trouble than it's worth.

Another common example might be a musician learning to play an instrument. Learning musicians are taught to practice their instrument in a variety of postures, for otherwise a flutist who's always played his flute sitting down may realize he can't play it while standing up! The reason is that while practicing, he's been setting up a number of control circuits designed to guide his muscles the right way. Those control circuits have no innate knowledge of which aspects of posture actually matter for a good performance, however. As a result, the flutist may end up with circuits that try to make sure he is sitting down when playing.

This kind of malcalibration extends to higher-level circuits as well. Eby writes:

I know this now, because in the last month or so, I’ve been struggling to identify my “top-level” master control circuits.

And you know what I found they were controlling for? Things like:

* Being “good”
* Doing the “right” thing
* “Fairness”

But don’t be fooled by how harmless or even “good” these phrases sound.

Because, when I broke them down to what subcontrollers they were actually driving, it turned out that “being good” meant “do things for others while ignoring your own needs and being resentful”!

“Fairness”, meanwhile, meant, “accumulate resentment and injustices in order to be able to justify being selfish later.”

And “doing the right thing” translated to, “don’t do anything unless you can come up with a logical justification for why it’s right, so you don’t get in trouble, and no-one can criticize you.”

Ouch!

Now, if you look at that list, nowhere on there is something like, “go after what I really want and make it happen”. Actually doing anything – in fact, even deciding to do anything! – was entirely conditional on being able to justify my decisions as “fair” or “right” or “good”, within some extremely twisted definitions of those words!

So that's the crux of the issue. We are wired with a multitude of circuits designed for controlling our behavior... but because those circuits are often stupid, they end up in conflict with each other, and end up monitoring values that don't actually represent the things they ought to.

While Eby provides few references and no peer-reviewed experimental work to support his case that motivation systems are controlled in this way, I find it to mesh very well with everything I know about the brain. I took the phobia example from a textbook on biological psychology, while the flutist example came from a lecture by a neuroscientist emphasizing the stupidity of the cerebellum's control systems. Building on systems that were originally developed to control motion and hacking them to also control higher behavior is a very evolution-like thing to do. We already develop control systems for muscle behavior starting from the time when we first learn to control our bodies as infants, so it's very plausible that we'd also develop such mechanisms for all kinds of higher cognition. The mechanism by which they work is also fundamentally very simple, making it easy for new circuits to form: a person ends up in an unpleasant situation, causing an emotional subsystem to flood the whole brain with negative feedback, leading the pattern recognizers that were active at the time to trigger the same kind of negative feedback the next time they pick up on the same input. (At its simplest, it's probably a case of simple Hebbian learning.)
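
At its simplest, that story might be sketched like this (a toy illustration only; the real mechanism is surely messier):

```python
# Toy illustration of the Hebbian story above (not a claim about the actual
# neural mechanism): the connection from a pattern-recognizer unit to an
# "alarm" unit is strengthened whenever the two fire together.

learning_rate = 0.5
weight = 0.0  # pattern -> alarm connection, initially silent

def hebbian_update(weight, pre, post, lr=learning_rate):
    """Basic Hebbian rule: delta_w = lr * pre * post."""
    return weight + lr * pre * post

# The bad incident: the "this kind of place" pattern fires (pre = 1) while
# the emotional system floods everything with alarm (post = 1).
weight = hebbian_update(weight, pre=1.0, post=1.0)

# Later: merely recognizing a similar place now drives the alarm by itself.
alarm = weight * 1.0
print(alarm)  # 0.5 -- the place alone now triggers some of the alarm
```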

Furthermore, since reading his text, I have noticed several things in myself which could only be described as control circuits. After reading Overcoming Bias and Less Wrong for a long time, I've found myself noticing whenever I have a train of thought that seems indicative of certain kinds of cognitive biases. In retrospect, that is probably a control circuit that has developed to detect the general appearance of a biased thought and to alert me about it. The anxiety circuit I already mentioned. A closely related circuit is one that causes me to need plenty of time to accomplish whatever it is that I'm doing - if I only have a couple of hours before a deadline, I often freeze up and end up unable to do anything. This leads to me being at my most productive in the mornings, when I have a feeling of having the whole day to myself and of not being in any rush. That's easily interpreted as a circuit that looks at the remaining time and sends an alarm when the time runs low. Actually, the circuit in question is probably even stupider than that, as the feeling of not having any time is often tied only to what the clock says, not to the time when I'll be going to bed. If I get up at 2 PM and go to bed at 4 AM, I have just as much time as if I'd gotten up at 9 AM and gone to bed at 11 PM, but the circuit in question doesn't recognize this.

So, what can we do about conflicting circuits? Simply recognizing them for what they are is already a big step forward, one which I feel has already helped me overcome some of their effects. Some of them can probably be dismantled simply by identifying them, working out their purpose, and deciding that the purpose is no longer needed. (I suspect that this process might actually set up new circuits whose function is to counteract the signals sent by the harmful ones. Maybe. I'm not very sure of what the actual neural mechanism might be.) Eby writes:

So, you want to build Desire and Awareness by tuning in to the right qualities to perceive. Then, you need to eliminate any conflicts that come up.

Now, a lot of times, you can do this by simple negotiation with yourself. Just sit and write down all your objections or issues about something, and then go through them one at a time, to figure out how you can either work around the problem, or find another way to get your other needs met.

Of course, you have to enter this process in good faith; if you judge yourself for say, wanting lots of chocolate, and decide that you shouldn’t want it, that’s not going to work.

But it might work, to be willing to give up chocolate for a while, in order to lose weight. The key is that you need to actually imagine what it would be like to give it up, and then find out whether you can be “okay” with that.

Now, sadly, about 97% of the people who read this are going to take that last paragraph and go, “yeah, sure, I’m going to give up [whatever]”, but without actually considering what it would be like to do so.

And those people are going to fail.

And I kind of debated whether or not I should even mention this method here, because frankly, I don’t trust most people’s controllers any further than I can reprogram them (so to speak).

See, I know from bitter experience that my own controllers for things like “being smart” used to make me rationalize this sort of thing, skipping the actual mental work involved in a technique, because “clearly I’m smart enough not to need to do all that.”

And so I’d assume that just “thinking” about it was enough, without really going through the mental experience needed to make it work. So, most of the people who read this, are going to take that paragraph above where I explained the deep, dark, master-level mindhacking secret, and find a way to ignore it.

They’re going to say things like, “Is that all?” “Oh, I already knew that.” And they’re not going to really sit down and consider all the things that might conflict with what they say they want.

If they want to be wealthy, for example, they’re almost certainly not going to sit down and consider whether they’ll lose their friends by doing so, or end up having strained family relations. They’re not considering whether they’re going to feel guilty for making a lot of money when other people in the world don’t have any, or for doing it easily when other people are working so hard.

They’re not going to consider whether being wealthy or fit or confident will make them like the people they hate, or whether maybe they’re really only afraid of being broke!

But all of them will read everything I’ve just written, and assume it doesn’t apply to them, or that they’ve already taken all that into account.

Only they haven’t.

Because if they had, they would have already changed.

That's a pretty powerful reminder not to ignore your controllers. While you've been reading this, some controller that tries to keep you from doing things has probably already picked up on the excitement some emotional system might now be generating... meaning that you might be about to stumble upon a technique that might actually make you more productive... causing signals to be sent out to suppress attempts to even try it out. Simply acknowledging its existence isn't going to be enough - you need to actively think things out, identify the different controllers within you, and dismantle them.

I feel I've managed to avoid the first pitfall, that of not doing anything even after becoming aware of the problem. I've been actively looking at different control circuits, some of which have plagued me for quite a long time, and I at least seem to have managed to overcome them. My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it. It feels like the best way to counteract that is to try to consciously set up new circuits dedicated to the task of monitoring for new circuits and alerting me to their presence. In other words, keep actively looking for anything that might be a mental control circuit, and teach myself to notice them.

(And now, Eby, please post any kind of comment here so that we can vote it up and give you your fair share of this post's karma. :))

159 comments

comment by SilasBarta · 2009-06-26T20:39:48.760Z · LW(p) · GW(p)

Let me clarify where I do and do not agree with PJ Eby, since we've been involved in some heated arguments that often seem to go nowhere.

I accept that the methods described here could work, and intend to try them myself.

I accept that all of the mechanisms involved in behavior can be restated in the form of a network of feedback loops (or a computer program, etc.).

I accept that Eby is acting as a perfect Bayesian when he says "Liar!" in response to those who claim they "gave it a try" and it didn't work. To the extent that he has a model, that is what it obligates him to believe, and Eliezer Yudkowsky has extensively argued that you should find yourself questioning the data when it conflicts with your model.

So what's the problem, then?

I do not accept that these predictions actually follow from, or were achieved through the insights of, viewing humans as feedback control systems. The explanations here for behavioral phenomena look like commonsense reasoning that is being shoehorned into controls terminology by clever relabeling. (ETA: Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?)

For that reason, I ran the standard check to see if a model is actually constraining expectations by asking pjeby what it rules out, and, more importantly, why PCT says you shouldn't observe such a phenomenon. And I still don't have an answer.

(This doesn't contradict my third point of agreement, because pjeby can believe something contradicts his model, even if, it turns out, the model he claims to believe in draws no such conclusion.)

Rather, based on this article, it looks like PCT is in the position of:

"Did PCT Method X solve your problem? Well, that's because it reset your control references to where they should be. Did it fail? Well, that's because PCT says that other (blankly solid, blackbox) circuts were, like, fighting it."

Replies from: pjeby, GuySrinivasan
comment by pjeby · 2009-06-26T21:29:36.039Z · LW(p) · GW(p)

Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?

You don't. As it says in the report I wrote, I've been teaching most of these things for years.

I ran the standard check to see if a model is actually constraining expectations by asking pjeby what it rules out, and, more importantly, why PCT says you shouldn't observe such a phenomenon. And I still don't have an answer.

And I'm still confused by what it is you expect to see, that I haven't already answered. AFAIK, PCT doesn't say that no behavior is ever generated except by control systems, it just says that control systems are an excellent model for describing how living systems generate behavior, and that we can make more accurate predictions about how a living system will behave if we know what variables it's controlling for.

Since the full PCT model is Turing complete, what is it exactly that you are asking be "ruled out"?

Personally, I'm more interested in the things PCT rules in -- that is, the things it predicts that other models don't, such as the different timescales for "giving up" and symptom substitution. I'm not aware of any other model where this falls out so cleanly as a side effect of the model.

"Did PCT Method X solve your problem? Well, that's because it reset your control references to where they should be. Did it fail? Well, that's because PCT says that other (blankly solid, blackbox) circuts were, like, fighting it."

It's no more black-box than Ainslie's picoeconomics. In fact, it's considerably less black box than picoeconomics, which doesn't do much to explain the internal structure of "interests" and "appetites". PCT, OTOH, provides a nice unboxing of those concepts into likely implementations.

Replies from: SilasBarta
comment by SilasBarta · 2009-06-27T20:52:06.909Z · LW(p) · GW(p)

Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?

You don't. As it says in the report I wrote, I've been teaching most of these things for years.

Then why does it get you there faster? If someone had long ago proposed to you that the body operates as a network of negative feedback controllers, would that make you more quickly reach the conclusion that "I should rationally think through the reasons I'm afraid of something", as opposed to, say, blindly reflecting on your own conscious experience?

PCT ... says that control systems are an excellent model for describing how living systems generate behavior, and that we can make more accurate predictions about how a living system will behave if we know what variables it's controlling for.

Yes, and that's quite a monster "if". So far, I haven't seen you identify -- in the rationalist sense -- a "variable being controlled". That requires you to be able to explain it in terms "so simple a computer could understand it". So far, no one can do that for any high-level behavior.

For example, judging sexiness of another person. To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format. Only then am I able to set up a model that shows an error signal which can drive behavior.

(Aside: note that the above can be rephrased as saying that you need to find the person's "invariants" of sexiness, i.e., the features that appear the same in the "sexiness" dimension, even despite arbitrary transformations applied to the sense data, like rotation, scaling, environment changes, etc. Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

But all of these tasks are just as difficult as the original problem!

Since the full PCT model is Turing complete, what is it exactly that you are asking be "ruled out"?

Now I'm running into "inferential distance frustration", as I'd have to explain the basics of technical explanation, what it means to really explain something, etc., i.e. all those posts by Eliezer Yudkowsky, starting back from his overcomingbias.com posts.

But suffice to say, yes, PCT is Turing complete. So is the C programming language, and so is a literal tape-and-table Turing machine. So, if you accept the Church-Turing Thesis, there must be an isomorphism between some feedback control model and the human body.

And between some C program and the human body.

And between some Turing machine and the human body.

Does this mean it is helpful to model the body as a Turing machine? No, no, no, a thousand times, NO! Because a "1" on the tape is going to map to some hideously complex set of features on a human body.

In other words, the (literal) Turing machine model of human behavior fails to simplify the process of predicting human behavior: it will explain the same things we knew before, but require a lot more complexity to do so, just like using a geocentric epicycle model.

In OB/LW jargon, it lengthens the message needed to describe the observed data.

I claim the same thing is true of PCT: it will do little more than restate already known things, but allow it to be rephrased, with more difficulty, using controls terminology.

Personally, I'm more interested in the things PCT rules in -- that is, the things it predicts that other models don't, ... I'm not aware of any other model where this falls out so cleanly as a side effect of the model.

Okay, good. Those are things that rationalists should look for. But people have a tendency to claim their theory predicted something after-the-fact, when an objective reading of it would say that the theory predicted no such thing. So I need something that can help me distinguish between:

a) "PCT predicts X, while other models do not."

vs.

b) "PJ Eby's positive affect toward PCT causes him to believe it implies we should observe X, while other models do not."

A great way to settle the matter would be an objective specification of how exactly PCT generates predictions. But so far, it seems that to learn what PCT predicts, you have to pass the data up through the filter of someone who already likes PCT, and thus can freely claim the model says what they want it to say, with no one able to objectively demonstrate, "No, PCT says that shouldn't happen."

Replies from: pjeby
comment by pjeby · 2009-06-27T22:57:52.471Z · LW(p) · GW(p)

If someone had long ago proposed to you that the body operates as a network of negative feedback controllers, would that make you more quickly reach the conclusion that "I should rationally think through the reasons I'm afraid of something", as opposed to, say, blindly reflecting on your own conscious experience?

Of course not; the paths between Theory and Practice are not symmetrical. In the context of my work, the usefulness of a theory like this is that it provides me with a metaphorical framework to connect practical knowledge to. Instead of teaching all the dozens of principles, ideas, aphorisms, etc. that I have as isolated units, being able to link each one to a central metaphor of controllers, levels, etc. makes communication and motivation easier.

To be perfectly fair, PCT would be useful for this purpose even if it were not a true or semi-true theory. However, all else being equal, I'd rather have something true that fits my practical observations, and PCT fits more of my practical observations than anything else. And I believe it is, in fact, true.

I'm merely stating the above so as to make it clear that if I thought it were only a metaphor, I would have no problem with saying, "it's just a metaphor that aids education and motivation in applying certain practical observations by giving them a common conceptual framework."

For example, judging sexiness of another person. To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format. Only then am I able to set up a model that shows an error signal which can drive behavior.

I still don't see your point about this. Any model has to do the same thing, doesn't it? So how is this a flaw of PCT?

And on a practical level, I just finished a webinar where we spent time breaking down people's references for things like "Preparedness" and "Being a good father" and showing how to establish controllable perceptions for these things that could be honored in more than the breach. (For example, if your perceptual definition of "being a good father" is based solely on the actions of your kids rather than your actions, then you are in for some pain!)

IOW, I don't actually see a lot of problem with reference breakdowns and even reference design, at the high-level applications for which I'm using the theory. Would I have a hard time defining "length" or "color" in terms of its referents? Sure, but I don't really care. Powers does a good job of explaining what's currently known about such invariants, and pointing to what research still needs to be done.

Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

Did you ever look at any of the Numenta HTM software demos? AFAIK, they actually have some software that can learn the idea of "airplane" from noisy, extremely low-res pictures of them flying by. That is, HTMs can learn invariants from combinations of features. I'm not sure if they have any 3D rotation stuff, but the HTM model appears to explain how it could be done.

I claim the same thing is true of PCT: it will do little more than restate already known things, but allow it to be rephrased, with more difficulty, using controls terminology.

And I've already pointed out why this claim is false, since the controller hierarchy/time-scale correlation has already been a big help to me in my work; it was not something that was predicted by any other model of human behavior.

But so far, it seems that to learn what PCT predicts, you have to pass the data up through the filter of someone who already likes PCT, and thus can freely claim the model says what they want it to say, with no one able to objectively demonstrate, "No, PCT says that shouldn't happen."

Or, you could just go RTFM, instead of asking people to summarize 300 pages in a comment for you... Or you could just wait until someone you trust gives you a summary. But if you don't trust anyone who holds a positive attitude about PCT, why do you insist on asking more questions? As I said, if you want all the detailed evidence and models, you're eventually going to be asking me for virtually every chapter in B:CP.

What I'm teaching to my group is only a watered-down version of the highest levels, specifically as a framework to clarify, connect, and enhance things I've already been teaching. So my writings on it are really not the place to be looking for the math and the science.

Replies from: SilasBarta
comment by SilasBarta · 2009-06-28T18:26:30.630Z · LW(p) · GW(p)

Not surprisingly, the Hawkins HTM model you love so much also appeals to "invariants" for their explanatory power, and yet leaves the whole concept as an unhelpful black box! At least, that's what I got from reading On Intelligence.)

Did you ever look at any of the Numenta HTM software demos? AFAIK, they actually have some software that can learn the idea of "airplane" from noisy, extremely low-res pictures of them flying by.

Actually, yes, I have downloaded their demos, expecting to be wowed, but then fell over laughing. Specifically, this one. It claims to be able to learn to recognize simple black/white 16x16 pixel images using HTM and saccading the images around. But then I gave it a spin, had it learn the images, and then tested it by drawing one of the figures with a very, very slight rotation, which completely screwed up its ability to identify it.

Not impressive.

I claim the same thing is true of PCT: it will do little more than restate already known things, but allow it to be rephrased, with more difficulty, using controls terminology.

And I've already pointed out why this claim is false, since the controller hierarchy/time-scale correlation has already been a big help to me in my work; it was not something that was predicted by any other model of human behavior.

No, what you have shown is that you learned of PCT and HTM, and then you believe you improved in your work. As per my two previous comments in this thread, I can (edited phrasing) accept both of those claims and still doubt the more interesting claims, specifically, that the model actually did help, rather than you merely thinking it did because you could rephrase your intuitive, commonsense reasoning in the model's terminology. I could also doubt that your ability to help people improved.

Or, you could just go RTFM, instead of asking people to summarize 300 pages in a comment for you... As I said, if you want all the detailed evidence and models, you're eventually going to be asking me for virtually every chapter in B:CP.

Please pay attention. I am R-ingTFM, and I even complained that one of the Powers demos understated the strength of their point about feedback control. I already told you I'm going to try your advice. I've read several of the pdfs you've linked, including TheSelfHelpMyth.pdf linked here, and will read several more, and probably even buy Behavior. (Though I couldn't get the freebie you mentioned to work because the website crapped out after I entered my info). I am making every effort to consider this model.

But it is simply not acceptable of you to act like the only alternatives are to repeat hundreds of pages, or speak in dumbed-down blackbox terminology. You can e.g. summarize the chain of useful, critical insights that get me from "it's a network of feedback controllers" to a useful model, so I know which part I'd be skeptical of and which parts assume the solution of problems I know to be unsolved, so I know where to direct my attention.

Replies from: pjeby
comment by pjeby · 2009-06-28T21:55:00.992Z · LW(p) · GW(p)

drawing one of the figures with a very, very slight rotation, which completely screwed up its ability to identify it.

I'm not clear on whether you took this bit from their docs into account:

The system was NOT trained on upside down images, or rotations and skews beyond a simple right-to-left flip. In addition, the system was not trained on any curved lines, only straight line objects.

That is, I'm not clear whether the steps you're describing include training on rotations or not.

rather than you merely thinking it did because you could rephrase your intuitive, commonsense reasoning in the model's terminology

No, I gave you one specific prediction that PCT makes: higher-level controllers operate over longer time scales than low-level ones. This prediction is not a part of any other model I know of. Do you know of another model that makes this prediction? I only know of models that basically say that symptom substitution takes time, with no explanation of how it occurs.

This doesn't have anything to do with whether I believe that prediction to be useful; the prediction is still there, the observation that people do it is still there, and the lack of explanation of that fact is still there, even if you remove me from the picture entirely.

You can e.g. summarize the chain of useful, critical insights that get me from "it's a network of feedback controllers" to a useful model, so I know which part I'd be skeptical of and which parts assume the solution of problems I know to be unsolved, so I know where to direct my attention.

I can only do that if I understand specifically what it is you don't get -- and I still don't.

For example, I don't see why the existence of unsolved problems is a problem, or even remotely relevant, if all the other models we have have to make the same assumption.

From my POV, you are ignoring the things that make PCT useful: namely that it actually predicts as normal, things that other current behavioral models have to treat as special cases or try to handwave out of existence. It's not that PCT is "simpler" than stimulus-response or "action steps" models, it's that it's the simplest model that improves on our ability to make correct predictions about behavior.

Your argument seems to be, "but PCT requires us to gather more information in order to make those predictions". And my answer to that is, "So what? Once you have that information, you can make way better predictions." And it's not that you could just feed the same information into some other model and get similar predictions - the other models don't even tell you what experiments to perform to get yes-or-no answers.

To put it another way, to the extent that PCT requires you to be more precise or gather more information, it is doing so because that degree of actual uncertainty or lack of knowledge exists... and current experimental models disguise that lack of understanding behind statistics.

In contrast, to do a PCT experiment, you need to have a more-specific, falsifiable hypothesis: is the animal or person controlling quantity X or not? You may have to do more experiments in order to identify the correct "X", but you will actually know something real, rather than, "47% of rats appear to do Y in the presence of Z".

Replies from: SilasBarta
comment by SilasBarta · 2009-06-29T18:20:27.344Z · LW(p) · GW(p)

That is, I'm not clear whether the steps you're describing include training on rotations or not.

But that's a pretty basic transformation, and if they could handle it, they would have done so. In any case, the rotation was very slight, and was only one of many tests I gave it. It didn't merely assign a slightly lower probability to the correct answer, it fell off the list entirely.

Considering how tiny the pictures are, this is not encouraging.

Your argument seems to be, "but PCT requires us to gather more information in order to make those predictions". And my answer to that is, "So what? Once you have that information, you can make way better predictions."

No, you misunderstand: my complaint is that PCT requires us to solve problems of equal or greater difficulty than the initial problem being solved. To better explain what I mean, I gave you the example with the literal tape-and-table Turing machine. Watch what happens when I make your same point, but in advocacy of the "Turing machine model of human behavior".

"I've discovered a great insight that helps unify my research and better assist people with their problems. It's to view them as a long, sectioned tape with a reader and state record, which [explanation of Turing machine]. This model is so useful because all I have to do is find out whether people have 1's rather than 0's in places 5000-5500 on their tape, and if they do, I just have to change state 4000 to erase rather than merely move state! This helps explain why people have trouble in their lives, because they don't erase bad memories."

See the problems with my version?

1) Any model of a human as a Turing machine would be way more complex than the phenomenon I'm trying to explain, so the insight it gives is imaginary.

2) Even given a working model, the mapping from any part of the TM model to the human is hideously complex.

3) "Finding someone's 1's and 0's" is near impossible because of the complexity of the mapping.

4) The analogy between erasing memories and erasure operations is only superficial, and not indicative of the model's strength.

5) Because I obviously could not have a TM model of humans, I'm not actually getting my insight from the model, but from somewhere else.

And points 1-5 are exactly what I claim is going on with you and PCT.

Nevertheless, I will confess I've gotten more interested in PCT, and it definitely looks scientific for the low level systems. I've read the first two Byte magazine articles and reproduced the model in Matlab's Simulink, and I'm now reading the third, where it introduces hierarchies.

My main dispute is with your insistence that you can already usefully apply real predictions from PCT at higher-level systems, where the parallels with feedback control systems appear very superficial and the conclusions seem to be reached with commonsense reasoning unaided by PCT.

Btw: my apologies, but somehow I accidentally deleted a part of my last reply before posting it, and my remark now resides only in my memory. It's related to the same point I just made. I'll put it here so you don't need to reply a second time to that post:

To phrase it in terms of a feedback controller, I have to identify -- again, in the rationalist sense, not just a pleasant sounding label -- the reference being controlled. So, that means I have to specify all of the relevant things that affect my attraction level. Then, I have to find how the sensory data is transformed into a comparable format ... Only then am I able to set up a model that shows an error signal which can drive behavior.

I still don't see your point about this. Any model has to do the same thing, doesn't it? So how is this a flaw of PCT?

No, a model doesn't need to do the same thing. A purely neuronal model would not need to have the concept of "sexiness" and a comparator for it. Remember, the whole framing of a situation as a "romantic relationship" is just that: a framing that we have imposed on it to make sense of the world. It does not exist at lower levels, and so models need not be able to identify such complex "invariants".

Replies from: pjeby
comment by pjeby · 2009-06-29T19:00:24.539Z · LW(p) · GW(p)

I'm sorry, but I'm still utterly baffled by your comments, since your proposed "purely neuronal" model is more analogous to the Turing machine.

It sounds a bit like the part you're missing is the PCT experimental design philosophy, aka the Test -- a way of formulating and testing control hypotheses at arbitrary levels of the hierarchy. To test "sexiness" or some other high-level value, it is not necessary to completely specify all its lower-level components, unless of course the goal of your experiment is to identify those components.

We don't need, for example, to break down how object invariance happens to be able to do an experiment where a rat presses a bar! We assume the rat can identify the bar and determine whether it is currently pressed. The interesting part is what other things (like food, mate availability, shock-avoidance, whatever) that you can get the rat to control by pressing a bar. (At least, at higher levels.)
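
Roughly, the Test can be sketched like this (a toy illustration with invented parameters, not Powers' actual protocol): apply a slowly drifting disturbance to a candidate variable and check whether the system's actions keep cancelling it.

```python
import random

# Sketch of "the Test": if the candidate variable barely budges despite the
# disturbances, it is probably being controlled. All parameters are invented.

def run_test(is_controlled, steps=500, gain=0.5, seed=0):
    rng = random.Random(seed)
    disturbance, action = 0.0, 0.0
    deviations = []
    for _ in range(steps):
        disturbance += rng.uniform(-0.1, 0.1)  # slow drift in the environment
        variable = action + disturbance        # the perceived quantity
        deviations.append(abs(variable))
        if is_controlled:
            action -= gain * variable          # act against the error (reference = 0)
    return sum(deviations) / steps

print("controlled:  ", round(run_test(True), 3))
print("uncontrolled:", round(run_test(False), 3))
# The controlled case shows a much smaller average deviation from the
# reference -- the signature the Test looks for.
```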

Replies from: SilasBarta
comment by SilasBarta · 2009-06-29T22:22:18.710Z · LW(p) · GW(p)

I'm sorry, but I'm still utterly baffled by your comments, since your proposed "purely neuronal" model is more analogous to the Turing machine.

So? I agree that the "purely neuronal" model would be really complex (though not as complex as the Turing machine would be). I just brought it up in order to show how a model doesn't "need to have a sexiness comparator anyway", so you do have to justify the simplicity gained when you posit that there is one.

It sounds a bit like the part you're missing is the PCT experimental design philosophy, aka the Test -- a way of formulating and testing control hypotheses at arbitrary levels of the hierarchy. To test "sexiness" or some other high-level value, it is not necessary to completely specify all its lower-level components, unless of course the goal of your experiment is to identify those components.

But if you don't specify all of the lower level components, then your controls explanation is just a restating of the problem, not a simplifying of it. The insight you claim you are getting from it is actually from your commonsense reasoning. Indeed, virtually every insight you "explain" by PCT, you got some other way.

We don't need, for example, to break down how object invariance happens to be able to do an experiment where a rat presses a bar!

Sure, but that's because you don't need to account for the rat's ability to identify the bar in a wide variety of contexts and transformations, which is the entire point of looking for invariants.

Replies from: pjeby
comment by pjeby · 2009-06-29T22:44:29.010Z · LW(p) · GW(p)

But if you don't specify all of the lower level components, then your controls explanation is just a restating of the problem, not a simplifying of it. The insight you claim you are getting from it is actually from your commonsense reasoning.

Kindly explain what "commonsense reasoning" explains the "symptom substitution" phenomenon in hypnosis, and in particular, explains why the duration of effect varies, using any model but PCT.

Replies from: SilasBarta
comment by SilasBarta · 2009-06-30T14:28:48.294Z · LW(p) · GW(p)

While I can look up "symptom substitution", I'll need to know more specifically what you mean by this. But I'd first have to be convinced that PCT explains it in a way that doesn't smuggle in your commonsense reasoning.

Now, if you want examples of how commonsense reasoning leads to the same conclusions that are provided as examples of the success of PCT, that I already have by the boatload. This whole top-level post is an example of using commonsense reasoning but attributing it to PCT. For example, long before I was aware of the concept of a control system, or even feedback (as such) I handled my fears (as does virtually everyone else) by thinking through what exactly it is about the feared thing that worries me.

Furthermore, it is obvious to most people that if you believe obstacles X, Y, and Z are keeping you from pursuing goal G, you should think up ways to overcome X, Y, and Z, and yet Kaj here presents that as something derived from PCT.

Replies from: pjeby
comment by pjeby · 2009-06-30T16:33:15.363Z · LW(p) · GW(p)

While I can look up "symptom substitution", I'll need to know more specifically what you mean by this.

Specifically, find a "commonsense" explanation that explains why symptom substitution takes time to occur, without reference to PCT's notion of a perception averaged over time.

Replies from: CronoDAS
comment by CronoDAS · 2009-06-30T17:24:47.173Z · LW(p) · GW(p)

Googling "symptom substitution" lead me to a journal article that argued that people have tried and failed to find evidence that it happens...

Replies from: pjeby
comment by pjeby · 2009-06-30T21:18:53.938Z · LW(p) · GW(p)

Googling "symptom substitution" lead me to a journal article that argued that people have tried and failed to find evidence that it happens...

That's Freudian symptom substitution, and in any case, the article is splitting hairs: it says that if you stop a child sucking its thumb, and it finds some other way to get its needs met, then that doesn't count as "symptom substitution". (IOW, the authors of the paper more or less defined it into nonexistence, such that if it exists and makes sense, it's not symptom substitution!)

Also, the paper raises the same objection to the Freudian model of symptom substitution that I do: namely, that there is no explanation of the time frame factor.

In contrast, PCT unifies the cases both ruled-in and ruled out by this paper, and offers a better explanation for the varying time frame issue, in that the time frame is governed by the perceptual decay of the controlled variable.
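
As a toy illustration of that last point (invented parameters, not taken from the PCT literature): if a controller's perception is a leaky time-average of the underlying signal, the averaging time constant sets how long a change takes to register, and hence the timescale on which that controller reacts or "gives up".

```python
# Toy model: a leaky average of a signal that jumped from 0 to 1.
# The time constant of the averaging determines how many steps pass
# before the controller "notices" the change.

def steps_to_notice(time_constant, threshold=0.5, dt=1.0, max_steps=1000):
    """Steps until the leaky-averaged perception crosses the threshold."""
    perception = 0.0
    for step in range(1, max_steps + 1):
        perception += (dt / time_constant) * (1.0 - perception)
        if perception >= threshold:
            return step
    return None

for tau in (2.0, 10.0, 50.0):  # fast, medium, slow perceptual averaging
    print(tau, steps_to_notice(tau))
# Prints roughly 1, 7 and 35 steps: the slower-averaging (higher-level)
# perception responds on a proportionally longer timescale.
```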

comment by SarahNibs (GuySrinivasan) · 2009-06-26T21:52:44.002Z · LW(p) · GW(p)

I purchased Behavior: The Control of Perception and am reading it. Unless someone else does so first, I plan to write a review of it for LW. A key point is that at least part of PCT is actually right. The lowest level controllers, such as those controlling tendon tension, are verifiably there. So far as I can see so far, real physical structures corresponding pretty closely to second and third level controllers also exist and have been pointed to by science. I haven't gotten further than this yet, but teasers within the book indicate that (when the book was written of course) there is good evidence that some fifth level control systems exist in particular places in the brain, and thus fourth level somewhere. Whether it's control systems (or something closish to them) all the way up? Dunno. But the first couple levels, not yet into the realm of traditional psychology or whatnot, those definitely exist in humans. And animals of course. The description of the scattershot electrodes in hundreds of cats experiment was pretty interesting. :)

That said, you're absolutely right, there should be some definite empirical implications of the theory. For example due to the varying length of the paths at various supposed levels, it should be possible to devise an experiment around a simple tracking task with carefully selected disturbances which will have one predicted result under PCT and another under some other model. Also, predicting specific discretization of tasks that look continuous should be possible... I have not spent a lot of time thinking about how to devise a test like this yet, unfortunately.

Replies from: Eliezer_Yudkowsky, SoullessAutomaton
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-27T17:15:36.994Z · LW(p) · GW(p)

Please add PCT to the wiki as Jargon and link there when this term, whatever it means, is used for the first time in a thread. It is not in the first 10 Google hits.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-06-27T17:52:48.877Z · LW(p) · GW(p)

It seems jimrandomh has taken the time to do so; the Wikipedia article should be helpful.

In defense of people using the acronym without definition, though, it seemed fairly obvious if you look at the wikipedia disambig page for the acronym in question.

comment by SoullessAutomaton · 2009-06-26T22:31:50.352Z · LW(p) · GW(p)

Whether it's control systems (or something closish to them) all the way up? Dunno. But the first couple levels, not yet into the realm of traditional psychology or whatnot, those definitely exist in humans.

As a general-purpose prior assumption for systems designed by evolutionary processes, reusing or adapting existing systems is far more likely than spontaneous creation of new systems.

Thus, if it can be demonstrated that a model accurately represents low-level hierarchical systems, this is reasonably good evidence in favor of that model applying all the way to the top levels as opposed to other models with similar explanatory power for said upper levels.

comment by pjeby · 2009-06-26T19:06:04.820Z · LW(p) · GW(p)

My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it.

You don't need to be quite that paranoid. PCT's model of "reorganization" presumes that it is associated with "intrinsic error" -- something we generally perceive as pain, stress, fear, or "pressure".

So if you are experiencing a conflict between controllers that will result in rewiring you, you should be able to notice it as a gradually increasing sense of pressure or stress, at which point you can become aware of the need to resolve a conflict in your goals.

Remember: your controllers are not alien monsters taking you over; they are you, and reflect variables that at some point, you considered important to control for. They may have been set up erroneously or now be obsolete, but they are still yours, and to let go of them therefore requires actual reflection on whether you still need them, whether there is another way to control the variable, whether the variable should be slightly redefined, etc.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-06-27T23:28:41.940Z · LW(p) · GW(p)

Ah, yeah. After thinking it through for a while, I realized you were right. At its bottom, it's a question of me (or whoever is suffering from the problem) not really wanting to change and also not wanting to acknowledge this. Not a malevolent demon invisibly rewriting reality whenever things don't go the way it likes.

Replies from: pjeby
comment by pjeby · 2009-06-28T00:02:29.638Z · LW(p) · GW(p)

At its bottom, it's a question of me (or whoever is suffering from the problem) not really wanting to change and also not wanting to acknowledge this.

It's not even that; it's just that unless you make connections between what you want and how to get it, you're just going to end up with whatever slop of a controller structure you previously managed to throw together well enough to just barely work... for your circumstances at the time.

And to get an improved control structure, you have to be willing to look at what's already there, not just throw in a bunch of optimistic new stuff and expect it to work. Most likely, things are the way they are for good reasons, and if your changes don't take those reasons into account, you will end up with conflicts and relapse.

Of course, as long as you take the relapse as feedback that something else in the control structure needs modification, you'll be fine. It's the interpretation of a relapse as meaning that you lack sufficient willpower or something like that, that creates problems.

comment by Jonathan_Graehl · 2009-06-27T09:56:35.300Z · LW(p) · GW(p)

I have an alternative theory for why some self-help methods that at first seem to work, eventually don't.

You were excited. You wanted to believe. You took joy in every confirmation. But either you couldn't maintain the effort, or the method became routine, and it seems you have rather less than you first thought.

The next revelation will change EVERYTHING.

comment by jimrandomh · 2009-06-26T21:08:54.076Z · LW(p) · GW(p)

PCT is the first thing I've encountered that seems like it can make real headway in understanding the brain. Many thanks to PJ, Kaj and the others who've written about it here.

I notice that all of the writings about controllers I've seen so far assume that the only operations controllers can perform on each other are to set a target, push up and push down. However, there are two more natural operations with important consequences: damping and injecting noise. Conversely, a controller need not measure only the current value of other controllers, but can also measure their rate of change in the short term and their domain and instability in the long term.

Stress seems like it might be represented by the global rate of change and oscillation in a large group of controllers. That would explain why conflicts between controllers induce stress, and why reorganizations that eliminate the conflict can reduce it. Focus meditation is probably best explained as globally damping a large set of normally-oscillating controllers at once, which would explain why it's calming.

Injecting noise into controllers allows them to find new equilibria, where they'll settle when the noise goes away. This seems like a likely purpose for REM sleep. The very-high activity levels recorded during REM using EEG and similar methods suggest that's exactly what it's doing. This would predict that getting more REM sleep would decrease stress, as the new equilibria would have fewer conflicts, and that is indeed the case.

If fMRI studies can confirm that the brain activity it measures corresponds to oscillating controllers, then combined with meditations that dampen and excite particular regions, this could be a powerful crowbar for exposing more of the mind.
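
As a speculative sketch of the noise-injection idea (a toy model for illustration, not something from the PCT literature): two controllers that share one output hold conflicting references; random jitter on the references, kept only when it reduces total error, lets the pair drift toward a lower-conflict configuration that persists once the noise stops.

```python
import random

# Toy model: two controllers share one output but hold conflicting
# references. Noise plus "keep the change if total error drops" acts as a
# crude reorganization process. All numbers are invented.

def total_error(refs):
    # With a shared output the best compromise is the midpoint; whatever
    # distance remains is the irreducible conflict between the controllers.
    midpoint = sum(refs) / len(refs)
    return sum(abs(r - midpoint) for r in refs)

rng = random.Random(1)
references = [2.0, -2.0]              # deeply conflicting goals
error = total_error(references)

for _ in range(2000):                 # the noisy "exploration" phase
    candidate = [r + rng.gauss(0.0, 0.1) for r in references]
    if total_error(candidate) < error:
        references, error = candidate, total_error(candidate)

print([round(r, 2) for r in references], round(error, 3))
# The references converge, so when the noise is removed the controllers
# settle into a configuration with almost no remaining conflict.
```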

comment by cousin_it · 2009-06-29T09:56:54.628Z · LW(p) · GW(p)

So Vassar was right, we have reached a crisis. A self-help sales pitch with allegations of first-percentile utility right here on LW. This gets my downvote on good old Popperian grounds.

You say this stuff helps with akrasia? However hot your enthusiasm burns, you don't get to skip the "controlled study" part. Come back with citations. At this point you haven't even ruled out the placebo effect, for Bayes' sake!

Replies from: Kaj_Sotala, jimrandomh, Nick_Tarleton, thomblake
comment by Kaj_Sotala · 2009-06-29T16:15:00.206Z · LW(p) · GW(p)

However hot your enthusiasm burns, you don't get to skip the "controlled study" part.

While I agree with some of what you're saying, it isn't like "cached thoughts" or many of Eliezer's other classics come with references to controlled studies, either. Like Robin Hanson pointed out in response to my own critique of evpsych:

claims can be "tested" via almost any connection they make with other claims that connect etc. to things we see. This is what intellectual exploration looks like.

No, Eby's article didn't have direct references to empirical work establishing the connection between PCT and akrasia, but it did build on enough existing work about PCT to make the connection plausible and easy to believe. If this were a peer-reviewed academic journal, that wouldn't be enough, and it'd have to be backed with experimental work. But I see no reason to require LW posts to adhere to the same standard as an academic journal - this is also a place to simply toss out interesting and plausible-seeming ideas, so that they can be discussed and examined and somebody can develop them further, up to the point of gathering that experimental evidence.

comment by jimrandomh · 2009-06-29T12:15:20.510Z · LW(p) · GW(p)

You say this stuff helps with akrasia? However hot your enthusiasm burns, you don't get to skip the "controlled study" part. Come back with citations. At this point you haven't even ruled out the placebo effect, for Bayes' sake!

The term "placebo effect" was coined to refer to phsychological effects intruding on non-psychological studies. In this case, since the desired effect is purely psychological, it's meaningless at best and misleading at worst. There is no self-help advice equivalent to a sugar pill. The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical.

So, if you have an experimental procedure, go ahead and suggest it. Absent that, the only available data comes from self-experimentation and anecdotes.

Replies from: cousin_it, wedrifid, thomblake, Vladimir_Nesov
comment by cousin_it · 2009-06-29T12:53:30.412Z · LW(p) · GW(p)

What if you're wrong? What if the most effective anti-procrastination technique is tickling your left foot in exactly the right manner, and this works regardless of whether you believe in its efficacy, or even know about it? That (predicated on a correct theory of human motivation) is the kind of stuff we're looking for.

There is no self-help advice equivalent to a sugar pill. The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical.

You're saying that there's no neutral (non-positive and non-negative) self-help advice? That's a pretty weird statement to make. Some advice is good, some is bad; why do you suspect a gap at zero? Failing all else, you could refrain from telling the subjects that the study is about self-control and anti-procrastination, just tell them to blindly follow some instructions and measure the effects covertly.

No, I have no experimental protocol ready yet, but have the impudence to insist that we as a community should create one or shut up.

Replies from: Nick_Tarleton, wedrifid
comment by Nick_Tarleton · 2009-06-29T16:47:36.442Z · LW(p) · GW(p)

That (predicated on a correct theory of human motivation) is the kind of stuff we're looking for.

You don't know what "we" are looking for. There is no one thing "we" are looking for. Some of us may be interested in plausible, attested-to self-help methods, even without experimental support.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-29T17:01:53.562Z · LW(p) · GW(p)

Some of us may be interested in plausible, attested-to self-help methods, even without experimental support.

Without experimental support is fine. But without extraordinary support isn't. Something must make the plausibility of a particular thing stand out, because you can't be interested in all 1000 equally plausible things unless you devote all your time to that.

comment by wedrifid · 2009-07-04T23:04:24.969Z · LW(p) · GW(p)

No, I have no experimental protocol ready yet, but have the impudence to insist that we as a community should create one or shut up.

I certainly agree with the 'create one' part of what you're saying. Not so much the 'shut up'. Talking about the topic (and in so doing dragging all sorts of relevant knowledge from the community) and also self-experimenting have their uses, particularly in as much as they can tell us whether something is worth testing.

I do note that there are an awful lot of posts here (and on Overcoming Bias) which do not actually have controlled studies backing them. Is there a reason why Kaj's post requires a different standard to be acceptable? (And I ask that non-rhetorically; I can see reasons why you may reasonably do just that.)

comment by wedrifid · 2009-07-04T22:50:42.681Z · LW(p) · GW(p)

The closest thing to a sugar pill available is known-bad advice, and giving known-bad advice to a control group strikes me as decidedly unethical.

It would seem ethically acceptable to give groups advice selected from common social norms. For example, give one group some "Getting Things Done", another group nothing at all, a third some instruction on calculus (irrelevant but still high-status attention and education), a fourth a drill-sergeant motivational yelling-at, and the fifth group PJEby's system.

comment by thomblake · 2009-06-29T13:08:00.705Z · LW(p) · GW(p)

The closest thing to a sugar pill available is known-bad advice,

  1. One example of a control group in a psychological study (can't find reference): researchers compared Freudian psychoanalysis to merely sitting there and listening.

  2. sugar has physiological effects, so you can't really assume a sugar pill is neutral with no side-effects

Replies from: wedrifid
comment by wedrifid · 2009-07-04T22:53:51.080Z · LW(p) · GW(p)

sugar has physiological effects, so you can't really assume a sugar pill is neutral with no side-effects

And when you are testing the psychological effects of urea-based salts, you can't really assume lithium salts are neutral with no side-effects.

comment by Vladimir_Nesov · 2009-06-29T14:02:46.260Z · LW(p) · GW(p)

Is it how the real studies view the situation?

comment by Nick_Tarleton · 2009-06-29T17:05:20.318Z · LW(p) · GW(p)

However hot your enthusiasm burns, you don't get to skip the "controlled study" part.

To what end do you not get to skip it? Others may legitimately have lower standards for something being interesting, as Kaj said, or for a technique being worth a try.

Honestly, it sounds more like you're trying to take down Kaj for getting uppity and violating the norms of Science, than like you're trying to contribute to finding truth or usefulness.

comment by thomblake · 2009-06-29T17:32:52.652Z · LW(p) · GW(p)

This gets my downvote on good old Popperian grounds. ... you don't get to skip the "controlled study" part. Come back with citations.

I'm afraid you have Popper all turned around. According to Popper, one should make claims that are testable, and then it's the job of (usually other) scientists to perform experiments to try to tear them apart.

If you're a Popperian and you disagree, go ahead and perform the experiment. If your position is that the relevant claim isn't testable, that's a different complaint entirely.

Replies from: Annoyance, cousin_it
comment by Annoyance · 2009-06-29T17:38:05.033Z · LW(p) · GW(p)

You're supposed to try to tear apart your own claims, first. Making random but testable assertions for no particular reason is not part of the methodology.

comment by cousin_it · 2009-06-29T19:43:36.883Z · LW(p) · GW(p)

Yes, I'm a Popperian. Yes, people should make testable claims and other people should test them. That's how everything is supposed to work. All right so far.

As to the nature of my complaint... Here's a non-trivial question: how do we rigorously test Kaj and Eby's assertions about akrasia? I took Vassar's words very seriously and have been trying to think up an experiment that would (at least) properly control for the belief effect, but came up empty so far. If I manage to solve this problem, I'll make a toplevel post about that.

Replies from: wedrifid
comment by wedrifid · 2009-07-04T23:14:17.535Z · LW(p) · GW(p)

Why is it so difficult? Even a head-to-head test between PJ's magic and an arbitrarily selected alternative would provide valuable information. Given the claims made for, as you pointed out, first-percentile utility, it seems that just a couple of tests against arbitrary alternatives should be expected to show drastic differences and at least tell us whether it is worth thinking harder.

comment by jajvirta · 2009-07-03T04:22:20.250Z · LW(p) · GW(p)

One particular feature of the mind that PCT explains neatly is the mind's tendency to reject attempts to will oneself to do an unpleasant action. In fact it is often the case that the harder you try, the harder the mind resists. Aaron Swartz calls this the mental force field, and that's just how it often feels.

What eventually resolves the conflict is not that you are finally able to will yourself to do the action, but usually some sort of context or reference-point switch. At a day job, this is typically some kind of realization that you really need to do the job or you'll get in trouble. In fact, most anti-procrastination tricks are basically instances of some sort of context switch.

comment by timtyler · 2009-06-26T18:31:26.385Z · LW(p) · GW(p)

This article is quite long. As general feedback, I won't usually bother reading long articles unless they summarise their content up front with an abstract, or something similar. This post starts with more of a teaser. A synopsis at the end would be good as well: tell me three times.

Replies from: pjeby, Cyan
comment by pjeby · 2009-06-26T19:18:58.924Z · LW(p) · GW(p)

FWIW, the original article on Kaj's blog is formatted in a way that makes it much easier to read/skim than here.

comment by Cyan · 2009-06-26T18:40:29.860Z · LW(p) · GW(p)

I don't mind the length; I second the "tell me three times".

Replies from: thoughtfulape
comment by thoughtfulape · 2009-06-28T05:16:17.326Z · LW(p) · GW(p)

An observation: PJeby, if you really have a self-help product that does what it says on the tin for anyone who gives it a fair try, I would argue that the most efficient way of establishing credibility among the Less Wrong community would be to convince a highly regarded poster of that fact. To that end I would suggest that offering your product to Eliezer Yudkowsky for free, or even paying him to try it in the form of a donation to his Singularity Institute, would be more effective than the back and forth that I see here. It should be possible to establish a mutually satisfactory set of criteria for what constitutes 'really trying it' beforehand to avoid subsequent accusations of bad faith.

Replies from: pjeby, Cyan, Vladimir_Nesov
comment by pjeby · 2009-06-28T17:02:57.005Z · LW(p) · GW(p)

I would argue that the most efficient way of establishing credibility among the Less Wrong community would be to convince a highly regarded poster of that fact.

What makes you think that that's my goal?

Replies from: thoughtfulape
comment by thoughtfulape · 2009-06-29T01:49:46.951Z · LW(p) · GW(p)

Pjeby: If your goal isn't to convince the Less Wrong community of the effectiveness of your methodology, then I am truly puzzled as to why you post here. If convincing others is not your goal, then what is?

Replies from: pjeby
comment by pjeby · 2009-06-29T01:55:01.589Z · LW(p) · GW(p)

If convincing others is not your goal, then what is?

Helping others.

Replies from: Alicorn, Vladimir_Nesov
comment by Alicorn · 2009-06-29T02:37:10.215Z · LW(p) · GW(p)

Do you expect anyone to benefit from your expertise if you can't convince them you have it?

Replies from: pjeby
comment by pjeby · 2009-06-29T02:51:19.160Z · LW(p) · GW(p)

Do you expect anyone to benefit from your expertise if you can't convince them you have it?

Either someone uses the information I give or they don't. One does not have to be "convinced" of the correctness of something in order to test it.

But whether someone uses the information or not, what do I or my "expertise" have to do with it?

Replies from: arundelo, Vladimir_Nesov
comment by arundelo · 2009-06-29T03:35:58.399Z · LW(p) · GW(p)

But whether someone uses the information or not, what do I or my "expertise" have to do with it?

Someone is more likely to spend the time and effort to test something if they think it's more likely to be correct.

comment by Vladimir_Nesov · 2009-06-29T03:29:04.052Z · LW(p) · GW(p)

Either someone uses the information I give or they don't. One does not have to be "convinced" of the correctness of something in order to test it.

It's irrational for people who aren't convinced that the information is useful to use it.

Either a tiger eats celery or it doesn't. But the tiger has to be "convinced" that celery is tasty in order to taste it.

Replies from: pjeby
comment by pjeby · 2009-06-29T04:45:11.694Z · LW(p) · GW(p)

One of the most frustrating things about dealing with LW is the consistent confusion by certain parties between the terms "correct" and "useful".

I said "one does not have to be convinced of the correctness of something in order to test it", and you replied with something about usefulness. Therefore, there is nothing I can say about your response except that it's utterly unrelated to what I said.

Replies from: LeBleu, Technologos, Vladimir_Nesov
comment by LeBleu · 2009-06-29T21:58:56.411Z · LW(p) · GW(p)

You are the one who introduced correctness into the argument. Alicorn said:

Do you expect anyone to benefit from your expertise if you can't convince them you have it?

Feel free to read this as 'convince them your expertise is "useful" ' rather than your assumed 'convince them your expertise is "correct" '.

The underlying point is that there is a very large amount of apparently useless advice out there, and many self-help techniques seem initially useful but then stop being useful (as you are well aware, since your theory claims to explain why that happens).

The problem is that to convince someone to try your advice, you have to convince them that the expected gain (probability of it being useful × claimed benefit × probability of the claim being correct) is greater than the opportunity cost of the expected effort to try it. Due to others in the self-help market, the prior for it being useful is very low, and the prior for the claimed benefits equaling the actual benefits is low.
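
(A back-of-the-envelope version of that comparison, with entirely made-up numbers, just to make the inequality concrete:)

```python
# Hypothetical numbers for illustration only:
p_useful = 0.05          # prior that a random self-help technique does anything
claimed_benefit = 40.0   # hours saved if the claim is fully correct
p_claim_correct = 0.3    # how much of the claimed benefit to believe
cost_to_try = 10.0       # hours of effort for a fair trial

expected_gain = p_useful * claimed_benefit * p_claim_correct
print(expected_gain, cost_to_try, expected_gain > cost_to_try)
# 0.6 vs 10.0 -> not worth trying under these priors; raising p_useful
# (e.g. via a trusted reviewer's endorsement) is what changes the decision.
```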

You also are running into the prior that if someone is trying to sell you something, they are probably exaggerating its claims to make a sale. Dishonest salespeople spoil the sales possibilities for all the honest ones.

If you can convince someone with a higher standing in the community than you to test your advice and comment on the results of their test, you can raise individuals' probability expectations about the usefulness (or correctness) of your advice, and hence help more people than you otherwise would have.

P.S. I did go to your site and get added to your mailing list. However, even if your techniques turn out positively for me, I don't think I have any higher standing in this community than you do, so I doubt my results will hold much weight with this group.

Replies from: pjeby
comment by pjeby · 2009-06-29T22:37:59.527Z · LW(p) · GW(p)

You also are running into the prior that if someone is trying to sell you something, they are probably exaggerating its claims to make a sale.

Actually, I'm also running into a bias that merely because I have things to sell, I'm therefore trying to sell something in all places at all times... or that I'm always trying to "convince" people of something.

Indeed, the fact that you (and others) seem to think I need or even want to "convince" people of things is a symptom of this. Nobody goes around insisting that, say, Yvain needs to get some high-status people to validate his ideas and "convince" the "community" to accept them!

If I had it all to do over again, I think I would have joined under a pseudonym and never let on I even had a business.

comment by Technologos · 2009-06-29T06:13:22.595Z · LW(p) · GW(p)

You are certainly right that "one does not have to be convinced of the correctness of something in order to test it." But as you also said, immediately prior, "Either someone uses the information I give or they don't."

If we test information that we do not have reason to believe is useful, then we have a massive search space to cover. Much of the point of LW is to suggest useful regions for search, based on previous data.

So no, correctness is not a necessary condition of usefulness. But things that are correct are usually rather useful, and things that are not correct are less so. To the extent that you or your expertise are reliable indicators of the quality of your information, they help evaluate the probability of your information being useful, and hence the expected benefit of testing it.

Perhaps some parties on LW are actually confused by the distinction between truth and utility. I do not suspect Vladimir_Nesov is one of them.

Replies from: pjeby
comment by pjeby · 2009-06-29T18:50:28.783Z · LW(p) · GW(p)

But things that are correct are usually rather useful, and things that are not correct are less so.

Really? With what probability?

Or to put it another way: how were people able to start and put out fires for millennia before they had a correct theory of fire? Work metals without a correct atomic or molecular theory? Build catapults without a correct theory of gravity? Breed plants and animals without a correct theory of genetics?

In the entire history of humanity, "Useful" is negatively correlated with "Correct theory"... on a grand scale.

Sure, having a correct theory has some positive correlation with "useful", but there's usually a ton more information you need besides the correct theory to get to "useful", and more often, the theory ends up being derived from something that's already "useful" anyway.

Replies from: Cyan, Technologos, derekz
comment by Cyan · 2009-06-29T19:52:17.357Z · LW(p) · GW(p)

That's a shockingly poor argument. Who can constrain the future more effectively: someone who knows the thermodynamics of combustion engines, or someone who only knows how to start fires with a flint-and-steel and how to stop them with water? Someone who can use X-ray crystallography to assess their metallurgy, or someone who has to whack their product with a mallet to see if it's brittle? Someone who can fire mortars over ranges requiring Coriolis corrections (i.e., someone with a correct theory of mechanics) or someone who only knows how to aim a catapult by trial and error? Someone who can insert and delete bacterial genes, or someone who doesn't even know germ theory?

Someone who actually knows how human cognition works on all scales, or someone with the equivalent of a set of flint-and-steel level tools and a devotion to trial and error?

Replies from: Sideways, pjeby
comment by Sideways · 2009-06-29T21:27:56.753Z · LW(p) · GW(p)

'Correctness' in theories is a scalar rather than a binary quality. Phlogiston theory is less correct (and less useful) than chemistry, but it's more correct--and more useful!--than the theory of elements. The fact that the modern scientific theories you list are better than their precursors, does not mean their precursors were useless.

You have a false dichotomy going here. If you know of someone who "knows how human cognition works on all scales", or even just a theory of cognition as powerful as Newton's theory of mechanics is in its domain, then please, link! But if such a theory existed, we wouldn't need to be having this discussion. A strong theory of cognition will descend from a series of lesser theories of cognition, of which control theory is one step.

Unless you have a better theory, or a convincing reason to claim that "no-theory" is better than control theory, you're in the position of an elementalist arguing that phlogiston theory should be ignored because it can't explain heat generated by friction--while ignoring the fact that while imperfect, phlogiston theory is strictly superior to elemental theory or "no-theory".

Replies from: Cyan
comment by Cyan · 2009-06-30T01:35:18.201Z · LW(p) · GW(p)

You've misunderstood my emphasis. I'm an engineer -- I don't insist on correctness. In each case I've picked above, the emphasis is on a deeper understanding (a continuous quantity, not a binary variable), not on truth per se. (I mention correctness in the Coriolis example, but even there I have Newtonian mechanics in mind, so that usage was not particularly accurate.)

My key perspective can be found in the third paragraph of this comment.

I'm all for control theory as a basis for forming hypotheses and for Seth Roberts-style self-experimentation.

comment by pjeby · 2009-06-29T22:26:10.427Z · LW(p) · GW(p)

As best I can tell, you agree that what I said is true, but nonetheless dispute the conclusion... and you do so by providing evidence that supports my argument.

That's kind of confusing.

What I said was:

One of the most frustrating things about dealing with LW is the consistent confusion by certain parties between the terms "correct" and "useful".

And you gave an argument that some correct things are useful. Bravo.

However, you did not dispute the part where "useful" almost always comes before "correct"... thereby demonstrating precisely the confusion I posted about.

Useful and correct are not the same, and optimizing for correctness does not necessarily optimize usefulness, nor vice versa. That which is useful can be made correct, but that which is merely correct may be profoundly non-useful.

However, given a choice between a procedure which is useful to my goals (but whose "theory" is profoundly false), or a true theory which has not yet been reduced to practice, then, all else about these two pieces of information being equal, I'm probably going to pick the former -- as would most rational beings.

(To the extent you would pick the latter, you likely hold an irrational bias... which would also explain the fanboy outrage and downvotes that my comments on this subject usually provoke here.)

Replies from: Cyan
comment by Cyan · 2009-06-30T00:54:14.690Z · LW(p) · GW(p)

I did not simply argue that some correct things are useful. I pointed out that every example of usefulness you presented can be augmented beyond all recognition with a deeper understanding of what is actually going on.

Let me put it this way: when you write, "how were people able to start and put out fires for millennia..." the key word is "start": being satisfied with a method that works but provides no deep understanding is stagnation.

Ever seeking more useful methods without seeking to understand what is actually going on makes you an expert at whatever level of abstraction you're stuck on. Order-of-magnitude advancement comes by improving the abstraction.

However, given a choice between a procedure which is useful to my goals (but whose "theory" is profoundly false), or a true theory which has not yet been reduced to practice, then, all else about these two pieces of information being equal, I'm probably going to pick the former -- as would most rational beings.

I would also pick the former, provided my number one choice was not practical (perhaps due to time or resource constraints). The number one choice is to devote time and effort to making the true theory practicable. But if you never seek a true theory, you will never face this choice.

ETA: I'll address:

As best I can tell, you agree that what I said is true, but nonetheless dispute the conclusion... and you do so by providing evidence that supports my argument.

by saying that you are arguing against, and I am arguing for:

But things that are correct are usually rather useful, and things that are not correct are less so.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-30T02:27:39.955Z · LW(p) · GW(p)

Deep theory has profound long-term impact, but is useless for simple stuff.

Replies from: Cyan
comment by Cyan · 2009-06-30T03:12:10.733Z · LW(p) · GW(p)

What is considered simple stuff is itself a function of that profound long-term impact.

comment by Technologos · 2009-07-01T07:18:44.494Z · LW(p) · GW(p)

I agree with Cyan, but even more basically, the set of correct beliefs necessarily includes any and all useful beliefs, because anything that is useful but incorrect can be derived from correct beliefs as well (similar to Eliezer's Bayesians vs. Barbarians argument).

So, probabilistically, we should note that P(Useful|Correct)>P(Useful|Incorrect) because the space of correct beliefs is much smaller than the space of all beliefs, and in particular smaller than the space of incorrect beliefs. More importantly, as Sideways notes, more correct beliefs produce more useful effects; we don't know now whether we have a "correct" theory of genetics, but it's quite a bit more useful than its predecessor.

Replies from: pjeby, thomblake
comment by pjeby · 2009-07-01T16:05:22.853Z · LW(p) · GW(p)

You still don't get it. Correct beliefs don't spring full-grown from the forehead of Omega - they come from observations. And to get observations, you have to be doing something... most likely, something useful.

That's why your math is wrong for observed history - humans nearly always get "useful" first, then "correct".

Or to put it another way, in theory you can get to practice from theory, but in practice, you almost never do.

Replies from: Technologos
comment by Technologos · 2009-07-02T05:04:32.912Z · LW(p) · GW(p)

Let's assume that what you say is true, that utility precedes accuracy (and I happen to believe this is the case).

That does not in any way change the math. Perhaps you can give me some examples of (more) correct beliefs that are less useful than a related and corresponding (more) incorrect belief?

Replies from: pjeby
comment by pjeby · 2009-07-02T05:38:46.662Z · LW(p) · GW(p)

Perhaps you can give me some examples of (more) correct beliefs that are less useful than a related and corresponding (more) incorrect belief?

It doesn't matter if you have an Einstein's grasp of the physical laws, a Ford's grasp of the mechanics, and a lawyer's mastery of traffic law... you still have to practice in order to learn to drive.

Conversely, as long as you learn correct procedures, it doesn't matter if you have a horrible or even ludicrously incorrect grasp of any of the theories involved.

This is why, when one defines "rationality" in terms of strictly abstract mentations and theoretical truths, one tends to lose in the "real world" to people who have actually practiced winning.

Replies from: Technologos, Nick_Tarleton
comment by Technologos · 2009-07-02T07:08:12.917Z · LW(p) · GW(p)

And I wasn't arguing that definition, nor did I perceive any of the above discussion to be related to it. I'm arguing the relative utility of correct and incorrect beliefs, and the way in which the actual procedure of testing a position is related to the expected usefulness of that position.

To use your analogy, you and I certainly have to practice in order to learn to drive. If we're building a robot to drive, though, it damn sure helps to have a ton of theory ready to use. Does this eliminate the need for testing? Of course not. But having a correct theory (to the necessary level of detail) means that testing can be done in months or years instead of decades.

To the extent that my argument and the one you mention here interact, I suppose I would say that "winning" should include not just individual instances, things we can practice explicitly, but success in areas with which we are unfamiliar. That, I suggest, is the role of theory and the pursuit of correct beliefs.

Replies from: pjeby
comment by pjeby · 2009-07-04T16:00:19.018Z · LW(p) · GW(p)

To use your analogy, you and I certainly have to practice in order to learn to drive. If we're building a robot to drive, though, it damn sure helps to have a ton of theory ready to use. Does this eliminate the need for testing? Of course not. But having a correct theory (to the necessary level of detail) means that testing can be done in months or years instead of decades.

Actually, I suspect that this is not only wrong, but terribly wrong. I might be wrong, but it seems to me that robotics has gradually progressed from having lots of complicated theories and sophisticated machinery towards simple control systems and improved sensory perception... and that this progression happened because the theories didn't work in practice.

So, AFAICT, the argument that "if you have a correct theory, things will go better" is itself one of those ideas that work better in theory than in practice, because usually the only way to get a correct theory is to go out and try stuff.

Hindsight bias tends to make us completely ignore the fact that most discoveries come about from essentially random ideas and tinkering. We don't like the idea that it's not our "intelligence" that's responsible, and we can very easily say that, in hindsight, the robotics theories were wrong, and of course if they had the right theory, they wouldn't have made those mistakes.

But this is delusion. In theory, you could have a correct theory before any practice, but in practice, you virtually never do. (And pointing to nuclear physics as a counterexample is like pointing to lottery winners as proof that you can win the lottery; in theory, you can win the lottery, but in practice, you don't.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-04T16:22:35.643Z · LW(p) · GW(p)

Actually, I suspect that this is not only wrong, but terribly wrong. I might be wrong

You are wrong. The above is a myth promoted by the Culture of Chaos and the popular media. Advanced modern robots use advanced modern theory - e.g. particle filters to integrate multiple sensory streams to localize the robot (a Bayesian method).

Replies from: Technologos
comment by Technologos · 2009-07-04T17:00:41.934Z · LW(p) · GW(p)

And this is even more true when considering elements in the formation of a robot that need to be handled before the AI: physics, metallurgy, engineering, computer hardware design, etc.

Without theory--good, workably-correct theory--the search space for innovations is just too large. The more correct the theory, the less space has to be searched for solution concepts. If you're going to build a rocket, you sure as hell better understand Newton's laws. But things will go much smoother if you also know some chemistry, some material science, and some computer science.

For a solid example of theory taking previous experimental data and massively narrowing the search space, see RAND's first report on the feasibility of satellites here.

comment by Nick_Tarleton · 2009-07-02T06:43:59.609Z · LW(p) · GW(p)

IAWYC but

Conversely, as long as you learn correct procedures, it doesn't matter if you have a horrible or even ludicrously incorrect grasp of any of the theories involved.

Procedures are brittle. Theory lets you generalize procedures for new contexts, which you can then practice.

comment by thomblake · 2009-07-01T16:53:33.426Z · LW(p) · GW(p)

the space of correct beliefs is much smaller than the space of all beliefs, and in particular smaller than the space of incorrect beliefs.

I'm not sure I'd grant that unless you can show it mathematically. It seems to me there are infinite beliefs of all sorts, and I'm not sure how their orders compare.

Replies from: Technologos
comment by Technologos · 2009-07-02T05:01:43.459Z · LW(p) · GW(p)

A heuristic method that underlies my reasoning:

Select an arbitrary true predicate sentence Rab. That sentence almost certainly (in the mathematical sense) is false if an arbitrary c is substituted for b. Thus, whatever the cardinality of the set of true sentences, for every true sentence we can construct infinitely many false sentences, where the opposite is not true. Thus, the cardinality of the set of false sentences is greater than that of the set of true sentences.
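
(A concrete, if toy, illustration of the substitution step only -- not of the cardinality conclusion, which is disputed below. The relation and domain here are arbitrary choices:)

```python
domain = range(1, 101)
# R(a, b): "b is the successor of a". R(3, 4) is a true sentence.
true_subs = sum(1 for c in domain if c == 3 + 1)    # substitutions that keep it true
false_subs = sum(1 for c in domain if c != 3 + 1)   # substitutions that make it false
print(true_subs, false_subs)  # 1 vs 99: almost every substitution yields a false sentence
```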

Replies from: thomblake
comment by thomblake · 2009-07-02T13:57:26.697Z · LW(p) · GW(p)

I don't think that's as rigorous as you'd like it to be. I don't grant the "almost certainly false" step.

Take a predicate P which is false for Pab but true in all other cases. Then, you cannot perform the rest of the steps in your proof with P. Consider that there is also the predicate Q such that Qab is true about half the time for arbitrary a and b. How will you show that most situations are like your R?

I'm also not sure your proof really shows a difference in cardinality. Even if most predicates are like your R, there still might be infinitely many true sentences you can construct, even if they're more likely to be false.

Replies from: Technologos
comment by Technologos · 2009-07-02T20:38:44.310Z · LW(p) · GW(p)

It's definitely not rigorous, and I tried to highlight that by calling it a heuristic. Without omniscience, I can't prove that the relations hold, but the evidence is uniformly supportive.

Can you name such a predicate other than the trivial "is not" (which is guaranteed to be true for all but one entity, as in A is not A) which is true for even a majority of entities? The best I can do is "is not describable by a message of under N bits," but even then there are self-referential issues. If the majority of predicates were like your P and Q, then why would intelligence be interesting? "Correctness" would be the default state of a proposition and we'd only be eliminating a (relatively) small number of false hypotheses from our massive pool of true ones. Does that match either your experience or the more extensive treatment provided in Eliezer's writings on AI?

If you grant my assertion that Rab is almost certainly false if c is substituted for b, then I think the cardinality proof does follow. Since we cannot put the true sentences in one-to-one correspondence with the false sentences, and by the assertion there are more false sentences, the latter must have a greater (infinite?) cardinality than the former, no?

Replies from: JGWeissman, thomblake
comment by JGWeissman · 2009-07-02T21:06:02.580Z · LW(p) · GW(p)

The cardinality of the sets of true and false statements is the same. The operation of negation is a bijection between them.

Replies from: Technologos
comment by Technologos · 2009-07-03T21:33:31.160Z · LW(p) · GW(p)

You're right. I was considering constructive statements, since the negation of an arbitrary false statement has infinitesimal informational value in search, but you're clearly right when considering all statements.

comment by thomblake · 2009-07-02T20:46:31.776Z · LW(p) · GW(p)

If by "almost certainly false" you mean that say, 1 out of every 10,000 such sentences will be true, then no, that does not entail a higher order of infinity.

Replies from: Technologos
comment by Technologos · 2009-07-03T21:12:29.273Z · LW(p) · GW(p)

I meant, as in the math case, that the probability of selecting a true statement by choosing one at random out of the space of all possible statements is 0 (there are true statements, but as a literal infinitesimal).

It's possible that both infinities are countable, as I am not sure how one would prove it either way, but that detail doesn't really matter for the broader argument.

Replies from: Technologos
comment by Technologos · 2009-07-04T06:13:50.828Z · LW(p) · GW(p)

See the note by JGWeissman--this is only true when considering constructively true statements (those that carry non-negligible informational content, i.e. not the negation of an arbitrary false statement).

comment by derekz · 2009-06-29T21:33:22.848Z · LW(p) · GW(p)

"Useful" is negatively correlated with "Correct theory"... on a grand scale.

Sure, having a correct theory has some positive correlation with "useful",

Which is it?

I think all the further you can go with this line of thought is to point out that lots of things are useful even if we don't have a correct theory for how they work. We have other ways to guess that something might be useful and worth trying.

Having a correct theory is always nice, but I don't see that our choice here is between having a correct theory or not having one.

Replies from: pjeby
comment by pjeby · 2009-06-29T21:47:57.222Z · LW(p) · GW(p)

Which is it?

Both. Over the course of history:

Useful things -> mostly not true theories.

True theory -> usually useful, but mostly first preceded by useful w/untrue theory.

Replies from: pwno
comment by pwno · 2009-06-29T21:54:15.415Z · LW(p) · GW(p)

Aren't true theories defined by how useful they are in some application?

Replies from: Cyan, JustinShovelain
comment by Cyan · 2009-06-30T01:20:41.824Z · LW(p) · GW(p)

Perhaps surprisingly, statistics has an answer, and that answer is no. If in your application the usefulness of a statistical model is equivalent to its predictive performance, then choose your model using cross-validation, which directly optimizes for predictive performance. When that gets too expensive, use the AIC, which is equivalent to cross-validation as the amount of data grows without bound. But if the true model is available, neither AIC nor cross-validation will pick it out of the set of models being considered as the amount of data grows without bound.
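
(A minimal numpy sketch of the two selection procedures mentioned here -- cross-validated prediction error and AIC -- on an invented polynomial-regression example. The data, the AIC formula shown, and the fold count are illustrative assumptions; this doesn't by itself demonstrate the asymptotic claims.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.5, n)  # data generated from a quadratic

def rss(x_tr, y_tr, x_te, y_te, degree):
    """Fit a degree-d polynomial on the training set, return RSS on the test set."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    resid = y_te - np.polyval(coeffs, x_te)
    return float(resid @ resid)

folds = np.array_split(rng.permutation(n), 5)
for degree in range(1, 7):
    k = degree + 1                                            # fitted coefficients
    aic = n * np.log(rss(x, y, x, y, degree) / n) + 2 * k     # Gaussian-error AIC, up to a constant
    cv = sum(rss(np.delete(x, f), np.delete(y, f), x[f], y[f], degree) for f in folds)
    print(degree, round(aic, 1), round(cv, 1))                # choose the degree minimizing either score
```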

comment by JustinShovelain · 2009-06-29T22:52:06.964Z · LW(p) · GW(p)

define: A theory's "truthfulness" as how much probability mass it has after appropriate selection of prior and applications of Bayes' theorem. It works as a good measure for a theory's "usefulness" as long as resource limitations and psychological side effects aren't important.

define: A theory's "usefulness" as a function of resources needed to calculate its predictions to a certain degree of accuracy, the "truthfulness" of the theory itself, and side effects. Squinting at it, I get something roughly like: usefulness(truthfulness, resources, side effects) = truthfulness * accuracy(resources) + messiness(side effects)

So I define "usefulness" as a function and "truthfulness" as its limiting value as side effects go to 0 and resources go to infinity. Notice how I shaped the definition of "usefulness" to avoid mention of context specific utilities; I purposefully avoided making it domain specific or talking about what the theory is trying to predict. I did this to maintain generality.

(Note: For now I'm glossing over the issue of how to deal with abstracting over concrete hypotheses and integrating the properties of this abstraction with the definitions.)
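
(A literal, toy transcription of the definition above; accuracy() and messiness() are placeholder functions invented here just so the limiting behaviour is visible:)

```python
def accuracy(resources):
    return resources / (resources + 1.0)   # approaches 1 as resources grow without bound

def messiness(side_effects):
    return -abs(side_effects)              # vanishes as side effects go to 0

def usefulness(truthfulness, resources, side_effects):
    return truthfulness * accuracy(resources) + messiness(side_effects)

# As resources -> infinity and side effects -> 0, usefulness -> truthfulness:
print(usefulness(0.9, 1e9, 0.0))  # ~0.9
```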

Replies from: jimrandomh
comment by jimrandomh · 2009-06-29T23:11:29.757Z · LW(p) · GW(p)

Your definition of usefulness fails to include the utility of the predictions made, which is the most important factor. A theory is useful if there is a chain of inference from it to a concrete application, and its degree of usefulness depends on the utility of that application, whether it could have been reached without using the theory, and the resources required to follow that chain of inference. Measuring usefulness requires entangling theories with applications and decisions, whereas truthfulness does not. Consequently, it's incorrect to treat truthfulness as a special case of usefulness or vise versa.

Replies from: pjeby, JustinShovelain
comment by pjeby · 2009-06-29T23:54:45.141Z · LW(p) · GW(p)

Measuring usefulness requires entangling theories with applications and decisions, whereas truthfulness does not. Consequently, it's incorrect to treat truthfulness as a special case of usefulness or vise versa.

Thank you - that's an excellent summary.

comment by JustinShovelain · 2009-06-29T23:39:24.252Z · LW(p) · GW(p)

From pwno: "Aren't true theories defined by how useful they are in some application?"

My definition of "usefulness" was built with the express purpose of relating the truth of theories to how useful they are and is very much a context specific temporary definition (hence "define:"). If I had tried to deal with it directly I would have had something uselessly messy and incomplete, or I could have used a true but also uninformative expectation approach and hid all of the complexity. Instead, I experimented and tried to force the concepts to unify in some way. To do so I stretched the definition of usefulness pretty much to the breaking point and omitted any direct relation to utility functions. I found it a useful thought to think and hope you do as well even if you take issue with my use of the name "usefulness".

comment by Vladimir_Nesov · 2009-06-29T13:56:13.134Z · LW(p) · GW(p)

Actions of high utility are useful. Of a set of available actions, the correct action to select is the most useful one. A correct statement is one expressing the truth, or probabilistically, an event of high probability. In this sense, a correct choice of action is one of which it is a correct statement to say that it is the most useful one.

It's beside the point actually, since you haven't shown that your info is either useful or correct.

comment by Vladimir_Nesov · 2009-06-29T02:19:57.989Z · LW(p) · GW(p)

If convincing others is not your goal, then what is?

Helping others.

FYI, The Others is a group of fictional characters who inhabit the mysterious island in the American television series Lost.

comment by Cyan · 2009-06-28T15:57:32.092Z · LW(p) · GW(p)

pjeby will be more likely to notice this proposition if you post it as a reply to one of his comments, not one of mine.

comment by Vladimir_Nesov · 2009-06-28T10:47:25.106Z · LW(p) · GW(p)

Nope. The fact that you, personally, experience winning a lottery, doesn't support a theory that playing a lottery is a profitable enterprise.

Replies from: conchis
comment by conchis · 2009-06-28T13:31:00.218Z · LW(p) · GW(p)

What? If the odds of the lottery are uncertain, and your sample size is actually one, then surely it should shift your estimate of profitability.

Obviously a larger sample is better, and the degree to which it shifts your estimate will depend on your prior, but to suggest the evidence would be worthless in this instance seems odd.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T14:18:28.375Z · LW(p) · GW(p)

It's impossible for playing a lottery to be profitable, both before you ever played it, and after you won a million dollars. The tenth decimal place doesn't really matter.

Replies from: Vladimir_Golovin, Benquo
comment by Vladimir_Golovin · 2009-06-28T15:02:57.038Z · LW(p) · GW(p)

It's impossible for playing a lottery to be profitable, both before you ever played it, and after you won a million dollars

I wonder what's your definition of 'profit'.

True story: when I was a child, I "invested" about 20 rubles in a slot machine. I won about 50 rubles that day and never played slot machines (or any lottery at all) again since then. So:

  • Expenses: 20 rubles.
  • Income: 50 rubles.
  • Profit: 30 rubles.

Assuming that we're using a dictionary definition of the word 'profit', the entire 'series of transactions' with the slot machine was de facto profitable for me.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T15:12:19.022Z · LW(p) · GW(p)

It's obvious that to interpret my words correctly (as not being obviously wrong), you need to consider only big (cumulative) profit. And again, even if you did win a million dollars, that still doesn't count; it only counts if you show that you were likely to win a million dollars (even if you didn't).

Replies from: conchis, Alicorn
comment by conchis · 2009-06-28T15:36:06.927Z · LW(p) · GW(p)

The only way I can make sense of your comment is to assume that you're defining the word lottery to mean a gamble with negative expected value. In that case, your claim is tautologically correct, but as far as I can tell, largely irrelevant to a situation such as this, where the point is that we don't know the expected value of the gamble and are trying to discover it by looking at evidence of its returns.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T15:48:40.166Z · LW(p) · GW(p)

That expected value is negative is a state of knowledge. We need careful studies to show whether a technique/medicine/etc is effective precisely because without such a study our state of knowledge shows that the expected value of the technique is negative. At the same time, we expect the new state of knowledge after the study to show that either the technique is useful, or that it's not.

That's one of the traps of woo: you often can't efficiently demonstrate that it's effective, and, through intuition probably related to conservation of expected evidence, you insist that if you don't have a better method to show its effectiveness, the best available method should be enough, because it's ridiculous to hold the claim to a higher standard of proof on one side than on the other. But you have to: the prior belief plays its part, and the threshold to changing a decision may be too far away to cross by simple arguments. The intuitive thrust of the principle doesn't carry over to expected utility because of the threshold; it may well be that you have a technique for which there is a potential test that could demonstrate that it's effective, but the test is unavailable, and without performing the test the expected value of the technique remains negative.

Replies from: conchis
comment by conchis · 2009-06-28T20:25:10.450Z · LW(p) · GW(p)

I'm afraid I'm struggling to connect this to your original objections. Would you mind clarifying?

ETA: By way of attempting to clarify my issue with your objection, I think the lottery example differs from this situation in two important ways. AFAICT, the uselessness of evidence that a single person has won the lottery is a result of:

  1. the fact that we usually know the odds of winning the lottery are very low, so evidence has little ability to shift our priors; and

  2. that in addition to the evidence of the single winner, we also have evidence of incredibly many losers, so the sum of evidence does not favour a conclusion of profitability.

Neither of these seem to be applicable here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T20:36:53.414Z · LW(p) · GW(p)

The analogy is this: using speculative self-help techniques corresponds to playing a lottery. In both cases you expect a negative outcome, and in both cases making one more observation, even if it's an observation of success, even if you experience it personally, means very little for the estimation of the expected outcome. There is no analogy in the lottery for studies that support the efficacy of self-help techniques (or some medicine).

Replies from: Benquo
comment by Benquo · 2009-06-30T13:04:39.801Z · LW(p) · GW(p)

It sounds like you're saying:

1) the range of conceivably effective self-help techniques is very large relative to the number of actually effective techniques

2) a technique that is negative-expected-value can look positive with small n

3) consequently, using small-n trials on lots of techniques is an inefficient way to look for effective ones, and is itself negative-expected-value, just like looking for the correct lottery number by playing the lottery.

In this analogy, it is the whole self-help space, not the one technique, that is like a lottery.

Am I on the right track?

comment by Alicorn · 2009-06-28T15:34:30.701Z · LW(p) · GW(p)

I don't think the principle of charity generally extends so far as to make people reinterpret you when you don't go to the trouble of phrasing your comments so they don't sound obviously wrong.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T15:55:12.744Z · LW(p) · GW(p)

If you see a claim that has one interpretation making it obviously wrong and another one sensible, and you expect a sensible claim, it's a simple matter of robust communication to assume the sensible one and ignore the obviously wrong. It's much more likely that the intended message behind the inapt textual transcription wasn't the obviously wrong one, and the content of communication is that unvoiced thought, not the text used to communicate it.

Replies from: thomblake
comment by thomblake · 2009-06-28T16:44:41.792Z · LW(p) · GW(p)

it's a simple matter of robust communication to assume the sensible one and ignore the obviously wrong.

But if the obvious interpretation of what you said was obviously wrong, then it's your fault, not the reader's, if you're misunderstood.

the content of communication is that unvoiced thought, not the text used to communicate it.

All a reader can go by is the text used to communicate the thought. What we have on this site is text which responds to other text. I could just assume you said "Why yes, thoughtfulape, that's a marvelous idea! You should do that nine times. Purple monkey dishwasher." if I was expected to respond to things you didn't say.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T17:00:17.052Z · LW(p) · GW(p)

My point is that the prior under which you interpret the text is shaped by the expectations about the source of the text. If the text, taken alone, is seen as likely meaning something that you didn't expect to be said, then the knowledge about what you expect to be said takes precedence over the knowledge of what a given piece of text could mean if taken out of context. Certainly, you can't read minds without data, but the data is about minds, and that's a significant factor in its interpretation.

Replies from: pjeby
comment by pjeby · 2009-06-28T17:05:53.323Z · LW(p) · GW(p)

If the text, taken alone, is seen as likely meaning something that you didn't expect to be said, then the knowledge about what you expect to be said takes precedence

This is why people often can't follow simple instructions for mental techniques - they do whatever they already believe is the right thing to do, not what the instructions actually say.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T17:09:17.763Z · LW(p) · GW(p)

That's overconfidence, a bias, but so is underconfidence.

comment by Benquo · 2009-06-30T12:57:24.557Z · LW(p) · GW(p)

I don't see how that's relevant unless we already agree that this is like a lottery. My reading of conchis's reply to your comment is that conchis doesn't think we should have strong priors in that direction.

Why do you think this is a lottery-type situation?

comment by Roko · 2009-06-27T20:51:03.930Z · LW(p) · GW(p)

Two questions:

  1. The linked PDF is meant for non-rational, non-high IQ people who need everything in short sentences with relevant words in bold so that they can understand. Can PJ produce something that is a little less condescending to read, and is suited to the more intelligent reader? For example, less marketing, more scientific scepticism.

  2. How do I get onto PJ's mailing list that Kaj speaks of?

Replies from: pjeby, Kaj_Sotala
comment by pjeby · 2009-06-27T23:07:59.134Z · LW(p) · GW(p)
  1. See this comment.

  2. Given your statement #1, why would you want to be on a mailing list of "non-rational, non-high IQ" people? ;-)

(I'm joking, of course; I have many customers who read and enjoy OB and LW, though I don't think any have been top-level posters. Interestingly enough, my customers are so well-read that I usually receive more articles on recent research from them as emailed, "hey didja see"s, than I come across directly or see on LW!)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-27T23:49:29.103Z · LW(p) · GW(p)

More articles than you see on LW? That's absurd!

Replies from: pjeby
comment by pjeby · 2009-06-27T23:56:07.462Z · LW(p) · GW(p)

Huh? More articles than you see on LW? That's absurd!

I usually see more articles about recent scientific research from my paying customers than I encounter via LW postings.

Or more precisely, and to be as fair as possible, I remember seeing more articles emailed to me from my customers about relevant research of interest to me than I remember discovering via LW... or such memories are at any rate easier to recall. Less absurd now? ;-)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T00:21:38.484Z · LW(p) · GW(p)

That's called "irony", hinting to the fact that not a whole lot of articles are cited on LW, too few to warrant it a mention as a measure for the quantity of articles. Routine research browsing makes such quantity irrelevant, the only benefit might come from a mention of something you didn't think existed, because if you thought it existed, you'd be able to look it up yourself.

P.S. I deleted my comment (again) before seeing your reply; I thought it was too mindless.

comment by Kaj_Sotala · 2009-06-27T23:23:55.612Z · LW(p) · GW(p)

I think I got on the mailing list here. Alternatively, it could've been a result of giving my e-mail addy on this page.

comment by timtyler · 2009-06-27T10:05:26.750Z · LW(p) · GW(p)

I found the article painful reading. Things like the section entitled "Desire minus Perception equals Energy" very rapidly make me switch off.

Replies from: jimrandomh, derekz, pjeby, SoullessAutomaton
comment by jimrandomh · 2009-06-27T16:58:16.128Z · LW(p) · GW(p)

I found the article painful reading.

I've heard this sort of statement repeatedly about pjeby's writing style, from different people, and I have a theory as to why. It's a timing pattern, which I will illustrate with some lorem ipsum:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec pharetra varius nisl, quis interdum lectus porta vel...

Main point!

Nullam sit amet risus nibh. Suspendisse ut sapien et tellus semper scelerisque.

The main points are set off from the flow of the text by ellipses and paragraph breaks. This gives them much more force, but also brings to mind other works that use the same timing pattern. Most essays don't do this, or do it exactly once when introducing the thesis. On the other hand, television commercials and sales pitches use it routinely. It is possible that some people have built up an aversion to this particular timing pattern, by watching commercials and not wanting to be influenced by them. If that's the problem, then when those people read it they'll feel bothered by the text, but probably won't know why, and will attribute it to whatever minor flaws they happen to notice, even if unrelated. People who only watch DVDs and internet downloads, like me, won't be bothered, nor will people who developed different mechanisms for resisting commercials. This is similar to the "banner blindness" issue identified in website usability testing with eye trackers, where people refuse to look at anything that looks even remotely like a banner ad, even if it's not a banner ad but the very thing they're supposed to be looking for.

If this is true, then fixing the style issue is simply a matter of removing some of the italics, ellipses and paragraph breaks in editing. It should be possible to find out whether this is the problem by giving A/B tests to people who dislike your writing.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-27T17:19:26.467Z · LW(p) · GW(p)

This is a fascinating suggestion and might well be correct. Certainly, my inability to read more than a paragraph of PJ Eby's writing definitely has something to do with it "sounding like a sales pitch". May be a matter of word choice or even (gulp) content too, though.

comment by derekz · 2009-06-27T20:38:40.127Z · LW(p) · GW(p)

I suppose for me it's the sort of breathless enthusiastic presentation of the latest brainstorm as The Answer. Also I believe I am biased against ideas that proceed from an assumption that our minds are simple.

Still, in a rationalist forum, if one is to not be bothered by dismissing the content of material based on the form of its presentation, one must be pretty confident of the correlation. Since a few people who seem pretty smart overall think there might be something useful here, I'll spend some time exploring it.

I am wondering about the proposed ease with which we can purposefully rewire control circuits. It is counterintuitive to me, given that "bad" ones (in me at least) do not appear to have popped up one afternoon but rather have been reinforced slowly over time.

If anybody does manage to achieve lasting results that seem like purposeful rewiring, I'm sure we'd all like to hear descriptions of your methods and experience.

Replies from: pjeby
comment by pjeby · 2009-06-27T23:20:53.538Z · LW(p) · GW(p)

I am wondering about the proposed ease with which we can purposefully rewire control circuits. It is counterintuitive to me, given that "bad" ones (in me at least) do not appear to have popped up one afternoon but rather have been reinforced slowly over time.

This is one place where PCT is not as enlightening without adding a smidge of HTM, or more precisely, the memory-prediction framework.

The MPF says that we match patterns as sequences of subpatterns: if one subpattern "A" is often followed by "B", our brain compresses this by creating (at a higher layer) a symbol that means "AB". However, in order for this to happen, the A->B correlation has to happen at a timescale where we can "notice" it. If "A" happens today, and "B" tomorrow (for example), we are much less likely to notice!
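
(A minimal sketch of that compression step, where simple adjacency stands in for "a timescale where we can notice it"; the function name and threshold are inventions for illustration, not part of MPF:)

```python
from collections import Counter

def chunk_once(seq, min_count=3):
    """One pass of the "AB" compression described above: find the most frequent
    adjacent pair and replace its occurrences with a new higher-level symbol."""
    pairs = Counter(zip(seq, seq[1:]))
    (a, b), count = pairs.most_common(1)[0]
    if count < min_count:
        return seq                      # nothing co-occurs often enough to chunk
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            merged.append(a + b)        # new compound symbol, e.g. "AB"
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

stream = list("ABXABYABZCD")
print(chunk_once(stream))  # ['AB', 'X', 'AB', 'Y', 'AB', 'Z', 'C', 'D']
```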

Coming back to your question: most of our problematic controller structures are problematic at too long of a timescale for it to be easily detected (and extinguished). So PCT-based approaches to problem solving work by forcing the pieces together in short-term memory so that an A->B sequence fires off ... at which point you then experience an "aha", and change the intercontroller connections or reference levels. (Part of PCT theory is that the function of conscious awareness may well be to provide this sort of "debugging support" function, that would otherwise not exist.)

PCT also has some interesting things to say about reinforcement, by the way, that completely turn the standard ideas upside down, and I would really love to see some experiments done to confirm or deny. In particular, it has a novel and compact explanation of why variable-schedule reinforcement works better for certain things, and why certain schedules produce variable or "superstitious" action patterns.

Replies from: derekz
comment by derekz · 2009-06-27T23:42:11.427Z · LW(p) · GW(p)

Thank you for the detailed reply, I think I'll read the book and revisit your take on it afterward.

comment by pjeby · 2009-06-27T16:04:44.841Z · LW(p) · GW(p)

As SA says, I did not write the article for the LW audience. However, D-P=E is a straightforward colloquial reframing of PCT's "r-p=e" formula, i.e. reference signal minus perception signal equals error, which then gets multiplied by something and fed off to an effector.
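
For the curious, a minimal toy sketch of that loop (the gain value, variable names, and the toy "world" below are my own illustrative assumptions, not anything from Powers):

```python
# Minimal sketch of a single PCT-style control loop: r - p = e, with the
# error scaled by a gain and fed to an effector. The gain and the toy
# "world" are arbitrary assumptions for illustration only.

def control_step(reference, perception, gain=0.5):
    """One iteration: compare what we want (r) with what we sense (p)."""
    error = reference - perception   # r - p = e
    return gain * error              # scaled error sent to the effector

# Toy environment: the effector output nudges the perceived variable.
perception, reference = 0.0, 10.0
for _ in range(20):
    perception += control_step(reference, perception)
print(round(perception, 2))          # converges toward the reference (10.0)
```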

comment by SoullessAutomaton · 2009-06-27T15:16:45.871Z · LW(p) · GW(p)

Obviously, it was written with a very different demographic in mind than LW. I imagine many of the people that article was written for would find the material here to be unfriendly, cryptic, and opaque.

This is probably a rational approach to marketing on P. J. Eby's part, but it does make it hard for some people here to read his other work.

comment by djcb · 2009-06-27T08:38:11.741Z · LW(p) · GW(p)

So we have:

  • A new metaphor to Finally Explain The Brain;

  • "While Eby provides few references and no peer-reviewed experimental work to support his case [...]"

  • A self-help book: "Thinking things Done(tm) The Effortless way to Start, Focus and finally Finish..." (really, I did not make this up).

I'd say some more skepticism is warranted.

Replies from: pjeby, Roko, Vladimir_Nesov
comment by pjeby · 2009-06-27T17:30:45.438Z · LW(p) · GW(p)

A new metaphor to Finally Explain The Brain;

Not even remotely new; "Behavior: The Control Of Perception" was written in 1973, IIRC. And yes, it's cited by other research, and cites prior research that provides evidence for specific control systems in the brain and nervous system, at several of the levels proposed by Powers.

provides few references and no peer-reviewed experimental work

I don't, but "Behavior: The Control Of Perception" has them by the bucket load.

comment by Roko · 2009-06-27T15:53:03.695Z · LW(p) · GW(p)

You are - I think - ignoring the potential value of this information.

When assessing how useful a post is, one should consider the product of the weight of evidence it brings to bear and the importance of the information. In this case, PJ Eby and Kaj are telling us something that is more important than - in my estimate - 99% of what you or I have ever read. We should thank them for this, and instead of complaining about a lack of evidence or only weak evidence, we should go forth and find more, for example by doing a literature search or by trying the techniques.

Replies from: djcb, Vladimir_Nesov
comment by djcb · 2009-06-27T19:53:26.817Z · LW(p) · GW(p)

I wasn't saying the post wasn't useful - at least it brought my attention to Richard Kennaway's post on the interesting concept of explaining brain functions in terms of control systems.

But, the thing is that every day brings us new theories which have great potential value - if true. But most of them aren't. Given limited time, we cannot pursue each of them. We have to be selective.

So, when I open that PDF linked in the first line of the article... that is, to put it mildly, not up to LessWrong-standards. Is that supposed to be 'more important than [...] 99% of what you or I have ever read'? It even ends in a sales pitch for books and workshops.

So while Control Theory may be useful for understanding the brain, this material is a distraction at best.

Replies from: Roko, Roko
comment by Roko · 2009-06-27T20:25:02.221Z · LW(p) · GW(p)

that is, to put it mildly, not up to LessWrong-standards

yes, this is true. I wonder if PJ could produce something rigorous and not-for-idiots?

Replies from: pjeby
comment by pjeby · 2009-06-27T22:31:09.088Z · LW(p) · GW(p)

There are lots of PCT textbooks out there; I wrote based on two of them (combined with my own prior knowledge): "Behavior: The Control Of Perception" by William T. Powers, and "Freedom From Stress", by Edward E. Ford. The first book has math and citations by the bucketload, the latter is a layperson's guide to practical PCT applications written by a psychologist.

Replies from: Yvain, Vladimir_Nesov
comment by Scott Alexander (Yvain) · 2009-06-28T19:37:25.083Z · LW(p) · GW(p)

Wait a second. There's a guy who writes textbooks about akrasia named Will Powers? That's great.

Replies from: pjeby, Alicorn
comment by pjeby · 2009-06-28T21:25:27.919Z · LW(p) · GW(p)

Wait a second. There's a guy who writes textbooks about akrasia named Will Powers? That's great.

"Behavior: The Control of Perception" has very little to say about akrasia actually. The chapter on "Conflict" does a wee bit, I suppose, but only from the perspective of what a PCT perspective predicts should happen when control systems are in conflict.

I haven't actually seen a PCT perspective on akrasia, procrastination, or willpower issues yet, apart from my own.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T21:33:01.393Z · LW(p) · GW(p)

I haven't actually seen a PCT perspective on akrasia, procrastination, or willpower issues yet, apart from my own.

If I'm not mistaken, there is a little cottage industry researching it for years. See e.g.
Albert Bandura, Edwin A. Locke. (2003). Negative Self-Efficacy and Goal Effects Revisited. (PDF) (it's a critique, but there are references as well).

Replies from: pjeby
comment by pjeby · 2009-06-28T22:06:48.232Z · LW(p) · GW(p)

Fascinating. However, it appears that both that paper and the papers it critiques were written by people who've utterly failed to understand PCT, in particular the insight that aggregate perceptions are measured over time... which means you can be positively motivated to achieve goals in order to maintain your high opinion of yourself -- and still have that motivation be driven by an error signal.

That is, the mere passage of time without further achievement will cause an increasing amount of "error" to be registered, without requiring any special action.
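
A toy way to see this (my own framing, not from either paper; the decay rate and reference level are arbitrary): if the controlled perception is a time-decayed aggregate of recent achievements, the error grows with each idle day even though nothing "bad" happens.

```python
# Sketch of the point above: if the perception being controlled is a
# time-decayed aggregate of recent achievements, the mere passage of time
# makes the perception drift down and the error (r - p) grow, with no
# special action required. Decay rate and reference level are arbitrary.

def day_passes(perception, achievement, reference=10.0, decay=0.9):
    perception = decay * perception + achievement   # leaky aggregate over time
    return perception, reference - perception       # new perception and error

perception = 10.0        # start with the aggregate at the reference level
for day in range(1, 6):
    perception, error = day_passes(perception, achievement=0.0)  # idle day
    print(f"day {day}: perceived accomplishment {perception:.1f}, error {error:.1f}")
# the error climbs each day purely because time passes without achievement
```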

Both this paper and the paper it critiques got this basic understanding wrong, as far as I can tell. (It also doesn't help that the authors of the paper you linked seem to think that materialistic reduction is a bad thing!)

comment by Alicorn · 2009-06-28T20:37:20.327Z · LW(p) · GW(p)

It is in fact so great that I suspect it might be a pen name.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-07-07T07:35:27.620Z · LW(p) · GW(p)

It really is his name. I know him personally. (But he is informally known as Bill, not Will.)

Replies from: None
comment by [deleted] · 2009-08-04T16:35:23.776Z · LW(p) · GW(p)

Can you tell him that many of the links on this page are broken? http://www.brainstorm-media.com/users/powers_w/

comment by Vladimir_Nesov · 2009-06-27T22:46:16.023Z · LW(p) · GW(p)

Then both are of little relevance. More recent studies and surveys will be closer to the truth.

comment by Roko · 2009-06-27T20:09:59.469Z · LW(p) · GW(p)

But, the thing is that every day brings us new theories which have great potential value

Can you name any other theories that have (in your opinion) as a great a potential value to you personally as this one that you read yesterday?

comment by Vladimir_Nesov · 2009-06-27T15:59:28.165Z · LW(p) · GW(p)

And how's that at all important? The info isn't unique, so the progress in its development and application doesn't depend on whether you or I study it. If the fruits of whatever this thing is (which remains meaningless to me until I study it) prove valuable, I'll hear about them in good time. There is little value in studying it now.

Replies from: Roko
comment by Roko · 2009-06-27T16:20:56.936Z · LW(p) · GW(p)

Firstly, this reasoning presents a tragedy-of-the-commons scenario.

Secondly, acceptance of this kind of theory - if it is true - by the scientific community could take, say, 20-30 years. You will then hear about it in the media, as will anyone else with half a brain.

This seems urgent enough to me that it is worth putting a lot of effort into it.

Replies from: SoullessAutomaton, Vladimir_Nesov, Vladimir_Nesov
comment by SoullessAutomaton · 2009-06-27T17:46:52.829Z · LW(p) · GW(p)

Perhaps you could clarify why you feel it is urgent?

I agree that if this theory is correct it is of tremendous importance--but I'm not sure I see why it is more urgent than any other scientific theory.

The only thing I can see is the "understanding cognition in order to build AI" angle and I'm not sure that understanding human cognition specifically is a required step in that.

comment by Vladimir_Nesov · 2009-06-27T18:05:25.092Z · LW(p) · GW(p)

I was literally asking what in particular makes this topic so important as to qualify it as "something that is more important than - in my estimate - 99% of what you or I have ever read" (and doubting that anything could).

You gave only a meta-reply, saying that if anything important was involved and I chose to ignore it, my strategy would not be a good one. But I don't know that it's important, and that's a relevant fact to consider when selecting a strategy. It's decision-making under uncertainty. Mine is a good strategy a priori: 99 times out of 100, when the info is in fact dross, I make room for the sure shots.

Replies from: Roko
comment by Roko · 2009-06-27T20:33:44.246Z · LW(p) · GW(p)

I was literally asking what in particular makes this topic so important as to qualify it as "something that is more important than - in my estimate - 99% of what you or I have ever read" (and doubting that anything could). You gave only a meta-reply

Well, it seems to me that the most important knowledge a person can be given is knowledge that will improve their overall productivity and improve the efficiency with which they achieve their goals. This piece (by Kaj) claims to have found a possible mechanism which prevents humans from applying self-help techniques in general. This knowledge is effectively a universal goal-attainment improver.

What were the 100 last pieces of text you read? Some technical documents about static program analysis, some other LW posts, maybe some news or wikipedia articles, etc. It seems to me that none of these would come close to the increased utility that this piece could offer you - if it is correct.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-27T20:53:12.381Z · LW(p) · GW(p)

The info I have gives me good confidence in the belief that studying PCT won't help me with procrastination (as I mentioned, it has been out there for a long time without drastically visible applications of this sort; also, I skimmed some highly-cited papers via Google Scholar, but I can't be confident in what I read because I didn't grasp the outline of the field, given how little I looked). The things I study and think about these days are good math, tools for a better understanding of artificial intelligence. Not terribly good chances of making useful progress, but not woo either (unlike, say, a year ago, and much worse two years ago).

comment by Vladimir_Nesov · 2009-06-27T18:45:30.525Z · LW(p) · GW(p)

Secondly, acceptance of this kind of theory - if it is true - by the scientific community could take, say, 20-30 years. You will then hear about it in the media, as will anyone else with half a brain.

By the way, PJ Eby mentions a relevant fact: PCT was introduced more than 30 years ago.

Replies from: pjeby
comment by pjeby · 2009-06-27T23:26:44.386Z · LW(p) · GW(p)

From the second edition of B:CP , commenting on changes in the field since it was first written:

Gradually, the existence of closed causal loops is beginning to demand notice in every field of behavioral science and biology, in cell biology and neuroscience. They are simply everywhere, at every level of organization in every living system. The old concepts are disappearing, not fast enough to suit me but quite fast enough for the good of science, which must necessarily remain conservative.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T00:06:30.992Z · LW(p) · GW(p)

Sure, there are lots of mentions of the terms, in particular "control system", as something that keeps a certain process in place, guarding it against deviations, sometimes overreacting and swinging the process in the opposite direction, sometimes giving in under external influence. This is all well and good, but it is an irrelevant observation, one that has no bearing on whether it is useful for me, personally, to get into this.

If it's feasible for me to develop a useful anti-procrastination technique based on this whatever, I expect such techniques would already have been developed and their efficacy demonstrated. Given that no such thing conclusively exists (and people have tried, and this stuff is widely known!), I don't expect to succeed either.

I might get a chance if I studied the issue very carefully for a number of years, as that would place me in the same conditions as other people who have studied it carefully for many years (in which case I don't expect I'd place too much effort into a particular toy classification, as I'd be solving the procrastination problem, not the problem of strengthening a PCT death spiral), but that's a different game, irrelevant to the present question.

Replies from: pjeby
comment by pjeby · 2009-06-28T00:22:35.675Z · LW(p) · GW(p)

That's not why I referenced the quote; it was to address the "so if it came out 30 years ago, why hasn't anything happened yet?" question. I.e., many things have happened: the general trend in the life sciences is towards discovering negative-feedback continuous control at all levels, from the sub-cellular level on up.

If it's feasible for me to develop a useful anti-procrastination technique

Actually, PCT shows why NO "anti-procrastination" technique that does not take a person's individual controller structure into account can be expected to work for very long, no matter how effective it is in the short run.

That is, in fact, the insight that Kaj's post (and the report I wrote that inspired it) is intended to convey: that PCT predicts there is no "silver bullet" solution to akrasia that doesn't take into account the specific subjective perceptual values an individual is controlling for in the relevant situations.

That is: no single, rote anti-procrastination technique will solve all problems for all people, nor even all the problems of one person, even if it completely solves one or more problems for one or more people.

This seems like an important prediction, when made by such a simple model!

(By contrast, I would say that Freudian drives and hypnotic "symptom substitution" models are not actually predicting anything, merely stating patterns of observation of the form, "People do X." PCT provides a coherent model for how people do it.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T00:28:57.933Z · LW(p) · GW(p)

Rote, not-rote, it doesn't really matter. A technique is a recipe for making the effect happen, whatever the means. If no techniques exist, if it's shown that this interpretation doesn't give a technique, I'm not interested, end of the story.

That's not why I referenced the quote; it was to address the "so if it came out 30 years ago, why hasn't anything happened yet?" question. I.e., many things have.

The exact quote is "If the fruits of whatever this thing is (which remains meaningless to me until I study it) prove valuable, I'll hear about them in good time", by which I meant applications to procrastination in particular.

Replies from: pjeby
comment by pjeby · 2009-06-28T00:35:00.694Z · LW(p) · GW(p)

A technique is a recipe for making the effect happen, whatever the means. If no techniques exist, if it's shown that this interpretation doesn't give a technique, I'm not interested, end of the story.

To most people, a "technique" or "recipe" would involve a fixed number of steps that are not case-specific or person-specific. At the point where the steps become variable (iterative or recursive), one would have an "algorithm" or "method" rather than a "recipe".

PCT effectively predicts that it is possible for such algorithms or methods to exist, but not techniques or recipes with a fixed number of steps for all cases.

That still strikes me as a significant prediction, since it allows one to narrow the field of techniques under consideration - if the recipe doesn't include a "repeat" or "loop until" component, it will not work for everything or everyone.
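
To make the distinction concrete, here is a purely illustrative sketch (neither function is a real anti-procrastination technique, and the "blocker" check is a stand-in for whatever perception the individual is actually controlling):

```python
# Illustrative contrast between the two shapes of advice discussed above.
# A "recipe" runs the same fixed steps for everyone; a "method" loops
# until a person-specific condition clears. Both are toy stand-ins.

def recipe(task):
    """Fixed steps, no feedback about whether they are working."""
    return ["clear desk", "set timer", "start " + task]

def method(task, still_blocked, max_rounds=10):
    """Loop until the person-specific blocker is resolved (or we give up)."""
    rounds = 0
    while still_blocked() and rounds < max_rounds:
        rounds += 1   # here: identify/adjust whichever controller is in conflict
    return f"started {task} after {rounds} adjustment(s)"

# Usage with a fake blocker that clears after two rounds:
state = {"conflicts": 2}
def still_blocked():
    if state["conflicts"] > 0:
        state["conflicts"] -= 1
        return True
    return False

print(recipe("writing"))                  # same list for everyone
print(method("writing", still_blocked))   # 'started writing after 2 adjustment(s)'
```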

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-28T00:46:05.323Z · LW(p) · GW(p)

The statement of results needs to be clear: there are no results yet, though there might be results given more research. It's not knowably applicable as yet. You may try it at home, but you may as well be whistling to the wind.

My usage of "technique" was appropriate, e.g. surgery is also very much patient-dependent; you cut out a cancer from wherever it is in a particular patient, not only in rigid pre-specified places.

Since I made my meaning clear in the context, and you understood it, debating it was useless.

comment by Vladimir_Nesov · 2009-06-27T09:23:45.216Z · LW(p) · GW(p)

Which is fishy, given there is a large literature on interpreting behavior in terms of control systems. Just look at Google Scholar. But forming a representative sample of these works, with an adequate understanding of what they are about, would take me, I think, a couple of days, so I'd rather someone else more interested in the issue do that.

Replies from: djcb
comment by djcb · 2009-06-27T10:51:16.771Z · LW(p) · GW(p)

There is also a large literature on understanding the brain in terms of chaos theory, cellular automata, evolution, and so on, and all of those can shed light on some aspects. The same is definitely true for control systems theory.

The trouble comes when extrapolating this to universal hammers or to the higher cognitive levels; the literature I could find seems mostly about robotics. Admittedly, I did not search very thoroughly, but then again, life is short and if the poster wants to convince me, the burden of proof lies not on my side.

Replies from: jimrandomh
comment by jimrandomh · 2009-06-27T15:17:46.614Z · LW(p) · GW(p)

There is also a large literature on understanding the brain in terms of chaos theory, cellular automata, evolution, and so on, and all of those can shed light on some aspects.

This statement strikes me as false. Evolution says things about what the brain does, and what it ought to do, but nothing about how it does it. Chaos theory and cellular automata are completely unrelated pieces of math. Everything else is either at the abstraction level of neurons, or at the abstraction level of "people like cake"; PCT is the only model I am aware of which even attempts to bridge the gap in between.

life is short and if the poster wants to convince me, the burden of proof lies not on my side.

Reality does not care who has the burden of proof, and it does not always provide proof to either side.

Replies from: Kaj_Sotala, Vladimir_Nesov, djcb, pjeby
comment by Kaj_Sotala · 2009-06-27T23:20:08.231Z · LW(p) · GW(p)

Evolution says things about what the brain does, and what it ought to do, but nothing about how it does it.

Neural Darwinism?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-27T23:51:44.147Z · LW(p) · GW(p)

In name only, and probably woo.

comment by Vladimir_Nesov · 2009-06-27T15:27:46.572Z · LW(p) · GW(p)

Reality does not care who has the burden of proof, and it does not always provide proof to either side.

If I'm only willing to expend a certain amount of effort on understanding a given aspect of reality, then I won't listen to any explanation that requires more effort than that. Preparing a good explanation that efficiently communicates a more accurate picture of that aspect of reality is the burden of proof in question - a quite reasonable requirement in this case, where the topic doesn't appear terribly important.

comment by djcb · 2009-06-29T18:42:05.706Z · LW(p) · GW(p)

I don't see anything 'false' about the statement. I simply stated some other fields that have been used to explain aspects of the brain as well, and that, while PCT may be a useful addition, I have seen no evidence yet that it is 'life changing'.

I enjoy reading LW for all the bright people, new ideas, and things to learn. In this case, however, I was a bit disappointed, mainly because of the self-help fluff. There are enough places for that kind of material already, I think.

Of course, I cannot demand anything; it's just some (selfish?) concern for LW's S/N ratio.

comment by pjeby · 2009-06-27T17:37:52.806Z · LW(p) · GW(p)

PCT is the only model I am aware of which even attempts to bridge the gap in between.

FWIW, Hawkins's HTM model (described in "On Intelligence") makes another fair stab at it, and has many similar characteristics to some of PCT's mid-to-high layers, just from a slightly different perspective. HTM (or at least the "memory-prediction framework" aspect of it) also makes much more specific predictions about what we should expect to find at the neuroanatomy level for those layers.

OTOH, PCT makes more predictions about what we should see in large-scale human behavioral phenomena, and those predictions match my experience quite well.