What Are You Tracking In Your Head?

post by johnswentworth · 2022-06-28T19:30:06.164Z · LW · GW · 81 comments

Contents

  Pay Attention To Extra Information Tracking
  Ask “What Are You Tracking In Your Head?”
  Returns to Excess Cognitive Capacity
  Other Examples?

A large chunk - plausibly the majority - of real-world expertise seems to be in the form of illegible skills: skills/knowledge which are hard to transmit by direct explanation. They’re not necessarily things which a teacher would even notice enough to consider important - just background skills or knowledge which is so ingrained that it becomes invisible.

I’ve recently noticed a certain common type of illegible skill which I think might account for the majority of illegible-skill-value across a wide variety of domains.

Here are a few examples of the type of skill I have in mind:

  • While playing poker, track the probable contents of opponents’ hands.
  • While reading or writing math, track a prototypical example of the objects under discussion (as in the Feynman quote below).
  • While working on math, physics, or a program, track types/units.
  • While writing, track an estimate of the mental state of a future reader - confusion, excitement, eyes glossing over, etc.
  • While absorbing claims/information, track an estimate of the physical process which produced the information, and how that process entangles the information with physical reality.

The common pattern among all these is that, while performing a task, the expert tracks some extra information/estimate in their head. Usually the extra information is an estimate of some not-directly-observed aspect of the system of interest. From outside, watching the expert work, that extra tracking is largely invisible; the expert may not even be aware of it themselves. Rarely are these mental tracking skills explicitly taught. And yet, based on personal experience, each of these is a central piece of performing the task well - arguably the central piece, in most cases.

Let’s assume that this sort of extra-information-tracking is, indeed, the main component of illegible-skill-value across a wide variety of domains. (I won’t defend that claim much; this post is about highlighting and exploring the hypothesis, not proving it.) What strategies does this suggest for learning, teaching, and self-improvement? What else does it suggest about the world?

Pay Attention To Extra Information Tracking

I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) – disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, ‘False!’

- Feynman

A lot of people have heard Feynman’s “hairy green ball thing” quote. It probably sounds like a maybe-useful technique to practice, but not obviously more valuable than any of a dozen other things.

The hypothesis that extra-information-tracking is the main component of illegible-skill-value shines a giant spotlight on things like Feynman’s examples technique. It suggests that a good comparison point for the value of tracking a prototypical example while reading/writing math is, for instance, the value of tracking the probable contents of opponents’ hands while playing poker.

More generally: my guess is that most people reading this post looked at the list of examples, noticed a few familiar cases, and thought “Oh yeah, I do that! And it is indeed super important!”. On the other hand, I’d also guess that most people saw some unfamiliar cases, and thought “Yeah, I’ve heard people suggest that before, and it sounds vaguely useful, but I don’t know if it’s that huge a value-add.”.

The first and most important takeaway from this post is the hypothesis that the unfamiliar examples are about as important to their use-cases as the familiar examples. Take a look at those unfamiliar examples, and imagine that they’re as important to their use-cases as the examples you already use.

Ask “What Are You Tracking In Your Head?”

Imagine that I’m a student studying under Feynman. I know that he’s one of the great minds of his generation, but it’s hard to tell which things I need to pick up. His internal thoughts are not very visible. In conversation with mathematicians, I see him easily catch errors in their claims, but I don’t know how he does it. I could just ask him how he does it, but he might not know; a young Richard Feynman probably just implicitly assumes that everyone pictures examples in their head, and has no idea why most people are unable to easily catch errors in the claims of mathematicians!

But if I ask him “what were you tracking in your head, while talking to those mathematicians?” then he’s immediately prompted to tell me about his hairy green ball thing.

More generally: for purposes of learning/teaching, the key question to ask of a mentor is “what are you tracking in your head?”; the key question for a mentor to ask of themselves is “what am I tracking in my head?”. These extra-information-tracking skills are illegible mainly because people don’t usually know to pay attention to them. They’re not externally-visible. But they’re not actually that hard to figure out, once you look for them. People do have quite a bit of introspective access into what extra information they’re tracking. We just have to ask.

Returns to Excess Cognitive Capacity

Mentally tracking extra information is exactly the sort of technique you’d expect to benefit a lot from excess cognitive capacity, i.e. high g-factor. Someone who can barely follow what’s going on already isn’t going to have the capacity to track a bunch of other stuff in parallel.

… which suggests that extra-information-tracking techniques are particularly useful investments for people with unusually high g. (Hint: this post is on LW, so “unusually high g” probably describes you!) They’re a way to get good returns out of excess cognitive capacity.

The same argument also suggests a reason that teaching methods aren’t already more focused on mentally tracking extra information: such techniques are probably more limited for the median person. On the other hand, if your goal is to train the great minds of the next generation, then figuring out the right places to invest excess cognitive capacity is likely to have high returns.

Other Examples?

Finally, the obvious question: what extra information do you mentally track, which is crucial to performing some task well? If the hypothesis is right, there are probably high-value mental-tracking techniques which some, but not all, people reading this already use. Please share!

81 comments

Comments sorted by top scores.

comment by Robert Miles (robert-miles) · 2022-07-01T19:43:20.425Z · LW(p) · GW(p)

I was thinking you had all of mine already, since they're mostly about explaining and coding. But there's a big one: When using tools, I'm tracking something like "what if the knife slips?". When I introspect, it's represented internally as a kind of cloud-like spatial 3D (4D?) probability distribution over knife locations, roughly co-extensional with "if the material suddenly gave or the knife suddenly slipped at this exact moment, what's the space of locations the blade could get to before my body noticed and brought it to a stop?". As I apply more force this cloud extends out, and I notice when it intersects with something I don't want to get cut. (Mutatis mutandis for other tools of course. I bet people experienced with firearms are always tracking a kind of "if this gun goes off at this moment, where does the bullet go" spatial mental object)

I notice I'm tracking this mostly because I also track it for other people and I sometimes notice them not tracking it. But that doesn't feel like "Hey you're using bad technique", it feels like "Whoah your knife probability cloud is clean through your hand and out the other side!"

Replies from: tamgent
comment by tamgent · 2022-07-04T17:29:58.801Z · LW(p) · GW(p)

I was explicitly taught to model this physical thing in a wood carving survivalist course.

comment by Algon · 2022-06-28T23:34:58.261Z · LW(p) · GW(p)

This post is probably right that illegible skills rely on tracking non-obvious bits of information. But I don't think that discovering that info is as simple as asking "What Are You Tracking in Your Head". Remember that there's a lot of inferential distance between you and an expert, and they've likely forgotten all that you don't know.


Thankfully, the problem of getting tacit knowledge out of someone has a growing and quite useful literature. The field of Naturalistic Decision Making developed some techniques to do this, one of which is fairly simple. It is called Applied Cognitive Task Analysis. Here's a summary of it from CommonCog [1]:

There are four techniques in ACTA, and all of them are pretty straightforward to put to practice:

  1. You start by creating a task diagram. A task diagram gives you a broad overview of the task in question and identifies the difficult cognitive elements. You'll want to do this at the beginning, because you'll want to know which parts of the task are worth focusing on.
  2. You do a knowledge audit. A knowledge audit is an interview that identifies all the ways in which expertise is used in a domain, and provides examples based on actual experience.
  3. You do a simulation interview. The simulation interview allows you to better understand an expert’s cognitive processes within the context of a single incident (e.g. a firefighter arrives at the scene of a fire; a programmer is handed an initial specification). This allows you to extract cognitive processes that are difficult to get at using a knowledge audit, such as situational assessment, and how such changing events impact subsequent courses of action.
  4. You create a cognitive demands table. After conducting ACTA interviews with multiple experts, you create something called a ‘cognitive demands table’ which synthesises all that you’ve uncovered in the previous three steps. This becomes the primary output of the ACTA process, and the main artefact you’ll use when you apply your findings to course design or to systems design.

The blog post goes in depth on this method, the theory that undergirds it and how to notice and acquire the perception that experts possess.

 

As to your actual question, I guess I'd say that the same holds true for video games. If you want to beat a difficult boss, then try to gather info first on what the timing is like, what cues there are for attacks and so forth. 

Another area: when doing QFT calculations, you need to keep track of the interaction terms in the Lagrangian and the free-field terms in order to turn the time translation operator into a series of Feynman diagrams, without ever bothering to expand out the power series and use Wick's theorem and whatever. Makes writing out scattering amplitudes less of a chore.

Also, Feynman's trick applies to reading near anything. Keep an example in your head and see if it matches what the text says about it. Most of the time when I get confused by a text, doing this will clear things up. 

P.S.

This is unrelated to your post, but if you could choose anyone to work on AI alignment, who'd you pick?

  1. ^

    A fantastic blog that is concerned with applied rationality. It outlines how to find and acquire tacit knowledge. I'd recommend starting from the Tacit Knowledge Series.

Replies from: johnswentworth, Emrik North
comment by johnswentworth · 2022-06-29T00:27:49.195Z · LW(p) · GW(p)

On first glance, CommonCog looked kinda MBA-flavored bullshitty (especially alongside the ACTA thing, which also sounds MBA-flavored bullshitty). But after reading a bit, it is indeed pretty great! Thanks for the link.

comment by Emrik (Emrik North) · 2022-07-05T15:51:32.253Z · LW(p) · GW(p)

I'd be very sceptical of applying something like this on experts in a rich-domain/somewhat-pre-paradigmatic field like, say, conceptual alignment. Their expertise is their particular set of tools. And in a rich domain like this, there are likely to be many other tools that let you work on the problems productively. Even if you concluded that the paradigmatic tools seem most suited for the problems, you may still wish to maximise the chance that you'll end up with a productively different set of tools, just because they allow you to pursue a neglected angle of attack. If you look overmuch to how experts are doing it, you'll Einstellung yourself into their paradigm and end up hacking at an area of the wall that's proven to be very sturdy indeed.

Replies from: Algon
comment by Algon · 2022-07-05T20:43:00.209Z · LW(p) · GW(p)

For pre-paradigmatic fields, I agree that the insights you extract have a good chance of not being useful. But if you have some people who are talking past each other because they can't understand each other's viewpoints, then I would expect this sort of thing to help make both groups legible to one another. Which is certainly true of the AI safety field. And communicating each other's models is precisely what is being advocated now, and by the looks of it, not much progress has been made.

To me, it is pretty plausible that Yudkowsky's purported knowledge is tacit, given his failures to communicate it so far. Hence, I think it would be valuable if someone tried ACTA on Yudkowsky. He seems to be focusing on communicating his views and giving his brain a break, so now would be a good time to try.

comment by Kaj_Sotala · 2022-06-29T14:20:11.646Z · LW(p) · GW(p)

Seems at least partially related to cognitive apprenticeship, a type of teaching that aims at explicitly making the teacher's "thinking visible", so that the pupils can find out what it is that the teacher is tracking at any moment when solving the problem. For instance, they might carry out an assignment in front of pupils and try to explicitly speak out loud their thoughts while doing it.

For instance, when writing an essay:

Assignment

(Suggested by students)

Write an essay on the topic “Today’s Rock Stars Are More Talented than Musicians of Long Ago.”

THINKING-ALOUD EXCERPT

I don’t know a thing about modern rock stars. I can’t think of the name of even one rock star. How about, David Bowie or Mick Jagger… But many readers won’t agree that they are modern rock stars. I think they’re both as old as I am. Let’s see, my own feelings about this are… that I doubt if today’s rock stars are more talented than ever. Anyhow, how would I know? I can’t argue this… I need a new idea… An important point I haven’t considered yet is… ah… well… what do we mean by talent? Am I talking about musical talent or ability to entertain—to do acrobatics? Hey, I may have a way into this topic. I could develop this idea by…

or doing math:

A MATHEMATICIAN THINKS OUT LOUD

(from Schoenfeld, 1983)

Problem

Let P(x) and Q(x) be two polynomials with "reversed" coefficients:

P(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0,
Q(x) = a_0 x^n + a_1 x^(n-1) + … + a_(n-1) x + a_n,

where a_n ≠ 0 ≠ a_0. What is the relationship between the roots of P(x) and those of Q(x)? Prove your answer.

Expert Model

What do you do when you face a problem like this? I have no general procedure for finding the roots of a polynomial, much less for comparing the roots of two of them. Probably the best thing to do for the time being is to look at some simple examples and hope I can develop some intuition from them. Instead of looking at a pair of arbitrary polynomials, maybe I should look at a pair of quadratics: at least I can solve those. So, what happens if

P(x) = ax^2 + bx + c

and

Q(x) = cx^2 + bx + a?

The roots are

x = (-b ± √(b^2 - 4ac)) / (2a)

and

x = (-b ± √(b^2 - 4ac)) / (2c)

respectively.

That's certainly suggestive, because they have the same numerator, but I don't really see anything that I can push or that'll generalize. I'll give this a minute or two, but I may have to try something else...

Well, just for the record, let me look at the linear case. If P(x) = ax + b and Q(x) = bx + a, the roots are –b/a and –a/b respectively.

They're reciprocals, but that's not too interesting in itself. Let me go back to quadratics. I still don't have much of a feel for what's going on. I'll do a couple of easy examples, and look for some sort of a pattern. The clever thing to do may be to pick polynomials I can factor; that way it'll be easy to keep track of the roots. All right, how about something easy like (x + 2)(x + 3)?

Then P(x) = x^2 + 5x + 6, with roots -2 and -3. So, Q(x) = 6x^2 + 5x + 1 = (2x + 1)(3x + 1), with roots -1/2 and -1/3.

Those are reciprocals too. Now that's interesting.

How about P(x) = (3x + 5)(2x - 7) = 6x^2 - 11x - 35? Its roots are -5/3 and 7/2; Q(x) = -35x^2 - 11x + 6 = -(35x^2 + 11x - 6) = -(7x - 2)(5x + 3).

All right, the roots are 2/7 and -3/5. They're reciprocals again, and this time it can't be an accident. Better yet, look at the factors: they're reversed! What about

P(x) = (ax + b)(cx + d) = acx^2 + (bc + ad)x + bd? Then

Q(x) = bdx^2 + (ad + bc)x + ac = (bx + a)(dx + c).

Aha! It works again, and I think this will generalize…

At this point there are two ways to go. I hypothesize that the roots of P(x) are the reciprocals of the roots of Q(x), in general. (If I'm not yet sure, I should try a factorable cubic or two.) Now, I can try to generalize the argument above, but it's not all that straightforward; not every polynomial can be factored, and keeping track of the coefficients may not be that easy. It may be worth stopping, re-phrasing my conjecture, and trying it from scratch:

Let P(x) and Q(x) be two polynomials with "reversed" coefficients. Prove that the roots of P(x) and Q(x) are reciprocals.

All right, let's take a look at what the problem asks for. What does it mean for some number, say r, to be a root of P(x)? It means that P(r) = 0. Now the conjecture says that the reciprocal of r is supposed to be a root to Q(x). That says that Q(1/r) = 0. Strange. Let me go back to the quadratic case, and see what happens.

Let P(x) = ax^2 + bx + c, and Q(x) = cx^2 + bx + a. If r is a root of P(x), then P(r) = ar^2 + br + c = 0. Now what does Q(1/r) look like?

Q(1/r) = c(1/r)^2 + b(1/r) + a = (c + br + ar^2)/r^2 = P(r)/r^2 = 0

So it works, and this argument will generalize. Now I can write up a proof.

Proof:

Let r be a root of P(x), so that P(r) = 0. Observe that r ≠ 0, since a_0 ≠ 0. Further, Q(1/r) = a_0(1/r)^n + a_1(1/r)^(n-1) + … + a_(n-1)(1/r) + a_n = (1/r^n)(a_0 + a_1 r + a_2 r^2 + … + a_(n-1) r^(n-1) + a_n r^n) = (1/r^n)P(r) = 0, so that (1/r) is a root of Q(x).

Conversely, if S is a root of Q(x), we see that P(1/S) = 0. Q.E.D.

All right, now it's time for a postmortem. Observe that the proof, like a classical mathematical argument, is quite terse and presents the results of a thought process. But where did the inspiration for the proof come from? If you go back over the way that the argument evolved, you'll see there were two major breakthroughs.

The first had to do with understanding the problem, with getting a feel for it. The problem statement, in its full generality, offered little in the way of assistance. What we did was to examine special cases in order to look for a pattern. More specifically, our first attempt at special cases—looking at the quadratic formula—didn’t provide much insight. We had to get even more specific, as follows: Look at a series of straightforward examples that are easy to calculate, in order to see if some sort of pattern emerges. With luck, you might be able to generalize the pattern. In this case, we were looking for roots of polynomials, so we chose easily factorable ones. Obviously, different circumstances will lead to different choices. But that strategy allowed us to make a conjecture.

The second breakthrough came after we made the conjecture. Although we had some idea of why it ought to be true, the argument looked messy, and we stopped to reconsider for a while. What we did at that point was important, and is often overlooked: We went back to the conditions of the problem, explored them, and looked for tangible connections between them and the results we wanted. Questions like "what does it mean for r to be a root of P(x)?", "what does the reciprocal of r look like?" and "what does it mean for (1/r) to be a root of Q(x)?" may seem almost trivial in isolation, but they focused our attention on the very things that gave us a solution.

Replies from: johnswentworth, Gunnar_Zarncke
comment by johnswentworth · 2022-06-29T16:14:20.667Z · LW(p) · GW(p)

I tried to follow my own thoughts on the polynomial example. They were pretty brief; the whole problem took only a few seconds. Basically:

  • Whelp, roots are a PITA
  • Can I transform Q(x) in a way which swaps the order of the coefficients?
    • Pattern match: yup! I've done that before. Divide by x^n.
  • Oh I see, roots of one will be reciprocals of roots of the other.

... so I guess +1 point for the "bag of tricks" model of expertise.
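
(For anyone who wants to check that trick numerically: here's a minimal sketch, not from the original thread, assuming numpy is available. It uses the factorable example from the excerpt above.)

```python
import numpy as np

p_coeffs = [6, -11, -35]        # P(x) = 6x^2 - 11x - 35, roots 7/2 and -5/3
q_coeffs = p_coeffs[::-1]       # Q(x) = -35x^2 - 11x + 6, coefficients reversed

p_roots = np.roots(p_coeffs)
q_roots = np.roots(q_coeffs)

# Each root of Q is the reciprocal of some root of P.
for r in q_roots:
    assert np.isclose(1 / r, p_roots).any()

# The divide-by-x^n trick: Q(x) = x^n * P(1/x) for any nonzero x.
x, n = 1.7, len(p_coeffs) - 1
assert np.isclose(np.polyval(q_coeffs, x), x**n * np.polyval(p_coeffs, 1 / x))
```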

comment by Gunnar_Zarncke · 2022-07-26T18:31:03.819Z · LW(p) · GW(p)

There are a lot of 1/4 instead of 1/r in the formulas (I guess you wrote some of them with 1/4 initially and then replaced them, but overlooked some).

comment by Emrik (Emrik North) · 2022-06-29T00:52:18.679Z · LW(p) · GW(p)

Kinda surprised you didn't mention purpose-tracking, for while you're trying to do a thing--any thing. Arguably the most important skill I acquired from the Sequences [LW · GW], and that's a high bar.

"Your sword has no blade. It has only your intention. When that goes astray you have no weapon."

comment by Dweomite · 2022-07-01T22:39:36.922Z · LW(p) · GW(p)

In resource management games, I typically have a set of coefficients in my head for the current relative marginal values of different resources, and my primary heuristic is usually maximizing the weighted sum of my resources according to these coefficients.

In combat strategy games, I usually try to maximize [(my rate of damage) x (maximum damage I can sustain before I lose)] / [(enemy rate of damage) x (damage I need to cause before I win)].
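
(As a rough illustration of both heuristics, here's a minimal sketch; every number and weight below is invented for the example.)

```python
# Toy illustration of the two heuristics above; all values are made up.

# Resource heuristic: a weighted sum of stockpiles, where the weights stand in
# for the current relative marginal values of the resources.
resources = {"gold": 340, "wood": 120, "food": 60}
weights   = {"gold": 1.0, "wood": 1.5, "food": 2.5}
position_value = sum(weights[r] * amount for r, amount in resources.items())

# Combat heuristic: (my damage rate * damage I can sustain)
#                 / (enemy damage rate * damage I need to cause).
# A ratio above 1 suggests I win the damage race; below 1, I lose it.
def race_ratio(my_dps, my_effective_hp, enemy_dps, enemy_effective_hp):
    return (my_dps * my_effective_hp) / (enemy_dps * enemy_effective_hp)

print(position_value)                    # 670.0
print(race_ratio(50, 1200, 40, 1500))    # 1.0 -- a dead-even race
```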

These don't seem especially profound to me.  But I've noticed a surprising number of video games that make it distressingly hard to track these things; for instance, by making it so that the data you need to calculate them is split across three different UI screens, or by failing to disclose the key mathematical relationships between the public variables and the heuristics I'm trying to track.  ("You can choose +5 armor or +10 accuracy.  No, we're not planning to tell you the mathematical relationship between armor or accuracy and observable game outcomes, why do you ask?")

It's always felt odd to me that there isn't widespread griping about such games.

As a result of reading this post, I have started explicitly tracking two hypotheses that I wasn't before:  (1) that the value of tracking things-like-these is much less obvious than I think, and (2) that a lot of people lack the spare cognitive capacity to track the things I'm tracking.

Though I'm not sure yet whether they're going to steal much probability from my previous leading hypothesis, "most players are not willing to do mental multiplication in order to play better."

Replies from: Linda Linsefors, Gunnar_Zarncke
comment by Linda Linsefors · 2022-07-02T13:17:52.343Z · LW(p) · GW(p)

I'm very certain that your hypotheses are correct. Most people play to have fun, not to win. Winning is instrumental to fun, but for most people it is not worth the cost of doing some math, which is anti-fun. I like math in general, but I still would not make this explicit calculation, because it is the wrong type of math for me to enjoy. (Not saying it is wrong for you to enjoy it, just that it's unusual.)

I think that making the game design such that it is hard or impossible to do the explicit math is a feature. Most people don't want to do the math. The math is not supposed to be part of the game. Most people don't want the math nerds to have that advantage, because then they'll have to do the math too, or lose.

Replies from: Dweomite
comment by Dweomite · 2022-07-02T17:02:33.406Z · LW(p) · GW(p)

That seems like it could only potentially be a feature in competitive games; yet I see it all the time in single-player games with no obvious nods to competition (e.g. no leaderboards).  In fact, I have the vague impression games that emphasize competition tend to be more legible--although it's possible I only have this impression from player-created resources like wikis rather than actual differences in developer behavior.  (I'll have to think about this some.)

Also, many of these games show an awful lot of numbers that they don't, strictly speaking, need to show at all.  (I've also played some games that don't show those numbers at all, and I generally conclude that those games aren't for me.)  Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player.

Also, in several cases the developers have been willing to explain more of the math on Internet forums when people ask them.  Which makes it seem less like a conscious strategy to withhold those details and more that it just didn't occur to them that players would want them.

There certainly could be some games where the developers are consciously pursuing an anti-legible-math policy, but it seems to me that the examples I have in mind do not fit this picture very well.

Replies from: causal-chain
comment by Causal Chain (causal-chain) · 2022-07-06T11:35:35.622Z · LW(p) · GW(p)

 > Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player.

When I imagine a game which offers "+armor" or "+accuracy" vs a game which offers "+5 armor" or "+10 accuracy", the latter feels far more comfortable even if I do not intend to do the maths. I suspect it gives something for my intuition to latch onto, to give me a sense of scale.

Replies from: Dweomite
comment by Dweomite · 2022-07-07T07:41:48.704Z · LW(p) · GW(p)

Do you mean that it's more comfortable because you feel it provides some noticeable boost to your ability to predict game outcomes (even without consciously doing math), or is it more of an aesthetic preference where you like seeing numbers even if they don't provide any actual information?  (Or something else?)

If you're applying a heuristic anything like "+10 accuracy is probably bigger than +5 armor, because 10 is bigger than 5", then I suspect your heuristic is little better than chance.  It's quite common for marginal-utility-per-point to vary greatly between stats, or even within the same stat at different points along the curve.

If you're strictly using the numbers to compare differently-sized boosts to the same stat (e.g. +10 accuracy vs +5 accuracy) then that's reasonably safe.
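
(To make the "little better than chance" point concrete, here's a toy sketch; both formulas and both baselines are invented. Under one baseline +10 accuracy is the bigger upgrade, under the other +5 armor is, and nothing about "10 > 5" tells you which situation you're in.)

```python
# Invented rules purely for illustration.
def hit_chance(accuracy):
    return accuracy / (accuracy + 100.0)          # assumed formula

def effective_hp_multiplier(armor):
    return (100.0 + armor) / 100.0                # assumed formula

def power(accuracy, armor):
    # crude "how am I doing in the damage race" score
    return hit_chance(accuracy) * effective_hp_multiplier(armor)

for acc, arm in [(100, 50), (200, 20)]:
    base = power(acc, arm)
    print(f"acc={acc}, armor={arm}: "
          f"+10 accuracy -> {power(acc + 10, arm) / base - 1:+.1%}, "
          f"+5 armor -> {power(acc, arm + 5) / base - 1:+.1%}")
# acc=100, armor=50: +10 accuracy -> +4.8%, +5 armor -> +3.3%
# acc=200, armor=20: +10 accuracy -> +1.6%, +5 armor -> +4.2%
```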

Replies from: causal-chain
comment by Causal Chain (causal-chain) · 2022-07-07T12:53:00.652Z · LW(p) · GW(p)

The improvement to my intuitive predictive ability is definitely a factor in why I find it comforting; I don't know what fraction of it is aesthetics - I'd say a poorly calibrated 30%. Like maybe it reminds me of games where I could easily calculate the answer, so my brain assumes I am in that situation as long as I don't test that belief.

I'm definitely only comparing the sizes of changes to the same stat. My intuition also assumes diminishing returns for everything except defense which is accelerating returns - and knowing the size of each step helps inform this.

Replies from: Dweomite
comment by Dweomite · 2022-07-07T18:33:30.830Z · LW(p) · GW(p)

That seems opposed to what Linda Linsefors said above: You like the idea that you could calculate an answer if you chose to, while Linda thinks the inability to calculate an answer is a feature.

(Nothing wrong with the two of you wanting different things.  I am just explicitly de-bucketing you in my head.)

My intuition also assumes diminishing returns for everything except defense which is accelerating returns

My model says that the trend in modern games is towards defense having diminishing returns (or at least non-escalating returns), as more developers become aware of that as a thing to track.  I think of armor in WarCraft 3 as being an early trendsetter in this regard (though I haven't gone looking for examples, so it could be that's just the game I happened to play rather than an actual trendsetter).

I am now explicitly noticing this explanation implies that my model contains some sort of baseline competence level of strategic mathematics in the general population that is very low by my standards but slowly rising, and that this competence is enough of a bottleneck on game design that this rise is having noticeable effects.  This seems to be in tension with the "players just don't want to multiply" explanation.

comment by Gunnar_Zarncke · 2022-07-26T19:16:35.788Z · LW(p) · GW(p)

No, we're not planning to tell you the mathematical relationship between armor or accuracy and observable game outcomes

You wouldn't have that in reality either, and in reality, the relationship would be even more complicated. I think a fair compromise would be to give you a simplified relationship like "+1 armor increases the damage it can absorb by 20%" when it is more complicated than that (min/max damage, non-linearity).

Replies from: Dweomite
comment by Dweomite · 2022-07-28T21:26:22.060Z · LW(p) · GW(p)

Many years ago, I used to think it would be great if a game gave you just the information that you would have had "in reality" and asked you to make decisions based on what "would realistically work".

After trying to play a bunch of games this way, I no longer think this is a sensible approach.  Game rules necessarily ignore vast swathes of reality, and there's no a priori way to know what they're going to model and what they're going to cut.  I end up making a bunch of decisions optimized around presumed mechanics that turn out not to exist, while ignoring the ones that actually do exist, because the designer didn't happen to model exactly the same things that I guessed they'd model.

Fundamentally, losing a game because you made incorrect guesses about the rules is Not Fun.  (For me.)

My current philosophy is that rules should usually be fully transparent, and I've found that any unrealism resulting from this really doesn't bother me.  My primary exception to this philosophy is if the game is specifically designed so that figuring out the rules is part of the game, which I think can be pretty neat if done well, but requires a lot of work to do well.

In most of the games I've played where the rules were not transparent, it didn't look (to me) like they were trying to build gameplay around rules-discovery, or carefully calculating the optimum amount of opacity; it looked (to me) like they just ignored the issue, and the game (in my opinion) suffered for it.

Also, "in reality", if there were important stakes, and you didn't know the rules, you'd probably do a lot of experimentation to learn the rules.  You can do this in games, too, but in most games this is boring and I'd rather just skip to the results.

comment by Gunnar_Zarncke · 2022-06-28T21:10:05.739Z · LW(p) · GW(p)

Another example from Feynman: Besides the object level of what the math or physics describes symbolically, he was tracking what that meant in real life. Not as obvious as you'd think. See, e.g., the anecdote about Brewster's angle. The most common form of failing at this is Guessing the Teacher's Password [LW · GW] - which happens if there is no spare capacity to track what all these symbols mean in real life. Tracking the symbols is difficult enough if you are at the limits of your ability (though it might also result from investing as little effort as possible to pass).

Replies from: M. Y. Zuo, tomcatfish
comment by M. Y. Zuo · 2022-06-30T12:02:03.601Z · LW(p) · GW(p)

One other thing I could never get them to do was to ask questions. Finally, a student explained it to me: "If I ask you a question during the lecture, afterwards everybody will be telling me, `What are you wasting our time for in the class? We're trying to learn something. And you're stopping him by asking a question'." It was a kind of one-upmanship, where nobody knows what's going on, and they'd put the other one down as if they did know. They all fake that they know, and if one student admits for a moment that something is confusing by asking a question, the others take a high-handed attitude, acting as if it's not confusing at all, telling him that he's wasting their time.
I explained how useful it was to work together, to discuss the questions, to talk it over, but they wouldn't do that either, because they would be losing face if they had to ask someone else. It was pitiful! All the work they did, intelligent people, but they got themselves into this funny state of mind, this strange kind of self-propagating "education" which is meaningless, utterly meaningless!

- Feynman

An all too common folly.

comment by Alex Vermillion (tomcatfish) · 2022-07-26T15:23:05.648Z · LW(p) · GW(p)

Damn, I just used up half a cup of sugar and the only result I got was learning sugar packs into the grooves of my pliers INCREDIBLY WELL. I will have to try again later, maybe after making some larger crystals (so that the pliers are capable of breaking them apart).

Edit: Dissolving the sugar (in coldish water, just by stirring) and then letting that dry worked! Little greenish flashes. Fun

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-07-26T18:22:27.006Z · LW(p) · GW(p)

Should this reply have gone somewhere else? I don't get it.

UPDATE: Ah, now I remember it. +1 for going out and actually doing the experiment!

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-07-26T22:03:24.704Z · LW(p) · GW(p)

The link on "anecdote about Brewster's angle" goes to a story about Richard Feynman that contains the paragraphs:

Therefore I am brave enough to flip through the pages now, in front of this audience, to put my finger in, to read, and to show you. So I did it. Brrrrrrrup-I stuck my finger in, and I started to read: "Triboluminescence. Triboluminescence is the light emitted when crystals are crushed ..." I said, `And there, have you got science? No! You have only told what a word means in terms of other words. You haven't told anything about nature-what crystals produce light when you crush them, why they produce light. Did you see any student go home and try it? He can't.

"But if, instead, you were to write, `When you take a lump of sugar and crush it with a pair of pliers in the dark, you can see a bluish flash. Some other crystals do that too. Nobody knows why. The phenomenon is called "triboluminescence." ' Then someone will go home and try it. Then there's an experience of nature." I used that example to show them, but it didn't make any difference where I would have put my finger in the book; it was like that everywhere.

comment by lincolnquirk · 2022-06-28T21:18:01.992Z · LW(p) · GW(p)

Nice post!

One of my fears is that the True List is super long, because most things-being-tracked are products of expertise in a particular field and there are just so many different fields.

Nevertheless:

  • In product/ux design, tracking the way things will seem to a naive user who has never seen the product before.
  • In navigation, tracking which way north is.
  • I have a ton of "tracking" habits when writing code:
    • types of variables (and simulated-in-my-head values for such)
    • refactors that want to be done but don't quite have enough impetus for yet
    • loose ends, such as allocated-but-not-freed resources, or false symmetry (something that looks like it should be symmetric but isn't in some critical way), or other potentially-misleading things that need to be explained (a toy illustration follows this list)
    • [there are probably a lot more of these that I am not going to write down now]
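
A toy illustration (invented, not from the comment) of the "loose ends" and "false symmetry" items above, with the tracked obligations written out as comments:

```python
# Invented example: the kind of loose ends one tracks while writing code.

def copy_rows(src_path, dst_path):
    src = open(src_path)             # loose end: handle opened, not yet closed
    dst = open(dst_path, "w")        # another loose end; also a false symmetry:
                                     # dst is write-mode, so the two handles only
                                     # look interchangeable
    try:
        for line in src:
            dst.write(line.upper())  # the transformation is another asymmetry
    finally:
        dst.close()                  # both loose ends discharged here...
        src.close()                  # ...in practice a `with` block would do this
```
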
Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-06-28T21:19:11.309Z · LW(p) · GW(p)

I could imagine a website full of such lists, categorized by task or field. Could imagine getting lost in there for hours...

comment by Linda Linsefors · 2022-07-02T12:50:29.672Z · LW(p) · GW(p)

There is an interview technique called Experiential Array which is designed to pull out this sort of information (and some other stuff too). Matt Goldenberg conducted this type of interview on me on the topic of designing and running events. This experience gave me the ability to communicate the invisible parts of event design.

Read here for more details 

comment by Karl von Wendt · 2022-07-02T06:28:02.310Z · LW(p) · GW(p)

While writing, track an estimate of the mental state of a future reader - confusion, excitement, eyes glossing over, etc.

This may be true if you write a scientific paper, an essay or a non-fiction book. As a professional writer, when I write a novel, I usually don't think about the reader at all (maybe because, in a way, I am the reader). Instead, I track a mental state of the character I'm writing about. This leads to interesting situations when a character "decides" to do something completely different from what I intended her to do, as if she had her own will. I have heard other writers describe the same thing, so it seems to be a common phenomenon. In this situation, I have two options: I can follow the lead of the character (my tracking of her mental state) and change my outline or even ditch it completely, or I can force her to do what the outline says she's supposed to do. The second choice inevitably leads to a bad story, so tracking the mental state of your characters indeed seems to be essential to writing good fiction. 

I assume that readers do a similar thing, so if a character in a book does something that doesn't fit the mental model they have in mind, they often find it "unbelievable" or "unrealistic", which is one of the reasons why "listen to your characters" seems to be good advice while writing.

comment by romeostevensit · 2022-06-28T21:22:33.383Z · LW(p) · GW(p)

Not tracking types seems to be the thing that makes people susceptible to bad philosophical intuition pumps. In particular, the referent of a token shifts as the proposition is used at different points in the problem.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-07-26T15:30:34.620Z · LW(p) · GW(p)

I'm actually amazed how little it seems that most people track the definition of words in a conversation to see if they're changing. Something like the points made in Arguing "By Definition" [? · GW] or Scott Alexander's popularization of the term "Motte and Bailey" should be obvious. When someone makes one of these arguments to me, I am confused what is literally going on in their head. Unless the speaker does not care if their argument is sound, I have no map of what it is like to expect the switcheroo to work. In my brain, I resolve words into concepts, but it seems that 2 concepts that share the same symbols when written down are genuinely confusing to many people, suggesting this is a separate skill?

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-07-26T15:31:59.163Z · LW(p) · GW(p)

Goofus: "Let us, for this argument, define 'horse' to mean 'human'."

Gallant: "Alright."

Goofus: "So you accept then, that humans should wear horseshoes?"

Gallant: "What??!"

comment by Ponder Stibbons · 2022-07-02T07:59:41.344Z · LW(p) · GW(p)

In Advanced Driving courses a key component was (and may still be - it’s been a while) commentary driving. You sit next to an instructor and listen to them give a commentary on everything they are tracking, for instance other road users, pedestrians, road signs, bends, obstacles, occluders of vision etc; and how these observations affect their decision making, as they drive. Then you practice doing the same, out loud, and, ideally, develop the discipline to continue practising this after the course. I found this was a very effective way of learning from an expert, and I’m sure my driving became safer because of it.

comment by A. Weber (a-weber) · 2022-06-29T02:08:46.213Z · LW(p) · GW(p)

I have a couple frameworks that seem to fit into this:

One is the Greek word "kairos," which means... something kinda like "right now," but in the context of rhetoric means something more like "the present environment and mood." A public speaker, when giving a speech, should consider the kairos and tailor their speech accordingly. This cashes out in stuff like bands yelling out, "How are you tonight, Houston?!" or a comedian riffing off of a heckler. It's the thing that makes a good public speaker feel like they're not just delivering a canned speech they've given hundreds of times before, even if they have.

The other framework to me is the "language" of objects. When you're first learning to drive a car, for example, you don't fully understand the way the steering wheel affects the direction the car will turn, or how the angle of the gas pedal affects the acceleration. But as you get more adept, you can "speak" the language of the car. You know what every growl of the engine means, or the quirks of adjusting the seatbelt, or the spaces you can squeeze into. At one point, I was calling this the "machine spirit" of the object--there's almost an animistic sense to this idea. You act, and the object "responds" in some way. A rubber band gives moments before it snaps, or the tone of a kettle changes as the water starts to boil. 

comment by tailcalled · 2022-06-28T21:52:08.101Z · LW(p) · GW(p)

My comment here is a bit narrow, but re

  • While working on math, physics, or a program, track types/units

A lot of people get surprised at how quickly and easily I intuit linear algebra things, and I think a major factor is this. For linear algebra, you can think of vector spaces as being the equivalent of types/units. E.g. the rotation map for a diagonalization maps between the original vector space and the direct sum of the eigenvectors. Sort of.

It's always the first question I ask when I see a matrix or a tensor - what spaces does it map between?
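
A minimal sketch of what this can look like when made explicit (the class, names, and 2x2 example are invented for illustration, not from the comment):

```python
# Carry "which space does this map from/to?" around with the matrix,
# and check it on composition -- the linear-algebra analogue of a type check.
import numpy as np

class LinearMap:
    def __init__(self, matrix, domain, codomain):
        self.matrix = np.asarray(matrix)
        self.domain, self.codomain = domain, codomain

    def __matmul__(self, other):     # compose: self after other
        assert other.codomain == self.domain, (
            f"type error: {other.codomain} does not match {self.domain}")
        return LinearMap(self.matrix @ other.matrix, other.domain, self.codomain)

# e.g. a change of basis into an eigenbasis, then a scaling in that basis
to_eigenbasis = LinearMap([[0.6, -0.8], [0.8, 0.6]], domain="standard", codomain="eigen")
scale_eigen   = LinearMap(np.diag([2.0, 3.0]), domain="eigen", codomain="eigen")
combined = scale_eigen @ to_eigenbasis        # ok: "eigen" matches "eigen"
# to_eigenbasis @ scale_eigen would raise: "eigen" does not match "standard"
```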

Replies from: Oliver Sourbut, None
comment by Oliver Sourbut · 2022-06-29T08:15:50.729Z · LW(p) · GW(p)

Nice callout. I definitely think that 'typeful thinking' (units and dimensions) is a massive boost in mathematics, computer science and philosophy.

One hypothetical reason is that knowing 'types' means knowing 'what you can do with it' (in particular, the manipulations you've done or witnessed on like-typed things before become generators of new insight). I think this is at least one piece of a description of how we humans do concept abstraction and recomposition in mundane and intellectual situations alike.

comment by [deleted] · 2022-06-28T22:03:44.173Z · LW(p) · GW(p)

It has always bewildered me how you can represent multi-dimensional concepts in a two dimensional array of numbers. The position of a vector/matrix by itself can be arbitrary, but when you apply an operation on it with another vector/matrix, that's when the positions of the variables become interlocked and fixed. Then I realized the 2D array is just a form of organization rather than having intrinsic meaning regarding the vector/matrix itself.

comment by lm (leah-mccuan) · 2022-07-02T01:13:23.207Z · LW(p) · GW(p)

Tracking an estimate of how warm food is while being cooked and how its consistency changes (ex. what the bottom of a slice of eggplant looks like without flipping it over).

Precisely estimating the note I will sing before singing it. I'm never totally accurate, but I find it extremely helpful in order to become more accurate.

Tracking an estimate of how my/another's body will feel after I massage it in an area/move it in a way.

Tracking an estimate of the risk of germs on my hands.

Tracking an estimate of the line that is at the center of my body's mass and runs in the direction of gravity's force (helps me balance).

Tracking an estimate of what percentage of people will read my entire post.


My ears are stopped up, and I can't hear as much as I normally can. I was petting my fuzzy chair and couldn't hear it when I expected to be able to.

I estimated what I would hear if I were able to and wondered if my mind might be able to pick up on analogous cues (the feeling or smell of the fuzz) until, after practice, I could actually hear, but through other sensory mechanisms than my ears.

I track how helpful this idea may be for informing the connection between mechanical senses and experience.

I track how much clearer of an insight I might be able to share if I spend more time thinking about this.

comment by Raemon · 2022-07-01T18:03:07.546Z · LW(p) · GW(p)

Curated. I think figuring out how to transfer hidden/illegible skills is a major bottleneck, and I like this post for digging into that.

I do agree with Algon's comment [LW · GW] that simply asking an expert what they're tracking may not be good enough – some experts' brains seem nicely legible and accessible to themselves; sometimes they're tracking so much stuff it's hard to articulate.

comment by Elias Schmied (EliasSchmied) · 2022-06-29T17:02:53.190Z · LW(p) · GW(p)

Great post. Would add as an example: "While thinking about something and trying to figure out your viewpoint on it, track internal feelings of cognitive dissonance and confusion"

comment by Rob Harrison · 2023-06-22T01:16:02.673Z · LW(p) · GW(p)

While in a conversation tracking how the other person is trying to interpret the motives behind what I'm saying and trying to control that by what I say.  This can get multiple levels of complex fast.  I recently had a really important conversation and I ended up saying things like "I mean exactly what I'm saying" and "I'm not anxious, I just can't afford to let you misunderstand me".  Unfortunately this made it seem like I was definitely anxious, and meant something other than I was saying.

comment by Aleksi Liimatainen (aleksi-liimatainen) · 2022-06-29T09:40:09.560Z · LW(p) · GW(p)

The world is full of scale-free regularities that pop up across topics not unlike 2+2=4 does. Ever since I learned how common and useful this is, I've been in the habit of tracking cross-domain generalizations. That bit you read about biology, or psychology, or economics, just to name a few, is likely to apply to the others in some fashion.

ETA: I think I'm also tracking the meta of which domains seem to cross-generalize well. Translation is not always obvious but it's a learnable skill.

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-06-28T21:09:08.732Z · LW(p) · GW(p)

As a complement or intro to this technique, I find it helpful to create checklists. This helps me identify the most important items to track. I can either do it in my head, or externally via the checklist. It's often easy to come up with a reasonable checklist if you can define the topic specifically enough. Once I've created a checklist, and worked with it enough to commit it to memory, I find that new relevant information is easy to synthesize. If I encounter new information with no checklist, on the other hand, it's very hard for me to remember or make sense of it.

comment by shminux · 2022-06-28T21:50:57.086Z · LW(p) · GW(p)

Hmm, I don't think this kind of tacit knowledge and skills is at all obvious to the holder. In most cases it's like asking a centipede how exactly it walks. Feynman was unusually introspective about this, not an easy example to follow for mere mortals. 

A lot of items in your list are about modeling other agents and yourself. In an embedded agency abstraction hierarchy it would be close to the top (model the general environment, model other agents, model self), so it's probably a recent evolutionary development, not very well entrenched in the genome; that's why we have trouble "just doing it" and need to introspect to make it make sense.

comment by MSRayne · 2022-06-28T21:45:18.328Z · LW(p) · GW(p)

I'm a pretty good poet. I usually don't share my poetry except with close friends, but take my word for it, my poetry is good enough that I think the majority of people who heard it would like it. What am I tracking when I write a poem?

Well, it happens almost automatically - if I have inspiration, the poem just comes out; if I don't have inspiration, I can sort of try to write something but it doesn't work. So it's a partly subconscious process already and not necessarily something that can be analyzed like this; but I know at the very least I am tracking meter, rhyme, and alliteration.

It seems like, if this hypothesis is correct, there must be other things I am tracking - you can be great at meter, rhyme, and alliteration and not make a very interesting poem - but I'm not sure what else.

Style and themes, perhaps. But both of those are very "needs a neural net to recognize and can't be explicitly defined" things.

comment by Raemon · 2024-01-17T01:15:11.997Z · LW(p) · GW(p)

I think this concept is important. It feels sort of... incomplete. Like, it seems like there are some major follow-up threads, which are:

  • How to teach others what useful skills you have.
  • How to notice when an expert has a skill, and 
    • how to ask them questions that help them tease out the details.

This feels like a helpful concept to re-familiarize myself with as I explore the art of deliberate practice [LW · GW], since "actually get expert advice on what/how to practice" is one of the most centrally recommended facets.

comment by TurnTrout · 2022-07-11T04:37:26.586Z · LW(p) · GW(p)

While absorbing claims/information, track an estimate of the physical process which produced the information, and how that process entangles the information with physical reality.

Can you give an example?

Replies from: johnswentworth
comment by johnswentworth · 2022-07-11T05:30:56.287Z · LW(p) · GW(p)

When I read a technical paper about an experiment/study, I track in the back of my head a best-guess of what was actually going on during the experiment/study, separate from the authors' claims and analysis. So e.g. "ok, the authors sure do seem to think Y happened, so maybe Y happened, but what else would make the authors think Y happened?". Usually this includes things like "obviously X, Y, Z would be confounders", and then checking whether the authors controlled for those things. Or "maybe the person doing this part was new to the technique and they just fucked up the experiment?". Or "they say in the abstract that they controlled for X, but the obvious way of controlling for X would not actually fully control for it". Or "this is one of those fields where the things-people-say-happened are mostly determined by political flavor, and basically not coupled to observation". Etc.

More generally, when applied reflectively, "track an estimate of the physical process which produced the information, and how that process entangles the information with physical reality" is just the fundamental technique of epistemic rationality: what do you think you know and how do you think you know it? [LW · GW]

The fundamental question of rationality, "What do you think you know and how do you think you know it?", is on its strictest level a request for a causal model of how you think your brain ended up mirroring reality - the causal process which accounts for this supposed correlation.

comment by bideup · 2022-07-02T21:14:10.078Z · LW(p) · GW(p)

I track my confidence in a given step of a hypothesised chain of mathematical reasoning via a heuristic along the lines of “number of failed attempts at coming up with a counterexample”.

comment by Ben Pace (Benito) · 2022-07-02T00:12:27.059Z · LW(p) · GW(p)

This post reminded me of the exercises in Calibrating with Cards [LW · GW], a post which very nicely advises what to pay attention to during magic practice.

comment by Martin Čelko (martin-celko) · 2022-09-23T08:50:04.717Z · LW(p) · GW(p)

I  started writing down things I am tracking.

I actually never realized I am tracking so many things.

The problem is, I rarely remember or know what to do with the tracked information.

Let's say I am trying to be engaging and have a discussion.

There could be a number of things to track, from motives and meanings to the specific reasons something is said.

Another thing to track is filling in the gaps. Let's say someone says something incomplete; when engaged, one should fill in the gaps and ask a question or find a way to follow up.

Another thing is to know you are actually communicating the things you think you are communicating.

Or further, when you track your words, whether they actually are understood or misinterpreted, while maybe simultaneously trying to get feedback from the person, be it a verbal or non-verbal reaction.

For me a big part is also facial expression, or being properly engaged.

Let's say I have a good focus and aim to do this.

Well, realistically this will have no real impact as it's a massive number of tasks.

Not to mention being self-conscious in the moment about the various errors.

The other key factor for me is to keep track of communication, to know that whatever I said or made clear is actually in line with my intent.

I also have to track what not to say or do. Which depends on context, and that can be rather tricky to figure out in the moment.

comment by DirectedEvolution (AllAmericanBreakfast) · 2022-07-12T21:46:46.095Z · LW(p) · GW(p)

Finally, the obvious question: what extra information do you mentally track, which is crucial to performing some task well?

I track the age of data.

Here are a couple examples of how this is helpful:

  1. Wikipedia has case counts by country for the 2022 monkeypox outbreak. Portugal was one of the leading countries for a while in number of confirmed cases, but it has since been surpassed by others. However, on closer inspection, the numbers for Portugal haven't been updated since the 7th of July, 5 days ago. In the context of exponential growth, that matters a lot! (A toy sketch of this effect follows the list.)
  2. I'm developing an aptamer for my MS thesis. Currently, I'm looking at the highest-affinity aptamers in the main online database, which has ~750 examples. The top aptamer has impressive affinity, in the fM range. However, I notice that the paper announcing it was published in 1997. This suggests either that the database is not well-maintained (in which case I shouldn't rely on it to be an accurate representation of the state of the field), or that the field of aptamer research is not improving in terms of its ability to produce high-affinity aptamers (in which case that is a worrisome sign for either the technology itself or the capacity of the research community to achieve its potential).
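
As a toy sketch (the doubling time is invented purely for illustration) of how much a 5-day-old number can understate things under exponential growth:

```python
# Toy calculation: how stale data understates counts under exponential growth.
reported_cases = 500      # hypothetical stale figure
days_stale     = 5
doubling_time  = 10.0     # days; assumed for illustration, not from the outbreak

growth_factor  = 2 ** (days_stale / doubling_time)
estimated_now  = reported_cases * growth_factor
understatement = 1 - 1 / growth_factor

print(f"{growth_factor:.2f}x growth since the report; "
      f"~{estimated_now:.0f} cases now; "
      f"the stale figure understates by ~{understatement:.0%}")
# -> 1.41x growth; ~707 cases now; understates by ~29%
```
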
comment by Emrik (Emrik North) · 2022-07-07T19:04:50.974Z · LW(p) · GW(p)

An "isthmus" and a "bottleneck" are opposites. An isthmus provides a narrow but essential connection between two things (landmass, associations, causal chains). A bottleneck is the same except the connection is held back by its limited bandwidth. In the case of a bottleneck, increasing its bandwidth is top priority. In the case of an isthmus, keeping it open or discovering it in the first place is top priority.

I have a habit of making up pretty words for myself to remember important concepts, so I'm calling it an "isthmus variable" when it's the thing you need to mentally keep track of in order to connect input with important task-relevant parts of your network.

When you're optimising the way you optimise something, consider that "isthmus variables" is an isthmus variable for this task.

Replies from: alexander-gietelink-oldenziel
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2022-08-21T12:22:43.525Z · LW(p) · GW(p)

I like the word, and I like the idea of an opposite to "bottleneck".

comment by Aaron Bergman (aaronb50) · 2022-07-02T21:18:06.728Z · LW(p) · GW(p)

One answer to the question for me:

While writing, something close to "how does this 'sound' in my head naturally, when read, in an aesthetic sense?"

I've thought for a while that "writing quality" largely boils down to whether the writer has an intuitively salient and accurate intuition about how the words they're writing come across when read. 

comment by birdy (AyeletSperling) · 2022-07-02T17:24:56.639Z · LW(p) · GW(p)

Now that I read this, I notice that I automatically do this when I'm in school, and that it's much more automatic and frequent in subjects I find easy (I wonder whether it's the tracking that makes it easy, or whether less effort frees up brain space to track?).

In history class, I always keep a mental map of when something happened, why it happened, and what resulted from it. I was very surprised when I found out none of my friends do anything similar, because it's such an obvious tool for seeing the bigger picture and remembering how things fit together for me.

I also tend to track "what was the creator's thought process here?" a lot, both casually and critically. Highly recommend; it helps you see the work and thought that went into it much better, which I personally enjoy a lot (though it also significantly raised my standards for practically all kinds of media as well, so it might not be for everyone).

Tracking seems crucial to deeper understanding of abstract things, and, put like this, I believe most people who are very good at something specific probably track something automatically all the time. Either way, it seems promising to test your hypothesis. I will definitely be asking some people "what are you tracking in your head?"; maybe something useful will come of it.

comment by Mary Chernyshenko (mary-chernyshenko) · 2022-07-02T07:40:33.463Z · LW(p) · GW(p)

Another immediate question is "for which tasks don't you have to do that?", partly because one can then ask the follow-up question of why. For example, I think now that one doesn't have to track extra information when feeding livestock. (A not-too-variable, time-consuming routine.) But I haven't yet really tracked what I do when I do that.

comment by Rami Rustom (rami-rustom) · 2022-07-01T21:50:04.652Z · LW(p) · GW(p)

minor feedback...

Mentally tracking extra information is exactly the sort of technique you’d expect to benefit a lot from excess cognitive capacity, i.e. high g-factor. Someone who can barely follow what’s going on already isn’t going to have the capacity to track a bunch of other stuff in parallel.

note that as one practices tracking something, the effort needed to track it goes down.

i don't think it makes sense to think of it like needing excess cognitive capacity to track things. i think our skill improves to the point of needing little to no excess cognitive capacity. so we only need excess cognitive capacity for new things we want to track. 

Replies from: Linda Linsefors, rami-rustom
comment by Linda Linsefors · 2022-07-02T13:11:01.878Z · LW(p) · GW(p)

That only works for tasks where you get to do a similar enough thing, enough times. This seems true for driving, but less so for most types of research.

My capacity to track information in my head varies from day to day, depending on mood, sleep, etc. I can notice a clear difference in what I can and can't do depending on this. When I have more limited mental capacity, I can still absorb facts, but I struggle to follow complex reasoning or draw independent conclusions (e.g. if this fact is true, what does that predict about the world?).

comment by Rami Rustom (rami-rustom) · 2022-07-01T21:53:02.316Z · LW(p) · GW(p)

and then there's the possibility of slowing down the activity we're doing (or even chopping up the activity into separate phases so you can track things in-between the phases), allowing for more capacity to be used for tracking new things. 

comment by kithpendragon · 2022-06-29T14:56:03.342Z · LW(p) · GW(p)

It may be critical to note that tracking estimates of the internal states of other entities often feels like just having a clue about what's going on. If someone asks us how we came to our intuitions, without careful introspection, we might answer with "I just know" or "I pay attention is how!" or similar.

To unpack a mundane example, here's a somewhat rambly account of some of what I'm tracking in my head while I operate a motor vehicle:

When I'm driving, I don't actively scan through all the sounds and smells and tactile events and compare them with past experiences with this and other vehicles. But I sure notice if the steering is a little tight when my foot is on the brake! Likewise, I couldn't tell you anything about steering and pedal angles, but I can note subtle differences in the way I'm operating the car when there's reason to believe the road may be slick.

Interrogating these facts can reveal that I must be tracking things like the weather, what drive mode I'm in, the sounds of the engine and the tires on the road, the texture of the steering feedback, &c.; but I notice that actually catching myself doing that monitoring can be tricky - especially if I want to perform the task of driving without error. Doing so can detract from my ability to effectively monitor other things like the position of my vehicle on the road, the configuration of traffic in my immediate vicinity, the current distance to my next turn, the distance I'm keeping from the vehicle in front of me relative to my current speed and road conditions ...

But monitoring an estimate of my own internal state is a skill that can be practiced too, and one I must develop if I am to drive safely! I can rightly think of my body-mind-complex as a component in the system I'm trying to operate here, and if the body is drowsy or hungry or the attention is variable in this moment I would do well to adjust my driving to compensate.

Add to this the process of continuously correlating the behavior of other driver-vehicle-systems with possible and likely future movements.

I'm not sure if driving is unusually complex, or if most things go this deep. (Somebody do walking or typing or eating and see if those activities are similarly complicated.) I do know I didn't list everything I know goes into building my intuitions on the road. (Feel free to shout out the important things I missed.)

Replies from: Ericf, kithpendragon
comment by Ericf · 2022-06-30T12:34:53.055Z · LW(p) · GW(p)

Driving is the most complex and demanding thing the median US resident does on a regular basis. Some jobs, like playing professional sports, surgery, courtroom law, or high-end cooking, are probably also demanding. I've never done any of those, but they seem like good examples of needing to monitor all sorts of different things and adjust to changes. No comment on how cognitively taxing they are compared to driving.

Replies from: johnswentworth
comment by johnswentworth · 2022-06-30T16:16:02.350Z · LW(p) · GW(p)

Interesting point, I had never thought of it before.

comment by kithpendragon · 2022-06-29T15:27:35.758Z · LW(p) · GW(p)

Belatedly, it occurs to me that all that is for highway driving. Local driving requires a whole different model. Though many of the inputs come from the same places, the processing is often entirely different.

comment by Xahkafka · 2024-04-17T05:27:26.454Z · LW(p) · GW(p)

Two thoughts:

Many with unusually high "g" appear to have a tendency toward social blindness and trouble with "theory of mind" and tracking others' internal states. I wonder if many have reminded themselves to consciously track cues to others' internal states. Example: hmm, that person is exhibiting signs of boredom and body language indicating a desire to flee when I discuss the coding problem I've been struggling with.

As someone prone to impulsivity in youth, I've come to appreciate a tendency toward intrusive terrible images or "intrusive thoughts". They don't require any conscious effort, but if I'm engaged in something where a lack of attention or a wrong move could cause devastation, my mind will happily flash a fully formed, detailed image of an intense, exaggerated, unpleasant potential outcome. Though I have had to develop ways to dissipate the images so they don't cling.

comment by TeaTieAndHat (Augustin Portier) · 2023-07-20T06:25:37.138Z · LW(p) · GW(p)

There’s something really odd about that: it made me notice that I have either trouble tracking many things like that, or trouble bothering to track these things (since it wasn’t so much of an issue a few years ago, maybe the latter is more likely). Can anyone relate or am I weird? How do I learn to track more?

Also, the advice of asking mentors what they track seems really good!

comment by martinkunev · 2023-03-28T22:57:00.339Z · LW(p) · GW(p)

When outside, I'm usually tracking location and direction on a mental map. This doesn't seem like a big deal to me but in my experience few people do it. On some occasions I am able to tell which way we need to go while others are confused.
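
A rough sketch of what that bookkeeping amounts to, as dead reckoning (the legs and distances are made up; nobody does the trigonometry explicitly):

```python
import math

# Track a position estimate by updating it from each leg's turn and distance.
x, y, heading_deg = 0.0, 0.0, 0.0   # start at the origin, facing "north"

def walk(distance_m, turn_deg=0.0):
    """Turn by turn_deg, then walk distance_m, updating the position estimate."""
    global x, y, heading_deg
    heading_deg += turn_deg
    x += distance_m * math.sin(math.radians(heading_deg))
    y += distance_m * math.cos(math.radians(heading_deg))

walk(200)                 # two blocks "north"
walk(150, turn_deg=90)    # turn right, a block and a half "east"
print(f"About {math.hypot(x, y):.0f} m from the start, "
      f"bearing {math.degrees(math.atan2(x, y)):.0f} degrees from north")
```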

comment by Causal Chain (causal-chain) · 2022-07-07T23:23:28.024Z · LW(p) · GW(p)

This reminds me of dual N-back training. Under this frame, dual N-back would improve your ability to track extra things. It's still unclear to me whether training it actually improves mental skills in other domains.
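
For readers who haven't seen it, a minimal sketch of the dual N-back task logic, just to make concrete how much parallel tracking it demands (the grid size and letter set here are arbitrary choices):

```python
import random

def dual_n_back(n=2, trials=20):
    """Two parallel streams; each trial, check both against n steps back."""
    positions = [random.randrange(9) for _ in range(trials)]      # cells of a 3x3 grid
    letters = [random.choice("CHKLQRST") for _ in range(trials)]  # spoken letters
    for t in range(n, trials):
        # The player must hold the last n position/letter pairs in mind and
        # signal, for each stream independently, whether it matches.
        yield t, positions[t] == positions[t - n], letters[t] == letters[t - n]

for t, pos_match, letter_match in dual_n_back():
    if pos_match or letter_match:
        print(f"trial {t}: position match={pos_match}, letter match={letter_match}")
```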

comment by Richard Korzekwa (Grothor) · 2022-07-04T19:31:03.592Z · LW(p) · GW(p)

When thinking about a physics problem or physical process or device, I track which constraints are most important at each step. This includes generic constraints taught in physics classes like conservation laws, as well as things like "the heat has to go somewhere" or "the thing isn't falling over, so the net torque on it must be small".

Another thing I track is what everything means in real, physical terms. If there's a magnetic field, that usually means there's an electric current or permanent magnet somewhere. If there's a huge magnetic field, that usually means a superconductor or a pulsed current. If there's a tiny magnetic field, that means you need to worry about the various sources of external fields. Even in toy problems that are more like thought experiments than descriptions of the real world, this is useful for calibrating how surprised you should be by a weird result (e.g. "huh, what's stopping me from doing this in my garage and getting a Nobel prize?" vs "yep, you can do wacky things if you can fill a cubic km with a 1000T field!").

Related to both of these, I track which constraints and which physical things I have a good feel for and which I do not. If someone tells me their light bulb takes 10W of electrical power and creates 20W of visible light, I'm comfortable saying they've made a mistake*. On the other hand, if someone tells me about a device that works by detecting a magnetic field on the scale of a milligauss, I mentally flag this as "sounds hard" and "not sure how to do that or what kind of accuracy is feasible".

*Something else I'm noticing as I'm writing this: I would probably mentally flag this as "I'm probably misunderstanding something, or maybe they mean peak power of 20W or something like that"
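
A toy rendering of that first kind of check, using the numbers from the comment (nobody runs this as code; it's just the conservation constraint made explicit):

```python
# Visible light out can't exceed electrical power in, so a claimed
# efficiency above 100% gets flagged rather than believed.
electrical_power_in_W = 10.0
visible_light_out_W = 20.0

efficiency = visible_light_out_W / electrical_power_in_W
if efficiency > 1.0:
    print(f"Claimed efficiency is {efficiency:.0%}: conservation of energy says no, "
          "so it's either a mistake or I'm misreading the claim (e.g. peak vs. average power).")
```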

comment by demarquis · 2022-07-04T19:01:04.048Z · LW(p) · GW(p)

The extent to which you can benefit from asking what someone is tracking in their head, and the degree to which they can usefully explain it to you, will depend critically on how much information, basic to the topic at hand, the two of you already share.

You can learn more from a master, using this technique, the more you already know.

If “cognitive capacity” is the amount of information useful to some specific domain of problem solving one has in one’s head, then everyone on Earth has more or less the same cognitive capacity (excepting only people with some diagnosable mental disability). Everyone has, more or less, the same amount of information in their long term memory as everyone else, applicable to the problems they have had the most experience with during their lives to that point (controlling for age).

If you ask why some people demonstrate greater ability to master certain problem-solving domains than other people do, then the answer lies in the way that human long term memory is organized. Our ability to learn new categories of things depends on our prior possession of categories of similar things: people assimilate new knowledge to old knowledge that shares similar characteristics. For example, the more types of animals one already knows, the easier it is to add another type of animal to memory, which will allow that person to make finer distinctions between specific animals (that’s not just a cat, it’s an American Longhair).

“Tracking” information while considering someone’s explanation of a problem probably includes observing how closely new information corresponds to previously encoded categories. That certainly seems to be what Feynman was doing: he was able to apply a schema based on the characteristics of balls, along with information he knew about set theory, to a proposed set of new information from another expert in his field. When the new information deviated significantly from what he knew about sets of actual balls, he concluded that the new idea was “false” (ie, he could not assimilate the new information into the categories he already had).

Therefore, my prediction is that the more advanced the student in a specific domain, the more they will be able to benefit from asking the teacher what they are tracking in their head; and the more elaborate the set of knowledge categories the teacher has already developed, the easier it will be for the teacher to report tracking something that would be of benefit to the student (plus the degree to which the teacher is able to self-reflect on what is going on in their own head, an entirely different set of skills).

In other words, it’s a technique that is most appropriate for advanced students, and skilled teachers.

comment by topherhunt · 2022-07-04T09:33:51.411Z · LW(p) · GW(p)

When programming, I track a mixed bag of things, top of which is readability: Will me-6-months-from-now be able to efficiently reconstruct the intention of this code, track down the inevitable bugs, etc.?
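
A toy illustration of that check, with made-up names and fields; the point is only that the second version answers the questions future-me will actually ask:

```python
# Version future-me has to reverse-engineer:
def f(xs):
    return [x for x in xs if x.a > 30 and not x.b]

# Version future-me can reconstruct at a glance (the invoice fields are hypothetical):
def overdue_unpaid_invoices(invoices):
    """Invoices more than 30 days old that haven't been paid yet."""
    return [inv for inv in invoices if inv.age_days > 30 and not inv.is_paid]
```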

comment by David Gretzschel (david-gretzschel) · 2022-07-03T07:48:08.536Z · LW(p) · GW(p)

"Finally, the obvious question: what extra information do you mentally track, which is crucial to performing some task well?"

When I try to cook something complicated by recipe, I go over each line of the recipe and previsualize all the corresponding physical actions.
I previsualize the state, amount, location, and the transitions for each object. Objects = {pots, pans, ingredients, oil, condiments, package, piece of trash, volume of water, stove, task-completion times, hands, free seconds/minutes for cleaning during the cook, towel, tissue paper...}.
This tells me where the recipe is underspecified or needs to be adapted to my kitchen, and allows me to fix the uncertainty beforehand, instead of giving me a puzzle in the moment where a bunch of parallelized tasks severely limit the estimated available interruption-free cognitive capacity. I try to go for a high-fidelity visual simulation and run it multiple times (obviously I speed it up).
If the recipe is already chunked into stages, I mentally review them separately. I also think of the "why" of the steps within the recipe. It's far easier to memorize a complex structure if I can logically appreciate why it looks like that. Also, I mentally set markers for expected free minutes, where I have time to re-review the next stage.
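
A toy way to write down the kind of state being previsualized here, purely as an illustration (the objects, fields, and the sauté step are made up):

```python
from dataclasses import dataclass

@dataclass
class KitchenObject:
    name: str
    state: str      # e.g. "raw", "chopped", "simmering"
    amount: str     # e.g. "2 medium", "200 g"
    location: str   # e.g. "fridge", "counter", "front-right burner"

onions = KitchenObject("onions", "raw", "2 medium", "counter")
pan = KitchenObject("pan", "cold and empty", "1", "cupboard")

# "Saute the onions" is underspecified until these transitions are filled in:
pan.state, pan.location = "heating with oil", "front-right burner"
onions.state, onions.location = "chopped", "in the pan"
```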

If I do all of that, cooking something complex becomes quite joyful and easy, instead of stressful.
I am not really a visual thinker. Visual thinking is aversive to me.
Or perhaps... it's more anti-mimetic, as it's just not a cognitive option that naturally occurs to me. Because I'm just far more performant in thinking by combining verbal abstractions. Path dependency and all that.
However, intellectually I know that if I could sharpen and practice my visual-thinking subskill, I could in time dramatically increase my cognitive capabilities.
For example, I recently found the recursive formula for cubes, just by visualizing it whilst drinking coffee in my gym. (no written notes)
m := n + 1
m^3 = n^3 + 3n^2 + 3n + 1
[normally, you'd only use n instead of defining m as the successor, but I find this to be needlessly difficult, because it causes a ton of interference for me]

It was a bit challenging, but also something that I just started doing spontaneously, for fun. And I'm pretty sure I could find the general formula for n^m (m being a natural number) too, next time I have a liminal context that usually ends up seeing me preoccupied with fantasies and mentally rehearsing arguments.
For a true visual thinker, this probably is "just obvious" but this is me shrinking the gap. So... baby steps.
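
(For reference, the general pattern being gestured at is the binomial expansion; the cube identity above is the m = 3 case.)

$$(n+1)^m = \sum_{k=0}^{m} \binom{m}{k}\, n^k = n^m + m\,n^{m-1} + \dots + m\,n + 1$$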

But during my day-to-day cognitive operations, the hyperbolic utility function (a fancy way of saying "impatience") means I don't want to use those underdeveloped skills.
Practicing unusual thought patterns with no clear momentary payoff is frustrating and cognitively exhausting. And if I'm drained like that, I'm at very high risk for the YouTube/book/daydreaming/websurfing/etc. cognitohazards.

But for cooking something difficult, visualization has proven so extremely useful that I'll always do it there now. When I am lazy and just read and execute the recipe as I go along (my prior default), the whole process is far more cumbersome and frustrating, the outcome is unsatisfying, and I don't actually get much better or more comfortable at cooking itself. Because visualization has such incredibly high applicability in this domain, I have far less internal resistance when using it, and can therefore visualize far better in this context than I normally can. And by updating on what I'm actually already capable of, I'm slowly making it more salient/less anti-mimetic/less aversive as an option.

Replies from: tomcatfish, qv^!q
comment by Alex Vermillion (tomcatfish) · 2022-07-26T15:37:52.371Z · LW(p) · GW(p)

I like that bit; I took the time to try it out in my head too, and it was a fun puzzle. I wonder if I can actually get better at visualization by practicing problems of that difficulty level?

Replies from: david-gretzschel
comment by David Gretzschel (david-gretzschel) · 2022-07-26T16:04:22.469Z · LW(p) · GW(p)

Of course. Till they become too easy, then you'd need something harder.
Or you practice speed, I suppose.

comment by qvalq (qv^!q) · 2023-07-05T13:19:13.822Z · LW(p) · GW(p)

I tried to expand (n+1)^4 visually. I spent about five minutes, and was unable to visualise well enough.

comment by Lorz · 2022-07-02T06:50:35.128Z · LW(p) · GW(p)

Isn't this post mostly describing the idea of tacit knowledge, albeit under a different name?

https://en.m.wikipedia.org/wiki/Tacit_knowledge

comment by One Step for Animals (one-step-for-animals) · 2022-07-03T18:54:32.623Z · LW(p) · GW(p)

This sounds a lot like mindfulness.  :-)

comment by flipshod · 2022-07-02T11:14:07.167Z · LW(p) · GW(p)

The example that immediately popped into my head was the difficulty I had training lawyers to conduct a jury trial.

You spend so much time building up a persuasive story to tell, but at trial most of your mental effort is elsewhere.

You have the jury, individual jurors, the judge, the witnesses, opposing counsel, your story, the opposing story, and the rules of evidence and procedure. All of these things are moving and changing at the same time.

It felt unteachable. I've said things like "it takes extreme observation".

comment by [deleted] · 2022-07-03T12:01:26.761Z · LW(p) · GW(p)

Cool post :)

comment by [deleted] · 2022-06-28T20:33:40.453Z · LW(p) · GW(p)

This is essentially what machine learning, especially ANN, is.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-06-28T21:01:00.908Z · LW(p) · GW(p)

I think you are downvoted because it is not clear what you are referring to. Maybe you can elaborate?

Replies from: None
comment by [deleted] · 2022-06-28T21:25:32.771Z · LW(p) · GW(p)

These are metadata derived from something in reality, whether you are working on a computer program or working through some math. The act of working on them is similar to how an ML algorithm runs: it collects data (i.e. we read the equations and form a graph of the variables and components that interact with each other; even these mental models can be considered input to the ANN), forms hidden layers in an ANN, and turns them into output (i.e. actions, writing down a variable, adding some computation steps, etc.). The metadata the author mentions, which we keep track of when doing these tasks, are essentially the hidden layers of the ANN algorithm. Also, everyone runs on a different ANN. The metadata the author mentioned are something I have personally used as well, so there is some rational process behind forming these useful metrics that help us complete the tasks.