Observation
post by LoganStrohl (BrienneYudkowsky) · 2022-02-19T18:47:08.917Z · LW · GW · 23 comments
Imagine that you meet someone you’re attracted to at a party. At one point, they smile at you, and you notice. You’re pretty sure they like you, but you really want to know whether they like you like you.
You don’t act on this in any particular way, but you do spend the whole next week thinking about it. You think about other people who have been into you, and about people who have not, and the differences between them. You muse about what sort of taste in romantic partners you imagine the person might have. By the end of the week, you’re weighing your virtues and vices, trying to decide whether you’re even worthy of love.
(If this seems alien to you, I hope it is at least true to your experiences of some humans.)
In the moment when you noticed you were attracted to the person, you made an observation. In the moment when you noticed their smile, you made another. In the moment when you noticed your curiosity, you made another.
But as soon as you vanished into your own musings, you were no longer making observations. You were no longer collecting data. Instead, you were interpolating, extrapolating, filling in the gaps with stories and guesses, processing and reprocessing. Everything that followed, in the week after the party, took place inside your map—analysis, interpretation, reasoning, reflection.
In Arthur Conan Doyle’s “A Scandal in Bohemia,” Sherlock Holmes lectures Watson on the difference between seeing and observing:
“You see, but you do not observe. The distinction is clear. For example, you have frequently seen the steps which lead up from the hall to this room.”
“Frequently.”
“How often?”
“Well, some hundreds of times.”
“Then how many are there?”
“How many? I don't know.”
“Quite so! You have not observed. And yet you have seen. That is just my point. Now, I know that there are seventeen steps, because I have both seen and observed.”
I don’t know how many steps there are on the staircase up to my own living room, either. Setting aside the question of prioritization, and whether I should be turning my attention there—what is it, exactly, that Watson and I are doing with the steps?
My guess is that we’ve taken some initial impressions—a few moments of impact from the external world—and used those points to draw a constellation. Every time we walk up the steps, we do almost all of our processing on the constellation, rather than on the points of light in the sky.
Most of our “seeing” the stairs is happening inside of our maps. We observe just enough to recognize that we’re about to encounter the well-understood “stairs” entity, and then we superimpose our “stairs” concept over whatever sensations are happening to us, and stop paying attention. To the extent that our brains record anything, it’s that we “climbed up the stairs,” rather than that we felt some number of impacts under each of our feet, while the muscles in our legs contracted and our heart rate climbed slightly, etc.
Imagine that you do end up asking the cute person from the party to meet you for coffee, but when the day comes, you’re extremely distracted by a disaster at work, one you’ll have to return to as soon as the date is over. Despite a whole hour of conversation, you leave feeling like you’ve learned almost nothing about them.
Crucial data was all around you, but while you saw it, you failed to observe any of it.
It is hardest to make fresh observations about things you have seen many times. The stairs, long-held beliefs, attitudes you were raised with. The more often you superimpose your drawing of a constellation over points of light in the sky, the more opaque your drawing becomes.
It probably doesn’t really matter that I have seen-but-failed-to-observe my stairs. I never miss a step, and I’m not in a murder mystery whose solution might depend on how many steps there are.
It certainly does matter, though, if I have seen-but-failed-to-observe the way I make requests of my child, especially if I haven’t even noticed the distinction. If I believe I’ve observed when I’ve really only seen, I’m much less likely to start paying attention, or to hypothesize that I may have gotten something wrong. If we’re going to be close for a long time, we need to be able to communicate with each other, not just with the cartoon drawings we habitually plaster over each other’s faces.
It also matters if I’ve seen-but-failed-to-observe the factors that cause me to continue on my current career path, what I count as evidence, or my default response when my expectations are violated.
Seeing-but-not-observing is a failure to make contact with a bit of territory that is right in front of you. It is standing at the bank of a river while staring at the part of your map labeled “river”. Often that’s good enough; but sometimes the river is flooded when you need to cross, and then you really have to lower your map and make contact with crucial data. You have to look at the world itself, or else you’ll drown.
In the sentence “Knowing the territory takes patient and direct observation,” this is what I mean by “observation.” I mean actual contact with the territory. Looking at the stars themselves, instead of letting the constellation fill your mind as your eyes glaze over.
Knowing the territory takes patient and direct contact with the territory.
In the next two essays, I’ll talk about two ways of being in contact with the territory: directly, and patiently.
23 comments
Comments sorted by top scores.
comment by Benya Fallenstein (benya-fallenstein) · 2022-02-20T01:57:09.243Z · LW(p) · GW(p)
Given our discussion on the "territory" essay about how the "in contact with the territory vs. all in the map" distinction has been confusing me, I've been trying to find a way to think about the "observing vs. merely seeing" distinction without identifying it with the other one.
{My first attempt to phrase it that seems to be actually at all helping with my confusion} is this: "Observation (in the relevant sense) is bringing my {anticipations / implicit models} in contact with something that might {contradict / resist / collide with / set / correct / change} them in a way that would make them better reflect the territory." (Where by "might" I mean something like "has a reasonably good chance to".)
Thus under this attempted definition, doing math about, say, star formation could be "observation" in the relevant sense (about the stars, not just about math) even though it doesn't involve directly getting data from the stars, because it can collide with a person's implicit models of how star formation works in a way that would tend to cause them to reflect reality better. And, of course, directly recording data from a telescope and using it to test hypotheses would be observation.
On the other hand, repeatedly walking up the stairs without paying attention would not be (much) observation of the stairs, because it would be very unlikely to change my anticipations about things like how many steps there are. And moreover, counting how many numbers there are on my bedroom clock would also not be much observation (of what the clock is like), even though it does involve getting data directly from the clock, because it would also be very unlikely to change my anticipations about the clock (because I am very confident that I know the answer).
I'm not sure whether "observation" is actually a good handle for the cluster I'm drawing here, but I think I probably do think that the cluster I'm drawing here helps me with cashing out the phrase "contact with the territory" in "knowing the territory takes patient and direct contact with the territory" in a way that isn't based on the "in contact with the territory vs. all in the map" dichotomy.
I keep wanting to add something to this comment about how this attempted definition might apply to something like Googling or asking an expert, but I think I actually still have too much confusion there, and I guess the comment seems sufficiently worthwhile to me even without that.
[meta note: what i'm doing above is trying to find articulations of the intuitive picture that is emerging in my mind, which is hopefully in the vicinity of what you Logan meant to communicate but might not be]
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-20T03:35:21.370Z · LW(p) · GW(p)
I had a different conversation with Robin in the draft documents that I think was very relevant, but I can't find that one either. Blerg.
Anyway, "the thing that resists expectation" is my current best way of identifying "the territory", at least inside my own head. This has been true since gyroscopes and my study of fabricated options. I think the takeaway from that conversation I can't find was something like, "The ease with which your incorrect expectations encounter resistance-that-you-perceive-as-resistance corresponds to the directness of your observation." Which sounds to me a lot like what you're saying here.
[I'm just gonna explicitly flag here that my confidence throughout this comment goes down, and my feelings of confusion and other signs that I'm swimming in pre-theoretic soup go up. I'd be surprised to find in a year that nothing I say here is outright false, even the things that are descriptions of my immediate experiences.]
I currently expect that ultimately, my concept of "the territory" needs to be one where math absolutely is a study of the territory, and if I'm using a version of the "territory" concept that lacks that property, I don't have it right yet.
I'm not sure about math in general, but according to my current understanding, logic is a study of maps. Uh, hm, I can't belief report the previous sentence. Let's try again: According to my current understanding, formal semantics, as a field, is a study of maps. No, that's not quite right either. (My understanding of) formal semantics is like, "We have these rules. What systems obey these rules? If some part of the world obeyed these rules, what might that part of the world look like?" So formal semantics makes maps of a special kind, a kind that follows sets of logical rules. It's sort of backwards cartography. Instead of trying to draw pictures of the world, formal semantics is trying to draw pictures of how the world would be if it had certain properties. Like illustrating a novel, but for math kids instead of art kids.
Logic itself... Is chess a map? Is it even a drawing? I... do not think so, no! A logic is a set of formation and derivation rules, and what a logician does is study the behaviors of that collection of entities. A logician is like, "How do the thingies behave if you swap this derivation rule for that one? What are the relationships between the old set and the new set? What happens if this particular bananas connective gets to be part of a well-formed formula? Let's find out!" (Maybe. I'm obviously talking about myself. I don't know what real logicians do.) Maybe if you're some kind of applied logician or something, you spend a lot of time trying to find logics that govern whatever bit of the world you're interested in, like databases or something, and then you spend a bunch of time making maps so you can study the relationship of the logic and the map. (Is that what programmers are? Applied logicians in this sense?) But my point (it turns out) is that logicians are not mainly working with maps, any more than basket weavers are.
I'm even less clear on what math is than on what logic is, but my only attempts to understand what math is that I've ever been at all satisfied with have led me to think of it as a special case of logic. Math is what happens when you pick a logic, then pick a set of assumptions, then just leave those foundational assumptions in place forever until you realize there's something you don't like about them, at which point maybe you lead a revolt and start a new religion where you're not just trivially allowed to pick out the smallest number from each of a set of sets of natural numbers, or whatever. And if math is some kind of special case of logic, then, seems like it doesn't have any more to do with maps than basket weaving does, either?
Anyway, perhaps that was all nonsense, I really have no idea. Hope some of it bumps into you in useful ways.
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-20T03:45:21.774Z · LW(p) · GW(p)
I found the conversation with Robin. (Or, rather, Robin found the conversation. Thanks, Robin.)
Robin: [shared with permission]
[talking about a bit I was trying to re-write in the next essay, Direct Observation]
First paragraph feels great.
The last line, though, "The processing distance is finite", immediately raises the question of "WOAH what would infinite / extremely large processing distance be...?" and I'm off thinking about that.
Second paragraph: makes sense, and I agree with you, but I have some expectation of... explanation? You've stated that the processing distance is shorter, but without further words on how you know / what that's made out of.
(And how does a person know? By poking something and examining the sensation, and comparing it to the sensation of planning something? It might be a good moment to prompt the reader to try it and examine the difference?)
Logan:
I tried to write about this, but it was long and clumsy. I'm not sure if I'll take a second pass at it, or if I'll just leave it out (especially since it's a pretty new and not-very-investigated line of thought for me). But I thought you'd like to see what I have:
Here’s a test for estimating processing distance: If your expectations happened to be incorrect, how easy would it be to feel the resistance of reality?
I have some expectations around what kinds of foods a cat will enjoy. If you’d asked me two weeks ago, “What treats should I offer my cat while helping him become more comfortable around vacuum cleaners?” I’d recommend little chunks of fresh chicken, beef, or seafood. That was my implicit belief for many years, while I did not actually know any cats.
Recently, while preparing to adopt a kitten, I looked into this question, and multiple websites confirmed that super high-value treats for cats include fresh poultry, meat, and seafood.
But when I actually got the kitten, reality resisted my expectations by literally spitting out chunks of fresh chicken, grilled steak, and shrimp, right in front of me. It would have been hard for me not to notice this happening. It may be true that cats in general enjoy fresh chicken; but it was immediately apparent to me, as I watched Adagio literally turn up his nose at the tasty morsel, that this one does not.
If I’d never offered Adagio chicken, I could have gone on believing indefinitely that he likes chicken. There would have been no opportunity to encounter resistance through my imagination, through books by cat experts, even through personal interactions with other cats. It was only by offering chicken directly to my real-life chicken-hating cat that I gained the opportunity to notice I was wrong.
When I imagined Adagio eating chicken, my imagination filled in a picture of him munching eagerly, even trying to swipe additional bits from my hand. The processing distance between cat palates and human imaginings of cats is quite large, so there’s little opportunity for expectation to encounter resistance. When I read accounts of cats, the distance is a bit less, because those words are downstream of observation of actual cats; if I’d been so wrong as to expect Adagio would like oranges, googling probably could have set me right. Cats supposedly hate citrus. (I offered him an orange just now, to be sure. He got up and backed away from it.)
But even if my priors are strong, and even if I’m very stubborn or invested in my expectations—if I’ve just spent a lot of money on fancy cat food featuring fresh chicken, for instance—I’d have to work pretty hard not to notice my surprise when I offer an actual chicken-hating cat an actual piece of chicken from my own hand.
[end attempted essay section, begin talking directly to Robin again]
so to answer your question, according to my working model, infinite processing distances are those that offer no opportunity whatsoever to encounter resistance from reality when your expectations are incorrect. the distance between snakes and cello sonatas is extremely large, because snakes don't have ears. (not a perfect example, since they do have some other methods of sensing vibrations.) if a snake is somehow mistaken about the properties of a cello sonata, it will have to work extremely hard to even find an opportunity to discover this, nevermind making use of that opportunity. it might have to, i don't know, learn to read musical notation, and even then a lot of potentially crucial information (like the overtones created by the shape of the cello body, or a particular artist's interpretation) will not be available.
(i don't know why i'm talking about snakes. deaf humans also exist.)
so it's like, how tightly entangled is any given experience with the bit of reality you're hoping to learn about from that experience? the totally sound-free experience of smelling a tasty cricket in front of your tongue is not very entangled with a musical performance, even if the performance is happening just meters away from you. a scratchy recording of that performance on an ancient beaten-up record heard decades later on the other side of the planet, much more so.
Robin:
(I did want to see this, yes!)
>”Here’s a test for estimating processing distance: ‘If your expectations happened to be incorrect, how easy would it be to feel the resistance of reality?’”
Oh, I like this. (Though, “the resistance of reality” throws a little ‘?’ for me when I look at it too closely; I can tell my felt-sense for “noticing surprise/confusion” is bubbling up to take the space of the phrase when I don’t look too closely, though, so I must think you mean something like that.)
>”When I read accounts of cats, the distance is a bit less, because those words are downstream of observation of actual cats; if I’d been so wrong as to expect Adagio will like oranges, googling probably could have set me right.”
I feel torn on casting reading/hearing another’s words as being any closer to the territory than imagining, since it doesn’t actually provide an opportunity to encounter the ‘resistance of reality’. (Instead, it seems like straightforwardly an updating-my-map-with-your-map activity?)
That said… you can update your map using someone else’s map so that it’s easier to actually encounter something in the territory. (For example, you could’ve gone your whole life without offering an orange to a cat; but you did try, you moved to encounter that territory, ~because someone shared a map that said you would see a particular thing (cat-upset) at a particular place (the interaction between cat and orange)).
There is a sort of… temporal closeness there?
I feel like something important is solidifying for me here; something about how maps are actually useful as maps, as in they might point you toward the appropriate sections of territory for whatever it is you want to look at. Before I was just thinking of ‘maps’ as a useful metaphor for abstraction. Hm.
also... something something the ability to manipulate the territory as a sign that you are maximally close to it?
I typed that, then switched to reading A Process Model by Gendlin, and the following sentences popped out at me:
"As ongoing interaction with the world, our bodies are also a bodily knowing of the world. It is what Merleau-Ponty calls 'the knowing body' (le corps connaissant). Such knowing consists in much more than external observation; it is the body always already interacting directly with the situation it finds itself in."
The (seeming) relevance made me laugh.
Where does 'interaction' fit in all of this anyway?
Logan:
it somehow fits into the heart of deep mastery [from "Knowing"]
[end of my conversation with Robin]
↑ comment by Benya Fallenstein (benya-fallenstein) · 2022-02-21T01:02:46.994Z · LW(p) · GW(p)
Where does 'interaction' fit in all of this anyway?
Logan:
it somehow fits into the heart of deep mastery [from "Knowing"]
Ooh huh hmmm!
I had missed this before, but… I think achieving deep mastery is actually not the goal of {the part of my work I consider most important}. Or, to be more precise, it's not the job of this part of my work to produce deep mastery. I think.
(The Knowing article describes deep mastery as "extensive familiarity, lots of factual knowledge, rich predictive and explanatory models, and also practical mastery in a wide variety of situations".)
The job of this part of my work is to make contact at all, and to nurture this contact just enough that it becomes possible to deepen that contact with more ordinary methods, like actual mathematical models. (Which are also an important part of my work, but do not as much seem like the bottleneck.) This part of my work isn't supposed to produce extensive familiarity, lots of factual knowledge, etc.
Metaphorically, it's like an expedition that travels deep into a jungle trying to find a viable route for a road, or something. They never see the actual road–once they've just marked off where the road may one day go, they move on to the next project. Their work is actually quite different from that of the people coming in after, who cut the trees and build the bridges and pave the road. Those latter people always have the road behind them which connects them to civilization, so they can truck in supplies and it's basically a normal construction job, if one at the frontier. The expedition people are on their own, and can't carry enough food to last the whole expedition, so they need to live off the jungle.
That... probably explains some of my confusion.
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-21T22:10:21.017Z · LW(p) · GW(p)
yeah that makes sense.
i think this sequence is probably meant as a letter to aspiring rationalists in particular. to some extent, it's like, "look if you're trying to learn rationality and you're not using methods that are aimed at deep mastery then you are doing it wrong".
↑ comment by justinpombrio · 2022-02-21T16:53:17.353Z · LW(p) · GW(p)
There's a piece I think you're missing with respect to maps/territory and math, which is what I'll call the correspondence between the map and the territory. I'm surprised I haven't seen this discussed on LW.
When you hold a literal map, there's almost always only one correct way to hold it: North is North, you are here. But there are often multiple ways to hold a metaphorical map, at least if the map is math. To describe how to hold a map, you would say which features on the map correspond to which features in the territory. For example:
- For a literal map, a correspondence would be fully described (I think) by (i) where you currently are on the map, (ii) which way is up, and (iii) what the scale of the map is. And also, if it's not clear, what the marks on the map are trying to represent (e.g. "those are contour lines" or "that's a badly drawn tree, sorry" or "no that sea serpent on that old map of the sea is just decoration"). This correspondence is almost always unique.
- For the Addition map, the features on the map are (i) numbers and (ii) plus, so a correspondence has to say (i) what a number such as 2 means and (ii) what addition means. For example, you could measure fuel efficiency either in miles per gallon or gallons per mile. This gives two different correspondences between "addition on the positive reals" and "fuel efficiencies", but "+" in the two correspondences means very different things. And this is just for fuel efficiency; there are a lot of correspondences of the Addition map.
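If it helps, the fuel-efficiency point can be made concrete with a small sketch (my own illustration, with hypothetical numbers): the same map, "addition on the positive reals", corresponds to two different real-world events depending on which units you hold it in.

```python
# Two correspondences between "addition on the positive reals" and fuel efficiency.
# Hypothetical cars: A does 20 miles per gallon, B does 40 miles per gallon.
mpg_a, mpg_b = 20.0, 40.0
gpm_a, gpm_b = 1 / mpg_a, 1 / mpg_b  # the same facts, measured as gallons per mile

# Correspondence 1: "+" on gallons-per-mile means
# "fuel used driving one mile in car A, then one mile in car B".
fuel_for_two_miles = gpm_a + gpm_b  # ≈ 0.075 gallons

# Correspondence 2: "+" on miles-per-gallon means
# "distance covered burning one gallon in car A, then one gallon in car B".
miles_for_two_gallons = mpg_a + mpg_b  # 60 miles

# Holding the map the wrong way gives wrong answers: naively adding mpg
# does NOT give the efficiency of the two-mile trip.
actual_trip_mpg = 2 / fuel_for_two_miles  # ≈ 26.67 mpg, not 60
```

Both correspondences are perfectly accurate; they just attach "+" to different events in the territory.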
- The Sleeping Beauty paradox [? · GW] is paradoxical because it describes an unusual situation in which there are two different but perfectly accurate correspondences between probability theory and the (same) situation.
- Even Logic has multiple correspondences. "∀x. φ" and "∃x. φ" mean, in various correspondences: (i) "φ holds for every x in this model" and "φ holds for some x in this model"; or (ii) "I win the two-player game in which I want to make φ be true and you get to pick the value of x right now" and "I win the two-player game in which I want to make φ be true and I get to pick the value of x right now"; or (iii) something about senders and receivers in the pi-calculus.
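For what it's worth, readings (i) and (ii) of the quantifiers can be sketched over a finite domain (a toy model of my own, with a hypothetical predicate):

```python
# Two correspondences for the quantifiers over a finite domain.
domain = [0, 1, 2, 3]
phi = lambda x: x < 3  # hypothetical predicate: true of 0, 1, 2; false of 3

# (i) model-theoretic reading: "phi holds for every / some x in this model".
forall_model = all(phi(x) for x in domain)
exists_model = any(phi(x) for x in domain)

# (ii) game reading: for "forall x. phi", my opponent picks x trying to
# falsify phi, so I win iff phi survives the opponent's best pick;
# for "exists x. phi", I get to pick x myself.
forall_game = min(phi(x) for x in domain)  # opponent picks the worst x for me
exists_game = max(phi(x) for x in domain)  # I pick the best x for me

# The two correspondences assign the same truth values while "meaning"
# quite different things about the territory.
assert forall_model == bool(forall_game)
assert exists_model == bool(exists_game)
```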
Maybe "correspondence" should be "interpretation"? Surely someone has talked about this, formally even, but I haven't seen it.
↑ comment by Aleksi Liimatainen (aleksi-liimatainen) · 2022-02-20T06:05:58.208Z · LW(p) · GW(p)
On the map/territory distinction for math, I feel like a formal system instantiates a territory, operating on the system maps that territory, and correspondences between the system and things outside it are map-like.
↑ comment by Benya Fallenstein (benya-fallenstein) · 2022-02-20T10:06:12.199Z · LW(p) · GW(p)
the conversation with robin you quoted did feel relevant, but the parent comment felt like it was too focused in on math and thereby somewhat orthogonal to or missing the point of what i was trying to figure out. (the real thing i'm interested in isn't even about math but about philosophical intuitions.)
this made me want to try to say the thing differently, this time using the concept of gears-level models:
https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding [LW · GW]
(maybe everything i'm saying below is obvious already, but then again, maybe it'll help.)
suppose that you are looking at a drawing of a simple mechanism, like the second image in the article above, which i'll try to reproduce here: [image of the gear mechanism]
if the drawing is detailed enough and you are able to understand the mechanism well enough, then you can reason out how the mechanism will behave in different circumstances; in the example, you can figure out that if the gear on the top-left turns counter-clockwise, the gear on the bottom-right will turn counter-clockwise as well.
but you haven't interacted with the mechanism at all! everything you've done has happened inside your map!
nevertheless, if you understand how gears work and you look at the drawing and think in detail about how each gear will turn, your model resists the idea that the bottom-right gear can turn clockwise while the top-left one turns counter-clockwise. it might be that your model of how gears work is wrong, or it might be that the drawing doesn't accurately represent how the mechanism works, or you might be misunderstanding it, or you might be making a mistake while thinking about it. but if none of these is true, and the top-left gear turns counter-clockwise, then the bottom-right gear has to turn counter-clockwise as well.
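one way to see why the model "resists" here: meshed gears counter-rotate, so in a simple chain the last gear's direction is fixed by parity alone. a toy sketch of that rule (my own simplification; the branching mechanism in the drawing is more complicated, and `n_gears` here counts gears along a single path):

```python
def final_direction(first_direction: str, n_gears: int) -> str:
    """Direction of the last gear in a simple chain of meshed gears.

    Each mesh point flips the direction of rotation, so with n_gears
    in a chain there are (n_gears - 1) flips: the last gear matches
    the first iff the chain length is odd.
    """
    flips = n_gears - 1
    if flips % 2 == 0:
        return first_direction
    return "clockwise" if first_direction == "counter-clockwise" else "counter-clockwise"
```

this is the sense in which the conclusion is forced: given the drawing and the counter-rotation rule, the bottom-right gear's direction isn't free to vary.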
when you work out from the drawing how the bottom-right gear has to turn, are you in contact, not just with any part of the territory, but with the part of the territory that is the actual physical gears? even though you are not physically interacting with the actual gears at all, just thinking about a map of the gears?
the way i'm thinking about it in the top-level comment is that since this process is able to resist misconceptions you may have, and is thereby able to bring your {anticipations / implicit models} about the physical gears more in line with reality, therefore yes it is "contact with that part of the territory" in the sense that is relevant to "knowledge of the territory takes patient and direct contact with the territory."
"a map that clings super tightly to the territory" is a phrase from duncan's reply to my comment on "The Territory" [LW(p) · GW(p)] which seems to me to describe gears-level models well.
– # –
i should note that the thing i'm ultimately interested in, namely the way i use philosophical intuitions in my work on agi alignment, isn't even anywhere as detailed as a gears-level model. nevertheless, i still think that these intuitions cling sufficiently tightly to the territory that this work is well worth doing. in the ontology of my top-level comment, my work is betting on these intuitions being good enough to be able to resist and correct my implicit models of agi alignment, and to therefore constitute significant contact with this region of the territory.
something i don't know how to reflect well in a comment like this, and think i should say explicitly, is that the game i'm playing here is not just to find a version of logan's sentence that covers the kind of work i do. it is to find a version that does that and additionally does not lose what i thought i understood when i was taking "contact with the territory" to be the opposite of "it all happens in your map", and therefore would have taken {thinking about a drawing of the mechanism} as not being in contact with the territory, since it consists entirely of thinking about a map.
for some reason i haven't really figured out yet, it seemed really important for this to say that in order to be "contact with the territory", an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
(i tried to say some more things about this here, but apparently it needs to gestate more first. it wouldn't be very surprising if i ended up later disagreeing with my current model, since it's not even particularly clear to me yet.)
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-21T22:07:22.968Z · LW(p) · GW(p)
>for some reason i haven't really figured out yet, it seemed really important for this to say that in order to be "contact with the territory", an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
yes i absolutely agree, and i think this intuition that we share (...or, the apparent similarity between our two intuitions?) is a lot of what's behind the Knowing essay. something something deep mastery.
i'm only saying one thing in this whole essay series, just from a bunch of different angles, and this bit of your comment for sure picks out one of the angles.
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-21T22:12:42.505Z · LW(p) · GW(p)
or maybe more importantly, if you're trying to develop rationality, as an art, to be practiced by a community whose actions matter, and you aren't somehow aimed at deep mastery, then you're doing it wrong.
[edit: oh i think i somehow accidentally put this in the wrong sub thread]
↑ comment by Benya Fallenstein (benya-fallenstein) · 2022-02-21T23:41:45.582Z · LW(p) · GW(p)
but the parent comment felt like it was too focused in on math
er, sorry, too focused in on math for it to help me with the thing i'm trying to figure out, in a way i was quickly able to recognize, i meant. i didn't mean to assert that it was just too focused in on math for a comment, in some generic purpose-independent way! 😛
comment by Phil Scadden (phil-scadden) · 2022-02-19T18:54:21.913Z · LW(p) · GW(p)
I am pretty interested to see where this goes. Making good observations seems very dependent on what we are looking for (and on having a vocabulary for the observation). Ways to break through some of these borders?
↑ comment by RobinGoins (Daunting) · 2022-02-19T20:58:15.480Z · LW(p) · GW(p)
I like where your mind is at here, particularly that you’re gesturing at the want for vocabulary.
Further questions:
Where does vocabulary even come from? How does it get made? What’s the process of creating new words for a field? Is observation actually dependent on having relevant vocabulary? What is a new concept made of?
What if you want to make progress in a new field that has no vocab yet? (How do you even know there's a place to explore if no vocab exists yet? How is it found?)
↑ comment by Phil Scadden (phil-scadden) · 2022-02-20T19:43:02.269Z · LW(p) · GW(p)
To me vocabulary (which I think is a brain shortcut to a category/concept) is a big help in seeing. I read "Landmarks" (Robert MacFarlane), which was about specialised vocabularies, and I enjoyed some of the odd words. One was "smeuse" - a hole in a hedge or fence made by the repeated passage of animals. The thing is, once I had read about it, I suddenly started noticing them. But to your question as to where the words come from: the vocabularies in Landmarks come from the specialised needs of people in particular environments. Peat-diggers need more specialised words to describe peat bogs to survive and prosper.
So observation does precede vocabulary. Science is full of it - every field has to develop a specialized vocab to communicate observation. But once there is a vocab, it strongly assists observation. Can this hinder seeing? Yes, that too. The brain will take whatever shortcut it can, and schemata will miss plenty when the brain has more urgent things to do. Watson's excuse for not knowing the number of stairs would be that he never needed to - he had more important things to think about.
But I think there are ways to employ both. Early in my career, I had to do a fair amount of mudlogging on coal exploration wells - a boring but vital job. We had a standardized vocabulary for describing what we saw, structured into a list. Working your way through it, metre by metre, kept you observing what was important even when bored out of your skull. And at the end of the list was "what is different?" - a key to making a novel observation that was outside the parameters of the list.
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-20T01:08:33.286Z · LW(p) · GW(p)
As a relatively non-verbal person, I always feel like someone is walking upside down with legs sticking out of their head when they make claims about vocabulary being necessary for things besides talking to each other. There must be quite the inferential gap here. Wh... Whyy? Why would having vocabulary for the observation be important to making a good observation? Maybe you mean something I'm not expecting by "good"? Or by "vocabulary"?
I also don't understand "Making good observations seems very dependent on what we are looking for". Do you mean something like, "Whether or not we deem an observation to be 'good' depends on why we're making observations, since 'goodness' only exists in relation to goals?"
Perhaps I just somehow completely don't understand this comment at all. But I guess Robin did? I wonder what Robin heard.
↑ comment by Phil Scadden (phil-scadden) · 2022-02-20T19:55:42.898Z · LW(p) · GW(p)
Do you mean something like, "Whether or not we deem an observation to be 'good' depends on why we're making observations, since 'goodness' only exists in relation to goals?"
Frankly, yes. I would be regarded as a very absent-minded person, for the usual reason of spending a lot of time thinking and being pretty much oblivious to other things. I like my daily life structured by habit so my brain is unencumbered by paying attention to the mundane. I don't claim this as a good thing, but it is what I am. The meaning of "observation" to me is strongly rooted in my training, and something I "turn on" when required (or when it suits, as when out tramping or exercising). I notice landscapes, I notice plants, I notice rocks, as these are things that I have some training in seeing. I like people-watching, but I would say that I am very much observing in relation to a goal.
↑ comment by Harmless · 2022-02-20T02:41:53.295Z · LW(p) · GW(p)
A surprising amount of human cognition is driven purely verbally/symbolically - I recall a study showing that, on average, people whose native language had much more concise wording/notation for numbers could remember much longer numbers. As a relatively verbal person, my intuition about the relationship between observation and vocabulary would be that to know something is to be able to say what it means to know it, but then again it's possible that my case doesn't generalise and that I just happen to rely on symbol-pushing for most of my abstract cognition (at least, that portion of abstract cognition that isn't computed using spatial reasoning).
I was going to write
"Making an observation isn't an atomic action. In order to compress noisy, redundant short-term sensory data into an actual observation stored in long-term memory, you need to perform some work of compression/pattern recognition - e.g. the sensory data of ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ is compressed into the observation '17 steps' - and how you do that is a partially conscious decision where you have to choose what type of data to convert ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ into."
But in retrospect it's possible that from your perspective ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ is the thing-you've-been-using-the-word-observation-to-mean, and you can store that in your long term memory just fine, and I just happen to throw away or refuse to reason about everything that isn't sufficiently legible.
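The compression step this comment gestures at could be sketched as a toy program - purely illustrative, with the "▟" token and the "N steps" summary format taken from the comment's own example rather than from any real algorithm:

```python
# Toy sketch of the "compression" step described above: collapsing a run of
# redundant sensory tokens into one compact observation. The token "▟" and
# the "N steps" output format are illustrative assumptions from the comment.
def compress_to_observation(sensory_data: str) -> str:
    """Count repeated stair-glimpses and summarise them as one observation."""
    count = sensory_data.count("▟")
    return f"{count} steps"

raw_seeing = "▟ " * 17   # seventeen separate glimpses of a staircase
observation = compress_to_observation(raw_seeing)
print(observation)  # prints "17 steps"
```

The point of the sketch is only that the choice of *what to count* (steps, rather than colours or textures) happens before the observation exists, which is the "partially conscious decision" the comment describes.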
↑ comment by LoganStrohl (BrienneYudkowsky) · 2022-02-20T04:53:10.254Z · LW(p) · GW(p)
I don't think that observation has much to do with what you do or do not store in long term memory. I think that the more direct an observation is (the more it is like observation rather than seeing), the more difficult it will be to retrieve using your pre-existing conceptual framework as a search method. It will be harder, not easier, because the thought will be more like the world and less like you, but the you who looks for it later will be more like you and less like the world.
I think I spend an unusual percentage of my time making unusually direct observations, and I think this has something to do with how my memory is weird. In some ways I have an extraordinarily fantastic memory, and in other ways I have such an awful memory that I'd be worried about dementia if it hadn't been like this my whole life. If you ask me what a movie I watched was about, I'll say things like... Actually, how about I do that for real, so I can show you.
Recently I watched a movie whose name I've forgotten. Duncan will know what it was. The movie was about... I'm not sure, maybe something about independence, and I think the plot involved a funeral. I could probably piece it together and translate it into words if I thought about it for a while. But mostly I remember that there were chickens in a bus at the end, and a lot of green, and somebody killed a deer while wearing mud on their face, and a pretty red-haired girl reading a Brian Greene book that was white, and there were really a lot of green plants, and a scary hospital beeping, and "Let's go shoppiiiiiiingggg!", and guitar music, and the guy shaved, and the dumb kids whose parents lied to them had to take showers. Just for a start.
But even though I can tell you details about the chickens a lot more easily than I can tell you about the plot (the chickens were white, and the eggs were tan, and the chickens made certain sounds I could imitate for you if you could hear my voice, and there were at least two of them, and I can describe the camera angle, and the clothes of the kids who were taking their eggs, and so forth), the overall large artistic message of the film had an enduring impact on me. I think slightly different kinds of thoughts now than I did before I saw it. I could probably approximately characterize the differences for you in words if I worked at it, but the fact that I would have to work at it, and that I wouldn't be perfectly satisfied with the result, doesn't change the fact that it meant something to me and I probably even understood the movie as a whole, in the sense that it had a (probably intended) impact on my thoughts and feelings and behaviors long after it was over, one that I continue to dialogue with and weigh and integrate.
I'm not sure what my point is here. I'm just saying things that apparently I wanted to say after reading your comment. I hope you like some of them.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-02-20T04:55:48.509Z · LW(p) · GW(p)
(Captain Fantastic)
↑ comment by tcheasdfjkl · 2024-06-23T20:47:05.974Z · LW(p) · GW(p)
I think, for me, memory is not necessary for observation, but it is necessary for that observation to... go anywhere, become part of my overall world model, interact with other observations, become something I know?
and words help me stick a thing in my memory, because my memory for words is much better than my memory for e.g. visuals.
I guess that means the enduring world maps I carry around in my head are largely made of words, which lowers their fidelity compared to if I could carry around full visual data? But heightens their fidelity compared to when I don't convert my observations into words - in that case they kind of dissolve into a vague cloud
...oh but my memory/models/maps about music are not much mediated by words, I think, because my music memory is no worse than my verbal memory. are my music maps better than my everything-else maps? not sure maybe!
comment by lynettebye · 2022-06-24T11:51:38.670Z · LW(p) · GW(p)
(Copying over FB reactions from while reading) Hmm, I'm confused about the Observation post. Logan seems to be using direct observation vs map like I would talk about mindfulness vs mindlessness. Except for a few crucial differences: I would expect mindfulness to mean paying attention fully to one thing, which could include lots of analysis/thinking/etc. that Logan would put in the map category. It feels like we're cutting reality up slightly differently.
↑ comment by lynettebye · 2022-06-24T11:52:21.384Z · LW(p) · GW(p)
Speculating here, I'm guessing Logan is pointing at a subcategory of what I would call mindfulness - a data point-centered version of mindfulness. One of my theories of how experts build their deep models is that they start with thousands of data points. I had been lumping frameworks along with individual observations, but maybe it's worth separating those out. If this is the case, frameworks help make connections more quickly, but the individual data points are how you notice discrepancies, uncover novel insights, and check that your frameworks are working in practice.
↑ comment by lynettebye · 2022-06-24T11:53:11.214Z · LW(p) · GW(p)
(Also, just saw the comment rules for the first time while copying these over - hope mindfulness mention doesn't break them too hard)