Do meta-memes and meta-antimemes exist? e.g. 'The map is not the territory' is also a map
post by M. Y. Zuo · 2022-08-07T01:17:43.916Z · LW · GW · 31 comments
The idea that the map is not the territory seems to be itself a map: it is expressed in words, words are symbols, and symbols by definition cannot be the territory itself.
So although on at least one level of abstraction 'The map is not the territory' is a useful insight, on another level it may be meaningless and/or undecidable.
This seems to lead to a credible argument, advanced elsewhere, that the meta-rational is a higher level of abstraction.
But this then seems to lead to infinite regress, as ''The map is not the territory' is also a map' is itself also a map, implying the possibility of a meta-meta-rationality, and so on unto infinity.
A corresponding possibility is that certain assertions or logical arguments may be invalid/undecidable on one level, but valid/decidable on another level, a meta-validity if you will.
There does not seem to be a way to avoid this problem as long as humans think in words, or any other system of symbols.
But this does seem to impinge on some aspect of human cognition that suggests there exist higher levels of abstraction beyond what is commonly perceived or discussed.
-
Is this proof of existence of meta-memes, or meta-antimemes?
It's difficult to say, since 'proof of existence' is itself a meme/antimeme/memeplex/antimemeplex. Do we then need a 'meta-proof of existence'?
And is not 'existence' itself some kind of meme, as all expressible ideas are?
If we rephrase the question to be as general as possible:
Is this proof and/or meta-proof of existence and/or meta-existence of meta-memes, or meta-antimemes?
This seems too esoteric to be a basis for further discussion. However, human language may not be equipped to pose such questions in a concise way.
-
Perhaps a better way of approaching this would be:
What are some differentiating features of meta-memes, or meta-antimemes?
How could we reliably detect their operation on our thoughts, if any?
31 comments, sorted by top scores.
comment by Olomana · 2022-08-07T06:23:47.090Z · LW(p) · GW(p)
What's the problem with infinite regress? It's turtles all the way up.
↑ comment by M. Y. Zuo · 2022-08-07T12:22:50.587Z · LW(p) · GW(p)
There may or may not be a vicious infinite regress, so I left it ambiguous as to whether that itself is a 'problem'. In any case, it seems extremely difficult to derive anything from a meta-meta-etc.-rationality. How exactly would it be applied?
↑ comment by Olomana · 2022-08-08T08:42:52.628Z · LW(p) · GW(p)
I am a fan of PKM systems (Personal Knowledge Management). Here the unit at the bottom level is the "note". I find that once I have enough notes, I start to see patterns, which I capture in notes about notes. I tag these notes as "meta". Now I have enough meta notes that I'm starting to see patterns... I'm not quite there yet, but I'm thinking about making a few "meta meta notes".
Whether we're talking about notes, memes or rationality, I think the usefulness of higher levels of abstraction is an emergent property. Standing at the base level, it's hard to anticipate how many levels of abstraction would eventually be useful, but standing at abstraction level n, one might have a better idea of whether to go to level n+1. I wouldn't set a limit in advance.
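For concreteness, a minimal sketch of the note hierarchy described above, in Python (the Note class, its tags and links fields, and the level computation are illustrative assumptions of mine, not any particular PKM's API):

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    tags: set = field(default_factory=set)
    links: list = field(default_factory=list)  # the notes this note is about

    def level(self) -> int:
        # A base note is level 1; a note about other notes sits one level
        # above the highest-level note it links to.
        return 1 + max((n.level() for n in self.links), default=0)

# Level 1: ordinary notes
a = Note("Spaced repetition helps retention")
b = Note("I retain more when I write summaries")

# Level 2: a meta-note capturing a pattern across base notes
meta = Note("Active engagement beats passive review", tags={"meta"}, links=[a, b])

# Level 3: a meta-meta-note capturing a pattern across meta-notes
meta_meta = Note("My 'meta' notes are mostly about engagement",
                 tags={"meta", "meta-meta"}, links=[meta])

assert meta.level() == 2 and meta_meta.level() == 3
```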
↑ comment by M. Y. Zuo · 2022-08-08T15:30:54.230Z · LW(p) · GW(p)
A meta-meta-note seems straightforward to construct because interactions with notes, meta-notes, meta-meta-notes, etc., are carried out in the same manner, i.e. linearly. But thoughts are different, since you cannot combine several dozen thoughts into a meta-meta-thought, unlike notes. (Maybe that would work in a hive mind?)
How would you think meta-meta-rationally? Can you give an example?
↑ comment by Olomana · 2022-08-12T04:47:00.411Z · LW(p) · GW(p)
I don't see your distinction between thoughts and notes. To me, a note is a thought that has been written down, or captured in the PKM.
No, I don't have an example of thinking meta-meta-rationally, and if I did, you'd just ask for an example of thinking meta-meta-meta-rationally. I do think that if I got to a place where I needed another level of abstraction, I'd "know it when I see it", and act accordingly, perhaps inventing new words to help manage what I was doing.
↑ comment by M. Y. Zuo · 2022-08-12T15:43:21.961Z · LW(p) · GW(p)
If you haven’t ever experienced it, how did you ascertain that meta-meta-thoughts exist?
Also, you don’t believe there’s a distinction between notes stored on paper or on a computer, and thoughts stored in human memory?
↑ comment by Olomana · 2022-08-13T06:49:04.574Z · LW(p) · GW(p)
I view my PKM as an extension of my brain. I transfer thoughts to the PKM, or use the PKM to bring thoughts back into working memory. You can make the distinction if you like, but I find it more useful to focus on the similarities.
As for meta-meta-thoughts, I'm content to let those emerge... or not. It could be that my unaided brain can only manage thoughts and meta-thoughts, but with a PKM boosted by AI, we could go up another level of abstraction.
↑ comment by M. Y. Zuo · 2022-08-13T14:04:18.838Z · LW(p) · GW(p)
I’m having trouble visualizing any ‘PKM’ system that has any recognizable similarity at all with the method of information storage employed by the human brain, though this is not fully understood either. Can you explain how you organize yours in more detail?
↑ comment by Olomana · 2022-08-14T06:43:07.614Z · LW(p) · GW(p)
We're using language to have a discussion. The fact that the Less Wrong data center stores our words in a way that is unlike our human brains doesn't prevent us from thinking together.
Similarly, using a PKM is like having an extended discussion with myself. The discussion is what matters, not the implementation details.
↑ comment by M. Y. Zuo · 2022-08-14T13:37:14.572Z · LW(p) · GW(p)
Isn’t that exactly what is in question here? Words on a screen via LessWrong, the ‘implementation details’, may or may not be what’s preventing us from having a cogent discussion on meta-memes…
If implementation details are irrelevant a priori, then there should be nothing stopping you from clearly stating why you believe so, one way or the other.
↑ comment by Olomana · 2022-08-16T06:00:30.879Z · LW(p) · GW(p)
I don't see this as a theoretical question that has a definite answer, one way or the other. I see it as a practical question, like how many levels of abstraction are useful in a particular situation. I'm inclined to keep my options open, and the idea of a theoretical infinite regress doesn't bother me.
I did come up with a simple example where 3 levels of abstraction are useful:
- Level 1: books
- Level 2: book reviews
- Level 3: articles about how to write book reviews
↑ comment by M. Y. Zuo · 2022-08-16T13:42:44.210Z · LW(p) · GW(p)
In your example, shouldn’t level 3 be reviews of book reviews?
EDIT: Or perhaps more generally it should be books about books about books?
↑ comment by Olomana · 2022-08-17T06:23:47.496Z · LW(p) · GW(p)
I was looking for real-life examples with clear, useful distinctions between levels.
The distinction between "books about books" and "books about books about books" seems less useful to me. However, if you want infinite levels of books, go for it. Again, I see this as a practical question rather than a theoretical one. What is useful to me may not be useful to you.
↑ comment by M. Y. Zuo · 2022-08-17T15:11:54.920Z · LW(p) · GW(p)
If a clear delineation between 'books about books' and 'books about books about books' does not exist, how can we be so sure of the same between meta-thoughts and meta-meta-thoughts, which are far more abstract and intangible? (Or a meta-meta-rationality for that matter?)
But before that, I can’t think of even a single concrete example of a meta-meta-book, and if you cannot either, then that seems like a promising avenue to investigate. If none truly exists, we are unconstrained in imagining what it may look like.
Replies from: Olomana↑ comment by Olomana · 2022-08-18T06:13:28.618Z · LW(p) · GW(p)
I copied our discussion into my PKM, and I'm wondering how to tag it... it's certainly meta, but we're discussing multiple levels of abstraction. We're not at level N discussing level N-1, we're looking at the hierarchy of levels from outside the hierarchy. Outside, not necessarily above. This reinforces my notion that structure should emerge from content, as opposed to trying to fit new content into a pre-existing structure.
comment by Valentine · 2022-08-12T17:21:14.504Z · LW(p) · GW(p)
This is a feature of maps. Maps can't model the map/territory correspondence unless they also create a simulated territory. Then you can ask whether the modeled map/territory correspondence is accurate, which creates another layer of modeling, ad infinitum.
This isn't a real problem though. It's more a display of the limitations of maps in the maps' terms.
The standard way around this conundrum is to fold self-reference into your map, instead of just recursion. Then going "up" a layer of abstraction lands you right where you started.
…which is in fact part of what you're doing by noticing this glitch.
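A rough rendering of the two options in code (my illustration, treating a map as a plain dictionary; this is a sketch of the idea, not Valentine's own formulation):

```python
# Option 1: recursion without ground. Modeling the map/territory
# correspondence requires a simulated territory, whose own map needs a
# further simulated territory, and so on: the tower never bottoms out.
def model_correspondence(depth: int):
    if depth == 0:
        return "...?"  # no real territory ever appears inside the model
    return {"map_of": model_correspondence(depth - 1)}

# Option 2: self-reference folded into the map. The map contains a link
# to itself, so going "up" a layer of abstraction lands you right back
# where you started.
self_map = {}
self_map["map_of"] = self_map  # a cycle instead of an infinite tower

assert self_map["map_of"] is self_map
assert self_map["map_of"]["map_of"] is self_map
```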
But the problem vanishes when you stop insisting that the map include everything. If I point at a nearby road, and then point at a roadmap and say "That road over there is this line", there's no problem. You can follow the map/territory correspondence just fine. That's what makes the roadmap potentially useful in the first place.
It's just an issue when you try to encapsulate that process in a model. How are you modeling it? Ah, oops, groundless recursion.
Which is to say, maps aren't the basis of thinking. They're extensions of thinking.
Sadly, language is based on maps. So describing this clearly can be a pain.
Hence "The Tao that can be said is not the true Tao."
↑ comment by M. Y. Zuo · 2022-08-13T02:26:58.462Z · LW(p) · GW(p)
How does one learn to recognize their first self-referential map? (Or if it’s instinctual, how did the first of our progenitors recognize the very first self-referential map?)
EDIT: I’m not even sure how it can be possible to determine if one’s maps are sufficiently self-referential. What do we compare it to?
comment by Richard_Kennaway · 2022-08-10T18:41:50.178Z · LW(p) · GW(p)
The ladder of abstraction [LW · GW] can be continued indefinitely. This is not a problem.
comment by Anon User (anon-user) · 2022-08-07T21:54:17.130Z · LW(p) · GW(p)
Once you have a level capable of self-reference / introspection, there is no need for more levels above it. A single layer of introspective rationality capable of expressing both regular maps and higher-order "map is not a territory" relationships could be sufficient.
↑ comment by M. Y. Zuo · 2022-08-09T01:33:54.450Z · LW(p) · GW(p)
How did you ascertain there is no need for higher levels beforehand?
↑ comment by Causal Chain (causal-chain) · 2022-08-10T22:14:06.383Z · LW(p) · GW(p)
The phrase "the map is not the territory" is not just a possibly conceivable map, it's part of my map.
Thinking in terms of programming, it's vaguely like I have a class instance s where one of the elements p is a pointer to the instance itself. So I can write *(s.p) == s. Or go further and write *(*(s.p).p) == s.
I can go as far as I want with only the tools offered to me by my current map.
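A runnable rendering of that pointer example (a sketch in Python; the class name S follows the comment's notation and is otherwise an assumption):

```python
# A self-referential instance: element p points back at the instance itself.
class S:
    def __init__(self):
        self.p = self

s = S()
assert s.p is s      # the analogue of *(s.p) == s
assert s.p.p is s    # the analogue of *(*(s.p).p) == s, one level further
```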
↑ comment by Anon User (anon-user) · 2022-09-11T21:04:43.524Z · LW(p) · GW(p)
How did you ascertain there is no need for higher levels beforehand?
The meta-level is pure logical reasoning over universal truths. Since it does not depend in any way on the kinds of maps and territories you encounter, it can be established beforehand. If the meta-level is sufficiently expressive (with appropriate reflective capacity), then you are all set.
Note that the OP does not say anything whatsoever about specific maps and territories, but rather reasons in the generic realm of universal truths. Think of a logical theory powerful enough to capture the OP argument and reason about whether the OP argument is true, and also able to express and reason about my argument about it, etc. That's your ultimate meta-level. The insight is that when you have a [countably] infinite tower of expanding logical theories, you can take their union as your ultimate logical theory. Theoretically, this never stops (you can then take another tower over the union, repeat that process itself infinitely many times, etc., and at any step you can take a union of those), but you quickly run out of things that you'd ever care about in practice.
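In symbols, a sketch of that tower-and-union construction (notation assumed here, with T_n the n-th theory in the tower):

```latex
% tower of expanding theories, and its union as the next theory:
T_0 \subseteq T_1 \subseteq T_2 \subseteq \cdots
\qquad T_\omega = \bigcup_{n<\omega} T_n
% a new tower over the union, whose union is taken in turn:
T_\omega \subseteq T_{\omega+1} \subseteq T_{\omega+2} \subseteq \cdots
\qquad T_{\omega\cdot 2} = \bigcup_{n<\omega} T_{\omega+n}
\qquad \text{and so on through the ordinals.}
```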
↑ comment by M. Y. Zuo · 2022-09-11T22:16:20.003Z · LW(p) · GW(p)
‘Established’ by whom?
(If ‘established’ by yourself, haven’t you just replaced an infinite regress with circularity?)
↑ comment by Anon User (anon-user) · 2022-09-11T22:46:53.567Z · LW(p) · GW(p)
If ‘established’ by yourself, haven’t you just replaced an infinite regress with circularity?
Kind of, and it's unavoidable - by definition, you cannot justify your own axioms, and you cannot reason without some axioms. See https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection [LW · GW] and https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom [LW · GW] for some meta-justification of why it is an OK thing to do.
↑ comment by M. Y. Zuo · 2022-09-11T22:54:18.769Z · LW(p) · GW(p)
Axioms necessarily must be consensual, i.e. shared by 2 or more interlocutors. If everyone invents their own personal axioms, they cease to be axioms and are just personal opinions.
What are the axioms that you believe are commonly shared, that lead to this conclusion?
↑ comment by Anon User (anon-user) · 2022-09-12T19:33:22.892Z · LW(p) · GW(p)
Axioms necessarily must be consensual, i.e. shared by 2 or more interlocutors
Rarely happens in practice, at least not without a lot of work for people to sync up their intuitions and agree on a common subset.
What are the axioms that you believe are commonly shared
Not sure it exists. But see the links I shared to Eliezer's posts expressing the axioms he is /hoping/ would be commonly shared.
↑ comment by Anon User (anon-user) · 2022-09-05T00:30:03.381Z · LW(p) · GW(p)
"Beforehand" in which sense? Any sufficiently powerful non-inconsistent logical theory is necessary incomplete - you need to know what kinds of logical inferences you care to be making to pick the appropriate logical theory.