Problems Involving Abstraction?
post by johnswentworth · 2020-10-20T16:49:39.618Z · LW · GW · 2 comments
This is a question post.
Contents
Answers: Zack_M_Davis (11) · Adele Lopez (5) · Adam Shimi (5) · Charlie Steiner (4)
2 comments
I'm working on a post of examples for how to formulate problems involving abstraction (using the abstraction formulation here [? · GW]). This isn't going to solve problems, just show how to set them up mathematically.
To that end, I'd like to hear particular problems people are interested in which intuitively seem to involve abstraction. Examples of the sort of thing I have in mind:
- Humans generally seem to care about abstract objects, not individual atoms, so it seems like abstraction should be relevant to impact measures [? · GW]. How would we formalize that?
- Humans can figure out what a new word means with ridiculously few examples, suggesting that we already have some "latent space" with a simple representation of the-class-of-things-corresponding-to-the-new-word. That sounds like it has something to do with abstraction. What's going on there?
- The sorts of "maps" we use in the real world (street maps, for instance) are lossy, abstract representations of the territory (i.e. the streets). How can we usefully formulate map-territory correspondence for such abstract maps? Is it possible for a system to use its abstract map to recognize flaws in its own abstract-map-making process [LW · GW]?
There is a high chance that your request (or at least something very similar to it) will be incorporated in the post. So, what examples would people like to see?
Answers
Answer by Zack_M_Davis
How do socially-constructed concepts work?!
Negative example: trees [LW · GW]. Trees exist, and trees are not socially constructed. An alien AI observing Earth from behind a Cartesian veil would be able to compress its observations [LW · GW] by formulating a concept that pretty closely matches what we would call tree, because the atom-configurations we call "trees" robustly have a lot of things in common [LW · GW]: once the AI has identified something as a "tree" by observing its trunk and leaves, the AI can make a lot of correct predictions about the "tree" having roots, this-and-such cellular walls, &c. without observing them directly, but rather by inference from knowledge about "trees" in general.
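To make that inference pattern concrete, here is a minimal toy sketch: treat "tree" as a latent class with conditionally independent features, so that observing a trunk and leaves raises the posterior on the class, which in turn predicts the unobserved roots. All feature names and probabilities below are invented for illustration.

```python
# Toy illustration of the "tree" abstraction as a latent class:
# observing a couple of features lets us infer the class, and the class
# lets us predict features we never observed. All numbers are made up.

PRIOR = {"tree": 0.1, "not_tree": 0.9}

# P(feature present | class), assuming conditional independence given the class.
LIKELIHOOD = {
    "tree":     {"trunk": 0.95, "leaves": 0.90, "roots": 0.99, "cell_walls": 0.99},
    "not_tree": {"trunk": 0.05, "leaves": 0.20, "roots": 0.30, "cell_walls": 0.50},
}

def posterior(observed):
    """P(class | observed features), where observed maps feature -> bool."""
    scores = {}
    for c, prior in PRIOR.items():
        p = prior
        for feat, present in observed.items():
            p_feat = LIKELIHOOD[c][feat]
            p *= p_feat if present else (1 - p_feat)
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def predict(observed, target):
    """P(target feature present | observed features), marginalizing over the class."""
    post = posterior(observed)
    return sum(post[c] * LIKELIHOOD[c][target] for c in post)

obs = {"trunk": True, "leaves": True}
print(posterior(obs))           # heavily favors "tree"
print(predict(obs, "roots"))    # high, even though roots were never observed
```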
Positive example: Christmas. Christmas exists. An alien AI observing Earth from behind a Cartesian veil would be able to make better predictions about human behavior in many places around Epoch time 62467200 ± 31557600·n (for integer n) by formulating the concept of "Christmas". However, Christmas is socially constructed: if humans didn't have a concept of "Christmas", there would be no Christmas (the AI-trick for improving predictions using the idea of "Christmas" would stop working), but if humans didn't have a concept of trees, there would still be trees (the AI-trick for improving predictions using the idea of "trees" would still work).
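The arithmetic in that example checks out: 62467200 seconds after the Unix epoch is midnight UTC on 1971-12-25, and 31557600 seconds is one Julian year (365.25 days), so the ± n·period terms land near Christmas each year, with a slow drift. A quick sanity check, as a sketch:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
base, period = 62_467_200, 31_557_600          # seconds

print(epoch + timedelta(seconds=base))          # 1971-12-25 00:00:00+00:00
print(period / 86_400)                          # 365.25 days: one Julian year
for n in range(3):                              # the +/- n*period terms drift slowly off Dec 25
    print(epoch + timedelta(seconds=base + n * period))
```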
Semi-positive example: adulthood. Adulthood exists. There's a Sorites situation on exactly how old a human has to be to be an "adult", and different human cultures make different choices about where to draw that line. But this isn't just a boring Sorites non-problem, where different agents might use different communication signals [LW · GW] without disagreeing about the underlying reality (like when I say it's "hot" and you say it's "not hot, just warm" and our friend Hannelore says it's "heiß", but we all agree that it's exactly 303.6 K): an alien AI observing Earth from behind a Cartesian veil can make better predictions about whether I'll be allowed to sign contracts by reasoning about whether my Society considers me an "adult", not by directly using the simple measurement test [LW · GW] that Society usually uses to make that determination, with exceptions like minor emancipation.
My work-in-progress take: an agent outside Society observing from behind a Cartesian veil, who only needs to predict, but never to intervene, can treat socially-constructed concepts the same as any other: "Christmas" is just a pattern of behavior in some humans, just like "trees" are a pattern of organic matter. What makes social construction special is that it's a case where a "map" is exerting control over the "territory": whether I'm considered an "adult" isn't just putting a semi-arbitrary line on the spectrum of how humans differ by age (although it's also that); which Schelling point the line settles on is used as an input into decisions—therefore, predictions that depend on those decisions also need to consider the line, a self-fulfilling prophecy. Alarmingly, this can give agents an incentive to fight over shared maps [LW · GW]!
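A toy way to see the "map controls territory" point: the prediction about contract-signing depends on where Society draws the line, not just on the underlying age. The threshold values below are made up for illustration.

```python
# Toy illustration: the prediction depends on where Society draws the line,
# not only on the underlying physical variable (age). Thresholds are invented.

def allowed_to_sign_contracts(age_years, societys_adulthood_line):
    # Society's decision procedure consults the shared map ("adult"),
    # so any predictor must consult that map too.
    return age_years >= societys_adulthood_line

age = 19
print(allowed_to_sign_contracts(age, societys_adulthood_line=18))  # True
print(allowed_to_sign_contracts(age, societys_adulthood_line=21))  # False
# Same territory-level fact (age = 19), different predictions, because the
# shared map is itself an input to the decisions being predicted.
```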
↑ comment by Vladimir_Nesov · 2020-10-21T11:27:07.852Z · LW(p) · GW(p)
Trees exist, and trees are not socially constructed.
A lot of the problems with socially constructed concepts stem from their malleability: culture changes them all the time. But if culture had the power and technology to similarly change and create physical things on both sides of the border of (the extension of) the concept of trees, that concept could have similar problems, especially if people cared to fight over it.
So maybe the concept of chairs is a better example? Are chairs socially constructed? What about topological spaces? I'm guessing the presence of a fight over a concept is more central to it being "socially constructed" in a problematic way than its existence primarily in minds. When there is a fight over a concept, existing outside of minds can help it persevere, but only to the extent that the capability to physically change it is limited.
↑ comment by johnswentworth · 2020-10-21T16:48:50.934Z · LW(p) · GW(p)
Given how much the comments on this one diverge, it sounds like there's a lot of confusion around it (some of which is confusion around how words work more generally). Guess I'd better talk about it.
I will be focused more on the abstraction aspects than the game-theoretic aspects, though.
↑ comment by Dagon · 2020-10-21T15:52:41.855Z · LW(p) · GW(p)
Trees exist. The category "tree", as opposed to "shrub" or "plant" or "region of space-time", is a modeling choice - a question of the tradeoff between compression efficiency and precision in the domain you're predicting.
Likewise "Christmas" and "Adult". If you get a better understanding of what those categories mean to individuals or regions, you can better predict how they'll behave.
Which leads to my definition of "socially constructed" - these are categories or definitions where it's necessary (or at least very convenient) to have a shared understanding of the heuristics used in generating and executing a mental model for communication and behavior. Almost all language fits into this framework. Everything worth talking about (perhaps everything possible to talk about) is socially constructed.
Basically, IMO, this is just standard abstraction - we call it "socially constructed" when a given abstraction has some social pressure or mechanism to be adopted at scale across a group.
Answer by Adele Lopez
Entropy and temperature inherently require the abstraction of macrostates from microstates. I recommend reading this if you haven't seen it before (or just want an unconfused explanation): http://www.av8n.com/physics/thermo/entropy.html
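As a toy version of that abstraction (much smaller than the treatment in the linked notes): coarse-grain microstates into macrostates by throwing away which particular degrees of freedom are excited, then compute the Boltzmann entropy S = ln Ω of each macrostate (with k = 1). The coin system below is just for illustration.

```python
from collections import Counter
from itertools import product
from math import log

# Microstates: every way of assigning heads/tails to 4 distinguishable coins.
microstates = list(product("HT", repeat=4))

# The abstraction: map each microstate to a macrostate (total number of heads),
# throwing away which particular coins are heads.
def macro(state):
    return state.count("H")

# Boltzmann entropy of each macrostate, S = ln(Omega) with k = 1.
# (Temperature would then come from how S changes with energy across macrostates.)
omega = Counter(macro(s) for s in microstates)
for m, count in sorted(omega.items()):
    print(f"macrostate {m} heads: Omega = {count}, S = {log(count):.3f}")
```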
↑ comment by johnswentworth · 2020-10-20T23:57:09.352Z · LW(p) · GW(p)
At some point I need to write a post on purely Bayesian statistical mechanics, in a general enough form that it's not tied to the specifics of physics.
I can probably write a not-too-long explanation of how abstraction works in this context. I'll see what I can do.
Answer by Adam Shimi
One we already talked about together is the problem of defining the locality of goals [AF · GW]. From an abstraction point of view, local goals (goals about inputs) and non-local goals (goals about properties of the world) are both abstractions: they throw away information. But with completely different results!
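A minimal sketch of the distinction, with an invented toy world state: both goals are functions that throw away information about the world, but the local goal keeps only the input channel while the non-local goal keeps only a property of the territory.

```python
from dataclasses import dataclass
from typing import List

# Invented toy world: the full state contains both what the agent's camera sees
# and facts the agent never directly observes.
@dataclass
class WorldState:
    camera_pixels: List[int]   # the agent's input channel
    trees_standing: int        # a property of the territory

# A "local" goal: an abstraction that keeps only the input channel.
def local_goal(w: WorldState) -> float:
    # Throws away trees_standing entirely; only the observations matter.
    return sum(w.camera_pixels) / len(w.camera_pixels)

# A "non-local" goal: an abstraction that keeps only a property of the world,
# regardless of what the input channel shows.
def nonlocal_goal(w: WorldState) -> float:
    return float(w.trees_standing)

# Both goals discard information, but they discard different information:
# a world where the camera is fed a fake forest scores well on the local goal
# and poorly on the non-local one.
fake_forest = WorldState(camera_pixels=[255, 255, 255], trees_standing=0)
print(local_goal(fake_forest), nonlocal_goal(fake_forest))
```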
↑ comment by johnswentworth · 2020-10-20T20:42:06.139Z · LW(p) · GW(p)
This plays well with impact measures, too. I can definitely include it.
Answer by Charlie Steiner
When do we learn abstractions bottom-up (like identifying regularities in sense data) versus top-down (like using a controlled approximation to a theory that you can prove will converge to the right answer)? What are the similarities between what you get out at the end?
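A rough sketch of the contrast, with illustrative stand-ins for each direction: bottom-up as fitting a summary to raw samples (a crude two-cluster k-means on fake sense data), and top-down as approximating a known theory with a provable error bound (a truncated Taylor series for exp with its Lagrange remainder). Both are toys, not proposals for how abstraction learning actually works.

```python
import math
import random

# Bottom-up: learn an abstraction from raw data.
# Summarize noisy 1-D "sense data" by the means of two clusters (crude k-means).
random.seed(0)
data = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(5, 1) for _ in range(100)]
centers = [min(data), max(data)]
for _ in range(10):
    groups = [[], []]
    for x in data:
        groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
    centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
print("bottom-up cluster centers:", sorted(centers))   # roughly [0, 5]

# Top-down: start from a known theory and approximate it with a bounded error.
# exp(x) truncated after n terms; the Lagrange remainder bounds the error for 0 <= x <= 1.
def exp_truncated(x, n):
    return sum(x**k / math.factorial(k) for k in range(n))

x, n = 1.0, 6
approx = exp_truncated(x, n)
error_bound = math.e * x**n / math.factorial(n)
print("top-down approx:", approx, "true:", math.exp(x), "error bound:", error_bound)
```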
↑ comment by johnswentworth · 2020-10-22T05:05:02.463Z · LW(p) · GW(p)
Abstraction learning in general is an area where I'm not yet fully satisfied with my own understanding, but I'll see if I can set up anything interesting around this.
2 comments
comment by Adele Lopez (adele-lopez-1) · 2020-10-20T23:05:08.068Z · LW(p) · GW(p)
Not quite sure how specifically this connects, but I think you would appreciate seeing it.
As a good example of the kind of gains we can get from abstraction, see this exposition of the HashLife algorithm, used to (perfectly) simulate Conway's Game of Life at insane scales.
From the linked exposition:
Earlier I mentioned I would run some nontrivial patterns for trillions of generations. Even just counting to a trillion takes a fair amount of time for a modern CPU; yet HashLife can run the breeder to one trillion generations, and print its resulting population of 1,302,083,334,180,208,337,404 in less than a second.
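The full HashLife recursion is more than fits in a comment, but a minimal sketch of its base-case abstraction is easy to state: the 2x2 center of a 4x4 block one generation later depends only on that block, so identical blocks anywhere in space or time can share one cached answer. The same move applies recursively at every scale, which is where the trillion-generation speedups come from. The toy below illustrates only the base case and is not the linked implementation.

```python
from functools import lru_cache

def life_rule(alive, neighbors):
    # Conway's rule: a cell is alive next step iff it has 3 live neighbors,
    # or it is alive now and has 2.
    return neighbors == 3 or (alive and neighbors == 2)

@lru_cache(maxsize=None)
def center_after_one_step(block):
    """block: 4x4 tuple of tuples of 0/1; returns its 2x2 center one step later.

    The center's next state depends only on this 4x4 block, so identical
    blocks anywhere on the grid (or at any time) reuse one cached answer:
    the abstraction HashLife exploits, recursively, at every scale.
    """
    def step(r, c):
        n = sum(block[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
        return int(life_rule(block[r][c], n))
    return ((step(1, 1), step(1, 2)),
            (step(2, 1), step(2, 2)))

# A horizontal blinker; its center flips toward the vertical phase.
blinker = ((0, 0, 0, 0),
           (1, 1, 1, 0),
           (0, 0, 0, 0),
           (0, 0, 0, 0))
print(center_after_one_step(blinker))            # ((1, 0), (1, 0))
print(center_after_one_step(blinker))            # same block: served from the cache
print(center_after_one_step.cache_info().hits)   # 1
```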
↑ comment by johnswentworth · 2020-10-20T23:37:08.019Z · LW(p) · GW(p)
Ooh, good one. If I remember the trick to the algorithm correctly, it can indeed be cast as abstraction.