Modelling Model Comparisons

post by elriggs · 2019-04-04
Define a model as a set of objects and their relationships. For example, in a model of music, "notes" would be the objects and a possible relationship would be "harmony". [Technically, the objects would be (frequency, volume, time) tuples, and from there you could define the relationships "harmony", "tempo", "crescendo", etc.]
Using this definition, there are two different ways to compare models: by similar relationships, and by (Object, Relationship) = (Object).
1. Similar Relationships [Type 1 comparison]
Using the relationship "repetition": there is repetition in musical phrases (Fur Elise), in poetic phrases (Annabel Lee), in song choruses, in movie themes, etc. Even though these four examples contain different objects, we're able to find the shared relationship and compare them.
I can think of two uses for this type of comparison. The first is in metaphors. The second is in generalization. Elaborating on the latter: I find a piece of music annoying if it's too repetitive. The same is true for poems, songs, and movies; however, I very much enjoy a song if it strikes a good balance between repetition and novelty. Learning to strike that balance when writing music has generalized to doing the same in writing lyrics and dance choreography.
The same generalization can happen in non-art fields as well, such as realizing that two different functions are both convex or that two different problems can be solved recursively, and so on.
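One way to see why a Type 1 comparison generalizes: a relationship like "repetition" can be written as a function that never looks at what the objects *are*, only at whether they recur. A minimal sketch (the function name `repetition_ratio` and the example sequences are my own illustrative choices, not from the post):

```python
from collections import Counter

def repetition_ratio(objects):
    """Fraction of items in a sequence that repeat something seen earlier.

    Works on any sequence of hashable objects -- notes, words, scenes --
    because it only uses the relationship (repetition), not the objects.
    """
    counts = Counter(objects)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(objects) if objects else 0.0

# The same relationship, measured over two different object types:
melody = ["E", "D#", "E", "D#", "E", "B", "D", "C", "A"]  # notes
lyric = "it was many and many a year ago".split()         # words
```

Because the function is object-agnostic, anything learned about tuning it (e.g. what ratio feels "too repetitive") transfers across domains for free, which is the generalization described above.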
2. (Object, Relationship) = (Object) [Type 2 comparison]
A good example to start with: (quarks/electrons, bound together) = (atom). But this is more than just connecting quarks/electrons to atoms, or atoms to objects in language. You can connect models of various levels together. With the model of language, we can connect to the low-level object "quarks" and the high-level object "diamond". This has helped my understanding of how Multi-level Models relate to transparency: if an AI can find the correct Type 2 comparison between what it's doing and our human language, then transparency is solved; however, human language is very complex.
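A Type 2 comparison can be sketched as a lookup table keyed by (set of lower-level objects, relationship), whose value is the higher-level object that pair names. A minimal sketch, assuming toy entries (the specific entries are illustrative, not careful physics):

```python
# Each entry reads: (lower-level objects, relationship) = higher-level object.
TYPE2 = {
    (frozenset({"quarks", "electrons"}), "bound together"): "atom",
    (frozenset({"carbon atoms"}), "arranged in a lattice"): "diamond",
}

def higher_level(objects, relationship):
    """Return the high-level object named by (objects, relationship), if known."""
    return TYPE2.get((frozenset(objects), relationship))
```

Chaining such entries is what connects models of various levels: the output of one lookup ("atom") can appear among the input objects of another ("carbon atoms" in the diamond entry).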
Human Language and Communicating Models
Let's say you're talking with Alice. Two failures in communicating models are when (1) you're not discussing the same objects (Alice's definition of "sound" vs. your definition) or (2) you don't know how Alice believes those objects relate, or vice versa.
One might naively say that the model of language is simply "words" and how they relate; however, it's more like "words, your models that relate to those words, and your model of Alice's models that relate to those words". The two failures above arise from thinking in the naive model of language, and can be alleviated by the more complex model. Two helpful thoughts when talking to Alice are:
1. "Are we discussing the same objects?"
2. "How does Alice think these objects relate?"
and of course asking Alice the relevant questions to figure that out.
Future Work

1. Type 1 comparisons:
- What are the very useful relationships used to solve past problems, and could they be used to solve current problems? (e.g. using the same math technique to solve an entire class of math problems)
- Is there a way to find those relationships systematically?
2. Type 2 comparisons with human language/transparency
3. Creating lots of examples of Type 1 and Type 2 comparisons to un-confuse myself (I predict that what I wrote in this post is confused but at least points in the right direction)
4. A better word for Type 1, 2 comparisons