What is ontology?
post by Adam Zerner (adamzerner) · 2023-08-02T00:54:14.432Z · LW · GW · 4 comments
This is a question post.
Contents
Answers: ArtyomKazak (19) · Zach Stein-Perlman (7) · shminux (5) · rhollerith_dot_com (2) · YimbyGeorge (1)
4 comments
Over the years I've picked up on more and more phrases that people on LessWrong use. However, "ontology" is one of them that I can't seem to figure out. It seems super abstract and doesn't seem to have a reference post.
So then, please ELI5: what is ontology?
Answers
I'll give you an example of an ontology in a different field (linguistics) and maybe it will help.
This is WordNet, an ontology of the English language. If you type "book" and keep clicking "S:" and then "direct hypernym", you will learn that book's place in the hierarchy is as follows:
... > object > whole/unit > artifact > creation > product > work > publication > book
So if I had to understand one of the LessWrong (-adjacent?) posts mentioning an "ontology", I would forget about philosophy and just think of a giant tree of words. Because I like concrete examples.
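(If a runnable version of that lookup helps, here is a minimal sketch using Python's NLTK interface to WordNet. It assumes the wordnet corpus has been downloaded, and the exact chain depends on which sense of "book" you pick.)

```python
# A minimal "tree of words" lookup via NLTK's WordNet interface.
# Assumes: pip install nltk, then nltk.download('wordnet') has been run.
from nltk.corpus import wordnet as wn

book = wn.synsets("book")[0]          # first sense of "book" (the publication)
path = book.hypernym_paths()[0]       # one chain of "is a kind of" links, root to leaf
print(" > ".join(s.name().split(".")[0] for s in path))
# prints something like: entity > physical_entity > object > whole > artifact > ... > book
```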
Now let's go and look at one of those posts.
https://arbital.com/p/ontology_identification/#h-5c-2.1, "Ontology identification problem":
Consider chimpanzees. One way of viewing questions like "Is a chimpanzee truly a person?" - meaning, not, "How do we arbitrarily define the syllables per-son?" but "Should we care a lot about chimpanzees?" - is that they're about how to apply the 'person' category in our desires to things that are neither typical people nor typical nonpeople. We can see this as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them.
My "tree of words" understanding: we classify things into "human minds" or "not human minds", but now that we know more about possible minds, we don't want to use this classification anymore. Boom, we have more concepts now and the borders don't even match. We have a different ontology.
From the same post:
In this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds.
My understanding: You learned more about carbon and now you have new concepts in your ontology: carbon-12 and carbon-14. You want to know if a "diamond" should be "any carbon" or should be refined to "only carbon-12".
Let's take a few more posts:
https://www.lesswrong.com/posts/LeXhzj7msWLfgDefo/science-informed-normativity [LW · GW]
The standard answer is that we say “you lose” - we explain how we’ll be able to exploit them (e.g. via Dutch books). Even when abstract “irrationality” is not compelling, “losing” often is. Again, that’s particularly true under ontology improvement. Suppose an agent says “well, I just won’t take bets from Dutch bookies”. But then, once they’ve improved their ontology enough to see that all decisions under uncertainty are a type of bet, they can’t do that - or at least they need to be much more unreasonable to do so.
My understanding: You thought only [particular things] were bets so you said "I won't take bets". I convinced you that all decisions are bets. This is a change in ontology. Maybe you want to reevaluate your statement about bets now.
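(To make the "losing" concrete, here is a toy Dutch-book calculation with made-up numbers: an agent whose credences in "rain" and "no rain" sum to more than 1 can be sold both bets at its own prices and loses in every possible world.)

```python
# A toy Dutch book: the agent assigns probability 0.6 to "rain" AND 0.6 to "no rain"
# (incoherent: they sum to 1.2). A bookie sells both bets at the agent's own prices.
p_rain, p_no_rain = 0.6, 0.6

# Each bet pays $1 if it wins; a bet with probability p is "fairly" priced at $p.
cost = p_rain + p_no_rain            # the agent pays $1.20 total

for world in ["rain", "no rain"]:
    payout = 1.0                     # exactly one of the two bets pays out
    print(f"{world}: net = {payout - cost:+.2f}")
# In both worlds the agent ends up at -$0.20: a guaranteed loss.
```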
https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
Ontology identification is the problem of mapping between an AI’s model of the world and a human’s model, in order to translate human goals (defined in terms of the human’s model) into usable goals (defined in terms of the AI’s model).
My understanding: AI and humans have different sets of categories. AI can't understand what you want it to do if your categories are different. Like, maybe you have "creative work" in your ontology, and this subcategory belongs to the category of "creations by human-like minds". You tell the AI that you want to maximize the number of creative works and it starts planting trees. "Tree is not a creative work" is not an objective fact about a tree; it's a property of your ontology; sorry. (Trees are pretty cool.)
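(A toy sketch of that "translate the human's goal into the AI's categories" idea; every concept name below is made up for illustration, and real ontology identification is of course nothing this easy.)

```python
# The human's goal is stated over human concepts; the AI only plans over its own
# concepts. Ontology identification is (roughly) finding the right mapping between them.
human_goal = "maximize: creative_work"

# A hypothetical mapping from human concepts to sets of AI-model concepts.
human_to_ai = {
    "creative_work": {"painting", "novel", "symphony"},   # what the human means
    "tree": {"tree"},                                     # not a creative work, to the human
}

ai_goal = human_to_ai[human_goal.split(": ")[1]]
print("AI should maximize things in:", ai_goal)
# If the mapping is wrong (e.g. "creative_work" gets mapped onto anything produced by a
# mind-like process, planted trees included), the AI optimizes the wrong thing.
```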
↑ comment by ArtyomKazak · 2023-08-02T03:40:29.387Z · LW(p) · GW(p)
Also, to answer your question about "probability" in a sister chain: yes, "probability" can be in someone's ontology. Things don't have to "exist" to be in an ontology.
Here's another real-world example:
- You are playing a game. Maybe you'll get a heart, maybe you won't. The concept of probability exists for you.
- This person — https://youtu.be/ilGri-rJ-HE?t=364 — is creating a tool-assisted speedrun for the same game. On frame 4582 they'll get a heart, on frame 4581 they won't, so they purposefully waste a frame to get a heart (for instance). "Probability" is not a thing that exists for them — for them the universe of the game is fully deterministic.
The person's ontology is "right" and your ontology is wrong. On the other hand, your ontology is useful for you when playing the game, and their ontology wouldn't be. You don't even need to have different knowledge about the game; you both know the game is deterministic, and still it changes nothing.
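(A toy version of the two ontologies, assuming a game whose "random" drops are really a fixed function of the frame counter; the drop rule below is invented, not the actual game's.)

```python
# From the casual player's ontology, a heart drop is "probabilistic".
# From the TASer's ontology, it is a deterministic function of the frame number.
def heart_drops(frame: int) -> bool:
    # Hypothetical drop rule: the game's "RNG" is just a fixed function of the frame.
    return (frame * 2654435761) % 256 < 64   # looks like a 25% chance from the outside

# The casual player models this as a probability per attempt.
# The TASer simply checks specific frames and chooses which one to act on:
for frame in (4581, 4582):
    print(frame, heart_drops(frame))
```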
Actually, let's do a 2x2 matrix for all combinations of, let's say, "probability" and "luck" in one's personal ontology:
- Person C: probability and luck both exist. Probability is partly influenced/swayed by luck.
- Person D: probability exists, luck doesn't. ("You" are person D here.)
- Person E: luck exists, probability doesn't. If you didn't get a heart, you are unlucky today for whatever reason. If you did get a heart, well, you could be even unluckier but you aren't. An incredibly lucky person could well get a hundred hearts in a row.
- Person F: probability and luck both don't exist and our lives are as deterministic as the game; using the concepts of probability or luck even internally, as "fake concepts", is useless because actually everything is useless. (Some kind of fatalism.)
//
Now imagine somebody who replies to this comment saying "you could rephrase this in terms of beliefs". This would be an example of a person saying essentially "hey, you should've used [my preferred ontology] instead of yours", one where you use the concept of "belief" instead of "ontology". Which is fine!
↑ comment by ArtyomKazak · 2023-08-02T03:15:56.240Z · LW(p) · GW(p)
I'll also give you two examples of using ontologies — as in "collections of things and relationships between things" — for real-world tasks that are much dumber than AI.
- ABBYY attempted to create a giant ontology of all concepts, then develop parsers from natural languages into "meaning trees" and renderers from meaning trees into natural languages. The project was called "Compreno". If it worked, it would've given them a "perfect" translating tool from any supported language into any supported language without having to handle each language pair separately. To my knowledge, they kept trying for 20+ years and it has probably died: I google Compreno every few years and there's still nothing.
- Let's say you are Nestle and you want to sell cereal in 100 countries. You also want to be able to say "organic" on your packaging. For each country, you need to determine if your cereal would be considered "organic". This also means that you need to know for all of your cereal's ingredients whether they are "organic" by each country's definition (and possibly for sub-ingredients, etc). And there are 50 other things that you also have to know about your ingredients — because of food safety regulations, etc. I don't have first-hand knowledge of this, but I was once approached by a client who wanted to develop tools to help Nestle-like companies solve such problems; and they told me that right now their tool of choice was custom-built ontologies in Protege, with relationships like is-a, instance-of, etc.
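(I don't know what their actual Protege models look like, but here is a minimal sketch of the general shape: is-a links plus per-country rules. The ingredients and the "organic" definitions are made up, and real organic certification doesn't work off a category tree like this; the point is only the ontology mechanics.)

```python
# A tiny hand-rolled ontology: "is-a" links between concepts, plus a rule that
# walks up the hierarchy. (Real systems would use OWL/Protege, not a dict.)
IS_A = {
    "oat_flakes": "cereal_grain",
    "cereal_grain": "plant_ingredient",
    "cane_sugar": "sweetener",
    "sweetener": "plant_ingredient",
}

# Hypothetical per-country definitions of which categories count as "organic".
ORGANIC_CATEGORIES = {
    "CountryA": {"plant_ingredient"},   # broad definition
    "CountryB": {"cereal_grain"},       # narrower definition
}

def ancestors(concept: str):
    """All categories a concept belongs to, following is-a links upward."""
    while concept in IS_A:
        concept = IS_A[concept]
        yield concept

def counts_as_organic(ingredient: str, country: str) -> bool:
    categories = set(ancestors(ingredient)) | {ingredient}
    return bool(categories & ORGANIC_CATEGORIES[country])

print(counts_as_organic("cane_sugar", "CountryA"))  # True  (plant_ingredient qualifies)
print(counts_as_organic("cane_sugar", "CountryB"))  # False (not a cereal_grain)
```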
An ontology is a collection of sets of objects and properties (or maybe: a collection of sets of points in thingspace). An agent's ontology determines the abstractions it makes.
For example, "chairs"_Zach is in my ontology; it is (or points to) a set of (possible-)objects (namely what I consider chairs) that I bundle together. "Chairs"_Adam is in your ontology, and it is a very similar set of objects (what you consider chairs). This overlap makes it easy for me to communicate with you and predict how you will make sense of the world.
(Also necessary for easy-communication-and-prediction is that our ontologies are pretty sparse, rather than full of astronomically many overlapping sets. So if we each saw a few chairs we would make very similar abstractions, namely to "chairs"_Zach and "chairs"_Adam.)
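(Taking the "similar sets" picture very literally, with made-up objects:)

```python
# Two agents' "chair" concepts as sets of objects. Communication is easy exactly
# to the extent that the sets (the abstractions) overlap.
chairs_zach = {"office chair", "dining chair", "beanbag", "stool"}
chairs_adam = {"office chair", "dining chair", "armchair", "stool"}

overlap = chairs_zach & chairs_adam
jaccard = len(overlap) / len(chairs_zach | chairs_adam)
print(f"shared: {sorted(overlap)}, overlap = {jaccard:.0%}")
# High overlap -> when Zach says "chair", Adam reconstructs nearly the same set.
```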
(Why care? Most humans seem to have similar ontologies, but AI systems might have very different ontologies, which could cause surprising behavior. E.g. the panda-gibbon thing, where adding imperceptible noise to an image makes a classifier call a panda a gibbon. Roughly, if the shared-human-ontology isn't natural [i.e. learned by default] and moreover is hard to teach an AI, then that AI won't think in terms of the same concepts as we do, which might be bad.)
[Note: substantially edited after Charlie expressed agreement.]
↑ comment by Charlie Steiner · 2023-08-02T02:12:52.336Z · LW(p) · GW(p)
Just to paste my answer below yours since I agree:
There's "ontology" and there's "an ontology."
Ontology with no "an" is the study of what exists. It's a genre of philosophy questions. However, around here we don't really worry about it too much.
What you'll often see on LW is "an ontology," or "my ontology" or "the ontology used by this model." In this usage, an ontology is a set of building blocks used in a model of the world. It's the foundational stuff that other stuff is made out of or described in terms of.
E.g. minecraft has "an ontology," which is the basic set of blocks (and their internal states if applicable), plus a 3-D grid model of space.
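(A sketch of that kind of ontology as a data structure: a fixed vocabulary of block types plus a 3-D grid of positions; the block list is obviously abbreviated.)

```python
from enum import Enum, auto

# The "building blocks" of this toy world-model: a fixed vocabulary of block types...
class Block(Enum):
    AIR = auto()
    DIRT = auto()
    STONE = auto()
    WATER = auto()

# ...plus a 3-D grid of positions. Everything in this world is described in these terms;
# there is no "half a block" or "chair" at this level unless you build it out of blocks.
world: dict[tuple[int, int, int], Block] = {}
world[(0, 0, 0)] = Block.STONE
world[(0, 1, 0)] = Block.DIRT
print(world.get((0, 2, 0), Block.AIR))   # unset positions default to AIR
```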
↑ comment by Adam Zerner (adamzerner) · 2023-08-02T03:06:47.850Z · LW(p) · GW(p)
Hm, I think I see. Thanks. But what about abstract things? Things that never boil down to the physical. Like "probability". Would the concept of probability be something that would belong to someone's ontology?
↑ comment by Charlie Steiner · 2023-08-02T03:33:59.296Z · LW(p) · GW(p)
It could be! People don't use the same model of the world all the time. E.g. when talking about my living room I might treat a "chair" as a basic object, even though I could also talk about the atoms making up the chair if prompted to think differently.
When talking about math, people readily reason using ontologies where mathematical objects are the basic building blocks. E.g. "four is next to five." But if talking about tables and chairs, statements like "this chair has four legs" don't need to use "four" as part of the ontology, the "four-ness" is just describing a pattern in the actual ontologically basic stuff (chair-legs).
↑ comment by Valentine · 2023-08-02T03:35:29.190Z · LW(p) · GW(p)
I also agree. I was going to write a similar answer. I'll just add my nuance as a comment to Zach's answer.
I said a bunch about ontologies in my post on fake frameworks [LW · GW]. There I give examples and I define reductionism in terms of comparing ontologies. The upshot is what I read Zach emphasizing here: an ontology is a collection of things you consider "real" together with some rules for how to combine them into a coherent thingie (a map, though it often won't feel on the inside like a map).
Maybe the purest example type is an axiomatic system. The undefined terms are ontological primitives, and the axioms are the rules for combining them. We usually combine an axiomatic system with a model to create a sense of being in a space. The classic example of this sort being Euclidean geometry.
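(For instance, a fragment of such a system written out explicitly; this is a sketch in Lean with "point", "line", and "lies on" as the undefined primitives, not a full axiomatization of Euclidean geometry.)

```lean
-- The ontological primitives: undefined terms with no internal structure.
axiom Point : Type
axiom Line  : Type
axiom lies_on : Point → Line → Prop

-- An axiom is a rule for how the primitives combine:
-- any two distinct points lie on some common line.
axiom line_through : ∀ p q : Point, p ≠ q → ∃ l : Line, lies_on p l ∧ lies_on q l
```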
But in practice most folk use much more fuzzy and informal ontologies, and often switch between seemingly incompatible ones as needed. Your paycheck, the government, cancer, and a sandwich are all "real" in lots of folks' worldviews, but folks don't always clearly relate those kinds of "real" to one another, because how they relate doesn't usually matter.
I think ontologies are closely related to frames [? · GW]. I wonder if frames are just a special kind of ontology, or maybe the term we give for a particular use of ontologies. Mentioning this in case frames feel more intuitive than ontologies do.
↑ comment by Zach Stein-Perlman · 2023-08-02T04:10:48.070Z · LW(p) · GW(p)
(I agree. I think frames and ontologies are closely related; in particular, ontologies are comprehensive while frames just tell you what to focus on, without needing to give an account of everything.)
ELI5: Ontology is what you think the world is, epistemology is how you think about it.
Epistemic status: shaky. Offered because a quick answer is often better than a completely reliable one.
An ontology is a comprehensive account of reality.
The field of AI uses the term to refer to the "binding" of the AI's map of reality to the territory. If the AI for example ends up believing that the internet is reality and all this talk of physics and galaxies and such is just a conversational ploy for one faction on the internet to gain status relative to another faction, the AI has an ontological failure.
ADDED. A more realistic example would be the AI's confusing its internal representation of the thing to be optimized with the thing the programmers hoped the AI would optimize. Maybe I'm not the right person to answer because it is extremely unlikely I'd ever use the word ontology in a conversation about AI.
↑ comment by Isha Yiras Hashem (tzjuniper@gmail.com) · 2023-08-10T19:21:47.963Z · LW(p) · GW(p)
So epistemic means: confidence of knowing?
↑ comment by RHollerith (rhollerith_dot_com) · 2023-08-16T18:01:18.502Z · LW(p) · GW(p)
Yes, the "epistemic status" is me telling you how confident I am.
Always confuse this with Deontology ;-)
If Ontology is about "what is?" why is Deontology not "What is not?"
↑ comment by Adam Zerner (adamzerner) · 2023-08-02T16:35:35.959Z · LW(p) · GW(p)
Good question! Although I think it would be appropriate to move it to a comment instead of an answer.
4 comments
Comments sorted by top scores.
comment by Zach Stein-Perlman · 2023-08-02T04:27:22.667Z · LW(p) · GW(p)
Others seemed to like my answer, so I adapted it into the wiki page [? · GW]. But I don't think it's amazing. Others should feel free to edit.
comment by Chris_Leong · 2023-08-02T06:42:55.061Z · LW(p) · GW(p)
It's worth noting that ontology is used in at least two distinct ways.
One way is the casual usage as the set of objects and/or properties in your mental map: for example, chair, red, fast, etc.
Another way comes out of philosophy and is an attempt to choose some fundamental objects and/or properties in terms of which others are described. For example, chairs are really molecules, which are really atoms, which are really in a quantum wave function. When talking about things being fast, well, that's really defined in terms of space and time and reference frames, etc.
(Of course, it is possible to challenge the notion of some objects being more "fundamental than others")
↑ comment by Adam Zerner (adamzerner) · 2023-08-02T19:05:14.826Z · LW(p) · GW(p)
Oh, interesting. So with the more formal/philosophical meaning, there's only one ontology, not many, and the goal is to figure out what that ontology looks like? I.e. what the most fundamental building blocks of the universe are?
↑ comment by Chris_Leong · 2023-08-02T19:14:12.818Z · LW(p) · GW(p)
So with the more formal/philosophical meaning, there's only one ontology, not many, and the goal is to figure out what that ontology looks like?
Not necessarily. Philosophers can debate whether there is such a thing as an ontology and whether it is unique and whether it is subjective. Basically everything that can be debated in philosophy is debated.