Ontologies Should Be Backwards-Compatible
post by Thoth Hermes (thoth-hermes) · 2023-05-14T17:21:03.640Z · 3 comments
This is a link post for https://thothhermes.substack.com/p/ontologies-should-be-backwards-compatible
Posting these slightly out of order from when I published them on my blog. If you downvote this post, please consider also leaving a comment for me (heck, if you upvote it, too). Thank you!
In 2017, I wrote several blog posts on LessWrong:
- Mode Collapse and the Norm One Principle
- One-Magisterium Bayes
- Expert Iteration From the Inside
At or around the time I wrote One-Magisterium Bayes, I was put off by the reception these posts received, even though all three have positive “karma” as of today. This led me to take a hiatus from actively engaging with the community for quite some time.
The posts I submitted after recently coming back to LessWrong have all received negative scores thus far. I wanted to account for this: after all, it should be possible to see why these newer posts are doing so poorly by comparing how my views have evolved against what they used to be. I hadn’t even thought about these older posts in quite some time, so I wasn’t sure what to expect.
To my surprise, after re-reading those old posts, I found that they are all fairly good predictors of what my views would be 5 to 6 years in the future.
The first post, Mode Collapse and the Norm One Principle, is, in layman’s terms, an argument that we should promote discussion norms that discourage information-less criticism. On this view, the forum’s upvote/downvote mechanism is less desirable than simply commenting. Criticisms should aim to be “norm one”: pointing in the direction that the target of the criticism ought to move towards, and reasonable in magnitude (not overtly hostile, not overly praising). Otherwise (and this is what I predicted would happen under the site’s norms then, and would still predict now), the community experiences “mode collapse”: only a few sets of ideas get repeated over and over, and everything else is dismissed or bumped off the site.
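To gesture at where the name comes from (this formalization is my own gloss, not something spelled out this way in the original post): picture a criticism as a vector whose direction is the change it asks for and whose magnitude is its forcefulness. The principle asks for

$$c = \hat{d} \cdot \|c\|, \qquad \|c\| \approx 1,$$

where $\hat{d}$ is a definite direction of improvement. A bare downvote is the degenerate case: nonzero magnitude, no defined direction.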
One-Magisterium Bayes argues for “strong Bayesianism” as an ontology, as opposed to using it “toolbox-style.” Essentially, it means that Bayes’ Theorem gives us an alternative to ordinary logic, and that we should therefore use it as such, given that we seem to agree it is superior in at least a few domains. This one was weakly upvoted but received some pretty harsh criticism in the comments (at least, that was my memory of what happened at the time). However, I only found out yesterday that someone else had cited it in a post a month later, about a paper that argued more formally for the very same thing.
I don’t see why my post should be considered wrong: if it is, that essentially means we stick to classical propositional logic as our primary ontology, and only use Bayesian probability in a few specific domains (usually ones that overlap with frequentist statistics). But then whatever we liked about Bayes’ Theorem isn’t translating into updating our ontology towards a new framework that would explain why we like it so much.
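To make the “alternative to ordinary logic” claim concrete, here is the standard observation (my framing, not a quote from the original post) that probability theory contains propositional logic as a limiting case. Bayes’ Theorem,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

reduces to classical inference when all probabilities are 0 or 1: if $P(E \mid H) = 1$ and $P(H) = 1$, then $P(E) = 1$ (modus ponens), and if $P(E \mid H) = 1$ and $P(E) = 0$, then $P(H) = 0$ (modus tollens). On this view, adopting Bayes as the primary ontology keeps everything classical logic offered; it is propositional logic that becomes the special case.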
My third post, Expert Iteration From the Inside, argues that we should allow our intuition to guide our longer, systematic reasoning processes. In practice, this means allowing yourself to take actions that you feel would be a good idea before you have a formal, rigorous “proof” that they will succeed. This post is weakly upvoted, more weakly than the others, and has more votes than points (meaning it was downvoted a little).
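The title riffs on the expert iteration algorithm from reinforcement learning, in which a fast learned policy proposes candidate moves and a slower search procedure evaluates them, with the results then used to retrain the policy. A loose Python sketch of that loop (my own illustration of the analogy; the function names and toy evaluator are placeholder assumptions, and the retraining step that gives the real algorithm its power is only noted in a comment):

```python
import random

def intuition(actions, k=3):
    # Fast, cheap proposer: pick a few candidates without justification.
    return random.sample(actions, k=min(k, len(actions)))

def deliberate(candidates, evaluate):
    # Slow, systematic verifier: score only the intuitively promising options.
    return max(candidates, key=evaluate)

def expert_iteration(actions, evaluate, rounds=10):
    # Intuition narrows the search space; deliberation selects. In the full
    # RL algorithm, `intuition` would then be retrained toward `best` each
    # round; that feedback step is omitted here for brevity.
    best = None
    for _ in range(rounds):
        choice = deliberate(intuition(actions), evaluate)
        if best is None or evaluate(choice) > evaluate(best):
            best = choice
    return best

# Toy usage: search for the action closest to 42.
print(expert_iteration(list(range(100)), evaluate=lambda a: -(a - 42) ** 2))
```

The analogy in the post is the same shape: act on what feels promising first, and let the slower, rigorous process confirm or correct it afterwards.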
This is something I would still fully agree with (I still agree with all three, actually, but this one I probably wouldn’t even rewrite at all). If you’re reviewing your old self, you shouldn’t really believe wildly different things than you used to, nor should you see your old self as some kind of adversary to your current self. Your current self should have a wider, more overarching view of your older selves. That means you should be able to look at the sequence of views you held, recognize that you came to like certain ideas, and be able to explain why you came to like them.
I mean, let’s face it: You don’t really want to cringe at everything you thought in the past, do you? You don’t need to. That’s why I said this is not just another thing to stress you out about.
The scores my more recent posts on LessWrong received do not override my overwhelming sense that what I wrote in them was correct, useful, and timely. Furthermore, old LessWrong worked a little differently, but I remember feeling like those posts received backlash there as well, prompting me to leave the community for a while. But I still like those old posts, too. And they seem to be compatible with my views as they are today.
Given that “Norm One” wasn’t adopted, it’s not as though any of the criticism I or anyone else received there could have moved the site in a more correct direction. Its direction has been, especially lately, to explicitly move away from “Norm One,” as opposed to staying the same or letting things go however the community as a whole decides to behave. The site’s philosophy has generally been that one can never be confident, and should never allow oneself to feel confident, unless one’s ideas have been evaluated and approved by the wider group. And that “wider group” will not be everyone’s peers, nor a subset of peers one chooses oneself, but whatever the most authoritative-looking body is that claims to determine what rationality is.
This is one reason why those negative scores do not override my sense that I am correct about those posts. In addition to my current views being more evolved than my earlier ones, while also explaining them better, I do not expect the current LessWrong Hive to be able to accurately judge the material that would actually best serve it. Right now, controversial material is likely to do the most work in pushing it towards better ideas.
But that just isn’t good rationality. What I, and I suspect many others, were and are looking for in a rationality-practicing community is a way to train oneself to be rational and to know, on your own, that you’re doing it well: to be able to judge your own and others’ skill at it, much as one would train in a martial art, a craft, or meditation. Yes, you could and should have many teachers, and you would be able to tell that they are good at what they do, and how good you are relative to them. Therefore, you would know when you were ready to be a teacher yourself.
If you’re good at whatever you do, that will be apparent to you as well, even if you’re doing it alone.
So in practice, that means I should be able to look at my past skill level through the lens of my current skill level. My 2017 self was onto a lot of things that seemed reasonable to him at the time, and as time went on, he got to have deeper insights that explained why those things seemed reasonable.
In other words, your ontologies should be backwards-compatible: if you update towards what you feel is reasonable, then later, you should have more fleshed-out explanations for why those things were reasonable. Those explanations will contain new things that feel reasonable and will still need more fleshing out, but you can expect that to happen.
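The phrase is borrowed from software, and the analogy can be made literal. Below is a minimal sketch (entirely my own illustration; the class names, the single belief, and the explanation string are placeholder assumptions, not anything from the post): a new ontology counts as backwards-compatible when it still returns the old ontology’s answers to the old ontology’s questions, while adding the deeper explanation for why those answers seemed right.

```python
class OldOntology:
    # 2017 model: beliefs stored as flat assertions, no deeper story.
    def __init__(self):
        self.beliefs = {"bayes_is_useful": True}

    def holds(self, claim):
        return self.beliefs.get(claim, False)


class NewOntology:
    # Later model: same verdicts, plus the explanation for why they seemed
    # reasonable. "Backwards-compatible" here means every query the old
    # model answered is still answered, with the same verdict.
    def __init__(self, old):
        self.beliefs = dict(old.beliefs)  # keep old answers as special cases
        self.explanations = {"bayes_is_useful": "generalizes classical logic"}

    def holds(self, claim):
        return self.beliefs.get(claim, False)

    def why(self, claim):
        return self.explanations.get(claim, "not yet fleshed out")


old = OldOntology()
new = NewOntology(old)
# The compatibility check: old queries still resolve the same way.
assert all(new.holds(c) == old.holds(c) for c in old.beliefs)
print(new.why("bayes_is_useful"))
```

The assertion at the bottom is the point: the upgrade is judged by whether old queries still resolve the same way, not by whether the internals match.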
Essentially, “ontologies should be backwards-compatible” is telling you what to expect, more than urging you to prudently make sure that they are.
3 comments
Comments sorted by top scores.
comment by TAG · 2023-05-15T18:30:14.318Z
> One-Magisterium Bayes argues for “strong Bayesianism” as an ontology, as opposed to using it “toolbox-style.”
You're either saying something strange about Bayes, or using "ontology" weirdly. By the ordinary meaning of ontology, Bayes isn't ontology, and is more like epistemology. A model of the world, what ontology usually means, is something you get out of an epistemology.
> The first post, Mode Collapse and the Norm One Principle, is, in layman’s terms, an argument that we should promote discussion norms that discourage information-less criticism.
I'm not a fan of it myself, but sometimes the problem is that something is "not even wrong".
In particular, the reader can't always tell if ordinary things are being said in strange language, or strange things are being said in ordinary language.
comment by Gordon Seidoh Worley (gworley) · 2023-05-15T16:50:18.989Z
I appreciate the sentiment but I find something odd about expecting ontology to be backwards compatible. Sometimes there are big, insightful updates that reshape ontology. Those are sometimes not compatible with the old ontology, except insofar as both were attempting to model approximately the same reality. As an example, at some point in the past I thought of people as having character traits; now I think of character traits as patterns I extract from observed behavior and not something the person has. The new ontology doesn't seem backwards compatible to me, except that it's describing the same reality.
↑ comment by Dagon · 2023-05-15T17:42:30.190Z
There's a LOT of detail that the word "compatible" obscures. Obviously, the two ontologies aren't identical, so they must differ in some ways. This will always and intentionally make them incompatible on some dimensions. "Compatible for what purpose?" is the key question here.
I'd argue that your character-traits example is very illustrative of this. To the extent that you use the same clustering of trait definitions, that's very compatible for many predictions of someone's behavior. Because the traits are attached differently in your model, that's probably NOT compatible for how traits change over time. There are probably semi-compatible elements in there, as well, such as how you picture uncertainty about or correlation among different trait-clusters.