Comments sorted by top scores.
comment by Gordon Seidoh Worley (gworley) · 2017-10-19T01:42:08.268Z · LW(p) · GW(p)
I don't really like the alief/belief divide. I think I dislike it for the same reason I dislike trying to separate different kinds of desires into categories (needs, wants, hopes, preferences, values, etc.): it breaks things up into categories that lean into, rather than away from, the telos of ontology generation, so that we look at them less as they are and more as we use them.
I prefer the result of combining alief, belief, memory, and nearby concepts (including all the forms of desire) into the general category we might call axia. This might seem like it's giving up a lot of valuable detail, but to me this bracketing eliminates many of the assumptions (another kind of axia) we make when using these words and instead focuses on the way they are all united in the same part of the process (specifically being part of the process's state or, in phenomenological terms, part of the subject). Bayes' Theorem calls these priors, so we could also call them that, except...
The interesting thing about uniting these concepts as part of axia is that it enables the unification of many disparate methods of combining axia with their own kind under the umbrella of a generalized axiology (and thus a good reason to choose "axia" as a name), which to me seems wholly appropriate given that axiology is all about how to combine priors in the absence of infinite computational resources.
None of this is to take away from the fact that there is a distinction between voluntary beliefs and involuntary things that function a lot like beliefs; it's just to point out that I don't think the distinction is normally worth making, because it causes you to focus on the wrong level of detail.
↑ comment by Conor Moreton · 2017-10-19T02:47:24.721Z · LW(p) · GW(p)
Loren ipsum
↑ comment by Gordon Seidoh Worley (gworley) · 2017-10-19T18:12:18.906Z · LW(p) · GW(p)
I guess it depends on what you consider to count as "concrete", but sure, I can give four benefits.
1. Alief, belief, desire, and all the rest posit a lot of ontological complexity, in the sense of gears or epicycles. They may allow you to produce accurate predictions for the things you care about, but at the cost of being strictly less likely to be accurate in general: because they have more parts, the multiplication of probabilities makes them less likely to produce accurate predictions in all cases. Another way to put this is that the more parts your model has, the more likely it is that any accuracy comes from fitting the model to the data rather than from generating a model that fits the data. And since the more parsimonious theory of axia is more likely to accurately predict reality ("be true") because it posits less, any philosophical work done with the concept is more likely to remain accurate when new evidence appears.
2. Understanding oneself is confusing if it seems there are competing kinds of things that lead to making decisions. The existence of aliefs and beliefs, for example, requires not only a way of understanding how aliefs and beliefs can each be combined amongst themselves (we might call these the alief calculus and the belief calculus), but also some account of how aliefs and beliefs interact. Again, in an appeal to parsimony, it's less complex if both are actually the same thing and just have conflicting axia. This might be annoying if you were hoping to have a very powerful alief/belief calculus with proper functions, for example, but it is no problem if you accept a calculus of relations that need not even be transitive (see the sketch after this list).
(As an aside, this points to an important aspect of AI safety research that I think is often overlooked: most efforts now are focused on things with utility functions and the like, but that's because, if we can't even solve the more constrained case of utility functions, how are we possibly going to handle the more general case of preference relations?)
3. Following on from (2), it is much easier to be tranquil and experience fluidity if you can understand your relationship ("you" being a perceived persistent subject) to axia in general, rather than to each kind of thing we might break apart from axia. It's the difference between addressing many special cases and addressing everything all at once.
4. Thinking in terms of axia allows you to think of yourself more as integrated than as made up of parts, and this better reflects how we understand the world to work. That is, aliefs, beliefs, etc. have firm boundaries that require additional ontology to integrate with other parts of yourself. Axia eliminate the need to specifically understand how such integrations work in detail while still letting you work as if you did. Think of how topology lets you do work even when you don't understand the details of the space you are analyzing, and similarly how category theory lets you not even need to understand much about the things you are working with.
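To make the relation-calculus point in (2) concrete, here's a minimal sketch (toy code of my own, with made-up options and no claim to be a worked-out formalism): a cyclic preference relation is a perfectly good relation, yet no utility function can reproduce it, which is why a calculus of relations is strictly more general than one of utility functions.

```python
# Toy illustration: a bare preference relation need not be transitive,
# so it cannot always be represented by a utility function.
from itertools import permutations

def is_transitive(prefers):
    """prefers is a set of (a, b) pairs meaning 'a is preferred to b'."""
    return all(
        (a, c) in prefers
        for (a, b) in prefers
        for (b2, c) in prefers
        if b == b2
    )

def representable_by_utility(options, prefers):
    """True if some total ordering of the options reproduces every preference."""
    return any(
        all(order.index(a) < order.index(b) for (a, b) in prefers)
        for order in permutations(options)
    )

options = ["A", "B", "C"]
cyclic = {("A", "B"), ("B", "C"), ("C", "A")}    # conflicting "axia"

print(is_transitive(cyclic))                      # False
print(representable_by_utility(options, cyclic))  # False: no utility function fits
```

A calculus that works directly with relations like this doesn't break when transitivity fails; a calculus of utility functions does.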
↑ comment by gjm · 2017-10-19T19:39:19.165Z · LW(p) · GW(p)
I'm not sure I've understood what you're proposing. Let me sketch what it sounds like and ask a few questions, and then maybe you can tell me how I've misunderstood everything :-).
So, you have things called axia. I guess either (1) a belief is an axion, and so is an alief, and so is a memory of something, and so is a desire, and so on; or else, (2) belief is an axion, and so is alief, and so is memory, and so is desire; that is, each of these phenomena considered as a whole is an axion.
I don't think #2 is consistent with what you're saying because that means keeping these notions of belief and alief and so on and adding a broader category on top of them, and it sounds like you don't want the narrower categories at all. So I'm guessing you intend #1. So now my belief that many millions of people live in London is an axion; so is my alief that if I ignore something bothersome that I need to do it will go away after a while; so is my recollection of baking chocolate chip cookies a few days ago; if I were hungry right now, so would be my desire for some food; I'm guessing you would also consider my preference for fewer people to get killed in Middle Eastern conflicts to be an axion, and perhaps also my perception of a computer monitor in front of me right now, and probably a bunch of other kinds of thing.
Question: what actually is the definition of "axion"? How do I tell whether a given (kind of) thing is one?
OK, so now you want to lump all these things together and ... do what with them? I mean, if you just want to say that these are all things that happen in human thought processes then yeah, I agree, but you can say that just as well if you classify them as beliefs, desires, etc. Treat them all exactly the same? Surely not: the consequences for me of thinking there is a hungry tiger in the room are very different from those of wishing there were one, for instance. So we have these things that are diverse but have some common features, and this is equally true whether you give them different names like "belief" and "desire" or call them all "axia". I'm not really seeing much practical difference, and the things you've said in response to Conor's question don't help me much. Specifically:
Your first point (more detailed models are more likely to overfit) seems like a fully general argument against distinguishing things from other things, and unsurprisingly I think that as it stands it's just wrong. It is not true that more parsimonious theories are more likely to predict accurately. It's probably true that more parsimonious and equally explanatory theories are more likely to predict accurately, but you've given no reason to suppose that lumping the different kinds of "axia" together doesn't lose explanatory power, and if in fact what you want to do is call 'em all "axia" but still distinguish when that's necessary to get better predictions (e.g., of how I will act when I contemplate the possibility of a hungry tiger in the room in a particular way) then your theory is no longer more parsimonious.
Your second point (it's confusing to think that there are competing kinds of things that lead to making decisions) seems like nonsense to me. (Which may just indicate that I haven't understood it right.) Our decisions are affected by (e.g.) our beliefs and our desires; beliefs and desires are not the same; if that's confusing then very well, it's confusing. Pretending that the distinctions aren't there won't make them go away, and if I don't understand something then I want to be aware of my confusion.
Your third point seems more or less the same as the second.
Your fourth point (it's better to think of ourselves as integrated) is fair enough, but once again that doesn't mean abandoning distinctions. My left arm and my spleen are both part of my body, they're both made up of cells, they have lots in common, but none the less it is generally preferable to distinguish arms from spleens and super-general terms like "body part" are not generally better than more specific terms like "arm".
So I'm still not seeing the benefits of preferring "axia" to "aliefs, beliefs, desires, etc." for most purposes. Perhaps we can try an actually concrete example? Suppose there is a thing I need to do, which is scary and bothersome and I don't like thinking about it, so I put it off. I want to understand this situation. I could say "I believe that I need to do this, but I alieve that if I pay attention to it I will get hurt and that if I ignore it it will go away". What would you have me think instead, and why would it be better? (One possibility: you want me to classify all those things as axia and then attend to more detailed specifics of each. If so, I'd like to understand why that's better than classifying them as aliefs/beliefs and then attending to the more detailed specifics.)
↑ comment by Gordon Seidoh Worley (gworley) · 2017-10-19T23:54:32.845Z · LW(p) · GW(p)
First, thanks for your detailed comments. This kind of direct engagement with the ideas as stated helps me a lot in figuring out just what the heck it is I'm trying to communicate!
> Question: what actually is the definition of "axion"? How do I tell whether a given (kind of) thing is one?
First, quick note: "axia" is actually the singular, and I guess the plural in English should be either "axies" or "axias", but I share your intuition that it sounds like a plural, so my intent was to use "axia" as a mass noun. This would hardly be the first time an Anglophone abused Ancient Greek, and my notion of "correct" usage is primarily based on Wiktionary.
Axia is information that resides within a subject when we draw a subject-object distinction, as opposed to evidence (I'll replace this with a Greek word later ;-)) which is the information that resides in the object being experienced. This gets a little tricky because for conscious subjects some axia may also be evidence (where the subject becomes the object of its own experience) and evidence becomes axia (that's the whole point of updating and is the nature of experience), so axia is information "at rest" inside a subject to be used as priors during experience.
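Here's a minimal sketch of that last sentence (toy code with hypothesis names and numbers I made up): the prior is the axia sitting in the subject, the likelihood is evidence coming from the object, and updating folds that evidence back into the subject's state.

```python
# Toy illustration: "axia" as a prior held by the subject, "evidence" as a likelihood
# coming from the object; updating folds the evidence into the subject's state.

def update(prior, likelihood):
    """Bayes' rule: posterior(h) is proportional to prior(h) * P(evidence | h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

axia = {"tiger_in_room": 0.01, "no_tiger": 0.99}      # information "at rest" in the subject
evidence = {"tiger_in_room": 0.70, "no_tiger": 0.05}  # P(heard growling | hypothesis), made up

axia = update(axia, evidence)  # the evidence has now become part of the subject's state
print(axia)                    # roughly {'tiger_in_room': 0.124, 'no_tiger': 0.876}
```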
> the consequences for me of thinking there is a hungry tiger in the room are very different from those of wishing there were one, for instance. So we have these things that are diverse but have some common features, and this is equally true whether you give them different names like "belief" and "desire" or call them all "axia". I'm not really seeing much practical difference,
To me the point is that beliefs, desires, and the rest have a common structure that lets us see the differences between beliefs, desires, etc. as differences of content rather than differences of kind. That is, beliefs are axia whose content describes claims about the world, and desires are axia whose content describes claims about how we would like the world to be. That I can describe subclasses of axia in this way obviously implies that their content is rich enough that we can identify patterns within it and talk about categories that match those patterns, but the important shift is in thinking of beliefs, aliefs, desires, etc. not as separate kinds of things but as different expressions of the same thing.
Maybe that seems uninteresting, but it goes a long way in addressing what we need to understand to do philosophical work, in particular how much we need to assume about the world in order to say useful things about it: we can be mistaken about what we really mean to point at when we say "belief", "desire", etc., but we are less likely to be mistaken when we make the target we point at larger.
> Perhaps we can try an actually concrete example? Suppose there is a thing I need to do, which is scary and bothersome and I don't like thinking about it, so I put it off. I want to understand this situation. I could say "I believe that I need to do this, but I alieve that if I pay attention to it I will get hurt and that if I ignore it it will go away". What would you have me think instead, and why would it be better? (One possibility: you want me to classify all those things as axia and then attend to more detailed specifics of each. If so, I'd like to understand why that's better than classifying them as aliefs/beliefs and then attending to the more detailed specifics.)
So you seem to already get what I'm going to say, but I'll say it anyway for clarity. If all these things are axia, then what you have is not a disagreement between what you believe and what you alieve, but simply contradictory axia. The resolution then is not a matter of aligning belief and alief or reweighting their importance in how you decide things, but of synthesizing the contradictory axia. Thus I might reflect on why I at once think I need to do this, yet also think it will hurt and that the hurt can be avoided by ignoring it. Now these claims all stand on equal footing to be understood, each likely contributing something towards the complete understanding and ultimately the integration of axia that had previously been left un-unified within you-as-subject.
The advantage is that you remove artificial boundaries in your ontology that may make it implicitly difficult to conceive of these axia as integrable, and you work instead with a general process of axia synthesis that can be trained and reused across many cases, rather than only between axia we can identify as "belief" and "alief".
comment by [deleted] · 2017-10-19T00:10:46.278Z · LW(p) · GW(p)
Some things I think that I alieve:
It's basically always my fault if I feel bad after an interaction with another party.
Time spent on a computer never counts as "real" work.
If things in life are getting too good, something bad is bound to show up.
comment by nBrown · 2017-10-19T10:38:17.995Z · LW(p) · GW(p)
There is a technique called belief reporting developed by Leverage Research. My take is that it lets you check whether you alieve something. It involves intentions.
Intentions, briefly: Place a cup on the table. Hold the intention not to pick up the cup. Try to pick up the cup.
One of two things happens: either you can't pick up the cup, or you release the intention. If you can't pick up the cup, you may feel a physical pressure, a straining against yourself.
Belief reporting is where you hold the intention to only say true things. I feel this as a firmness in my back. If I try to say "My hair is green" I won't be able to speak, and a kind of pressure develops in my chest. Perhaps this is a weak form of self-hypnosis.
I belief-reported all of the statements above, and found I was able to say more than five of them while holding the intention to only speak the truth. Strange. My S2 does not endorse them at all, but I guess my S1 does.
↑ comment by Conor Moreton · 2017-10-19T17:26:31.447Z · LW(p) · GW(p)
Loren ipsum
comment by rthomas2 · 2017-10-19T23:03:58.900Z · LW(p) · GW(p)
Like u/gworley, I’m not a fan of the alief/belief distinction. I take a slightly different tack: I think that belief, and related terms, are just poorly defined. I find it easier to talk about expectations.
“Expectations” is, I think, a term best operationalized as “things that surprise a person if they don’t happen, while not surprising said person if they do happen.” (I started thinking of this due to the “invisible dragon” from the Sequences.)
It’s empirically testable whether a person is nonplussed: sometimes this might be hard to notice, but almost always they register some quick change in body language or facial expression and, more importantly, have to pause and generate new ideas, because they’ve fallen into a gap in their previous ones.
A person can say “I believe democracy is the best form of government”, and mean a whole bunch of things by it. Including merely “I want to live in a democracy.” They can also say “I believe 2+2=4”...and yet be amazed when someone takes two groups of two apples, combines them, and counts them all to total four apples. Saying “I believe” seems like it has many possible meanings, which are all best communicated by other words—and the primary one seems to be expectations. So rather than keep the ill-defined word belief and create a new word to specify some other imprecise category, let’s just be more precise. We already have enough words for that—alief is too many, and so is belief.
↑ comment by Conor Moreton · 2017-10-19T23:10:00.676Z · LW(p) · GW(p)
Loren ipsum
↑ comment by rthomas2 · 2017-10-20T22:36:56.070Z · LW(p) · GW(p)
I think you’re exactly right that distinguishing between what people claim, and then what they turn out to actually expect, is the important thing here. My argument is that alief/belief (or belief in belief), as terms, make this harder. I just used the words “claim” and “expectation”, and I would be immensely surprised if anyone misunderstands me. (To be redundant to the point of silliness: I claim that’s my expectation.)
“Belief” has, I think, lost any coherent definition. It seems now, not to refer to expectations, but to mean “I want to expect X.” Or to be a command: “model me as expecting X.” Whenever it’s used, I have to ask “what do you mean you believe it?” and the answer is often “I think it’s true”; but then when I say “what do you mean, you think it’s true?”, the answer is often “I just think it’s true”, or “I choose to think it’s true”. So it always hits a state somewhere on the continuum between “meaningless” and “deceptive”.
Words like “claim”, “expectation”, or even “presume” (as in, “I choose to presume this is true”) all work fine. But “belief” is broken, and “alief” implies all we need is to add another word on top. My claim is that we need, instead, fewer words: only the ones that remain meaningful, rather than ones acting as linguistic shields against meaning.
↑ comment by Gordon Seidoh Worley (gworley) · 2017-10-20T00:05:42.101Z · LW(p) · GW(p)
But to use rthomas2's idea of expectation, these can just be contradictory expectations. If we take away the constraint that beliefs be functions from world states to probabilities and instead let them be relations between world states and probabilities, we eliminate the need to talk about alief/belief, or belief and belief-in-belief, and can just talk about having contradictory axia/priors/expectations (a rough sketch of the difference is below).
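To make the function/relation difference concrete, here's a minimal sketch (toy code and a made-up example of my own, not a formal proposal): a function assigns exactly one probability to each world state, so contradictions are impossible by construction, while a relation can hold contradictory assignments side by side, which is exactly the situation the alief/belief language tries to describe.

```python
# Toy illustration: a function admits one probability per world state;
# a relation can hold contradictory assignments side by side.

belief_function = {"I need to do the task": 0.95}  # exactly one value per state

belief_relation = {                                 # a set of (state, probability) pairs
    ("I need to do the task", 0.95),
    ("I need to do the task", 0.10),                # contradictory axia coexisting
}

def contradictions(relation):
    """Group probabilities by world state; any state with more than one value
    is a contradiction waiting to be synthesized."""
    grouped = {}
    for state, p in relation:
        grouped.setdefault(state, []).append(p)
    return {s: ps for s, ps in grouped.items() if len(ps) > 1}

print(contradictions(belief_relation))  # {'I need to do the task': [0.95, 0.1]} (order may vary)
```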
I think making an alief/belief distinction is mostly interesting if you want to think in terms of belief in the technical sense used by Jaynes, Pearl, et al. Unfortunately humans don't actually have beliefs in this technical sense, hence the entire rationalist project.
comment by Raemon · 2017-10-21T07:31:54.522Z · LW(p) · GW(p)
When I read this I wasn't 100% sure I grokked the magnanimous point.
Then an anecdote happened to me. :) :/
I recently (due to AlphaGo Zero) gained the system-1 alief that "there is actually a 10%+ chance that we're all dead in 15 years." For the past 6 years I've basically system-2 believed this, but there was sufficient wiggle room, given Raemon's-crude-understanding-of-AI-knowledge, that it was *much* more like "okay, I can see how it could happen, and smart monkeys I trust seem to think it's likely", but ultimately the argument was built off of metaphor and guesses about reference classes that I wasn't forced to take seriously.
Post-AlphaGo Zero, I have a concrete sense: I see the gears locking into place and how they fit together. They are still sticking out of some black boxes, but I see the indisputable evidence that it is more likely for the black box to be Fast Takeoff shaped than Slow Takeoff shaped.
The first thing I felt was an abstract "woah."
The second thing I felt was a general shaking scared-ness.
The third thing was "I want to talk to my parents about this." I want a sit down conversation that goes something like "okay, Mom, Dad, I know I've been spouting increasingly weird stuff over the past 6 years, and for the most part we've sort of agreed to not stress out too much about it nowadays. But, like, I actually believe there's a 10% chance that we all die in 10 years, and I believe it for reasons that I think are in principle possible for me to explain to you.
And... it's okay if you still don't actually share this belief with me, and it's definitely okay if, even if you do believe it, you mostly shrug and go about your business, because what can you do? But it's really important to me that I at least try to talk about this and that you at least try to listen."
This doesn't feel quite shaped like the Magnanimous Error as you describe it, but I'm curious if it feels like it's pointing at a similar phenomenon.
↑ comment by Conor Moreton · 2017-10-21T07:48:59.760Z · LW(p) · GW(p)
Loren ipsum