Probabilistic argument relationships and an invitation to the argument mapping community

post by lunatic_at_large · 2023-09-09T18:45:47.001Z · LW · GW · 4 comments

Contents

  Motivation
  The Argument Model
  Ideology Weighting and Growth-Consistency
  An Invitation to the Argument Mapping Community

I've been thinking about the probabilistic properties of how ideas are connected to each other, and I think that the LessWrong community might be interested in the same kinds of questions that interest me. I'd like to introduce[1] a mathematical framework for talking about random arguments and the ideologies that are roughly-self-consistent within them. Using this framework to answer practical questions requires a lot of data that I don't have. I think that the best source of this data is argument mapping programs, and that one of the most direct applications of this theory is to build computational argument mappers that can give their users non-obvious insights into their arguments. I've noticed some interest in the rationalist space in argument mapping programs, and it seems that a good number of people are creating them. For this reason, I'd like to invite anyone who is in the business of creating argument mappers to get together and chat about how these kinds of computational tools could be integrated -- I made a Discord server at https://discord.gg/2DuBbmEQ5X but am open to other platform suggestions. Maybe we could collect enough data to start addressing some interesting philosophical questions!


Motivation

Here are some questions that I don't currently feel I have the tools to reason about:

(1) Should we expect debates to converge as we spend more time/effort thinking about them? If we pick a question with a yes/no answer "at random" and look at the probability that we think the answer is "yes" over time (assuming we live in a static world where no real-life developments impact our reasoning), then does this probability usually have a limit as time goes to infinity? Is it likely to exhibit some kind of "false convergence" where, for example, it hovers near 1 for debate lengths between 10 minutes and 10,000 hours but at 10,001 hours a new consideration could suddenly emerge and change the arguer's probability of the answer being "yes" to near 0?

(2) How should we expect ideologies to evolve over time? Let's think of ideologies as "roughly-self-consistent" sets of ideas. Suppose you take a community that has been thinking about some cluster of questions for time T. Suppose you observe roughly three competing ideologies. Let's say you return to this community after they've been thinking for time 2T. How would you expect the set of ideologies to have evolved? What are the probabilities of these ideologies shifting, merging, bifurcating? How much drift should we expect on the answers to the questions that the community was originally asking?

(3) Which parts of arguments should we be able to assume are mostly independent? If we are presented with a complicated nest of arguments and one of the chains of reasoning supports a conclusion that turns out to be false, how much do we have to re-consider?[2]

(4) In a similar vein, can we detect signatures of social pressure or hidden arguments if we are given a bunch of arguments and data on which people support which arguments? For example, if there are two completely separate arguments for two separate policy proposals but the proponents of the two proposals are highly correlated, can we infer that there are either parts of the arguments being left unsaid or that there are social dynamics interfering with rational decision making?

(5) When, if ever, is it advantageous to split a community of researchers up into two communities of researchers who don't talk to each other for a while? I can think of historical examples of such separation being good overall, but it's not clear to me how you could predict when such intellectual diversity is called for.

I think that some of these questions could gain added urgency in a world populated by super-intelligent AIs. For example, we might suddenly have 10,000 years' worth of philosophical developments dropped on humanity's doorstep in an afternoon. Under the supremely generous assumption that a super-intelligent AI would reason with similar-ish patterns to humans, how different do we expect the conclusions of those developments to be compared to the moral frameworks we have today? Understanding how collections of arguments evolve over time seems critical if that evolution could suddenly start happening several orders of magnitude faster.

I think that to answer these questions properly, we need a notion of what a random argument or set of ideas looks like. We care less about what a collection of claims is saying about the world and more about how the claims are connected to each other. To be able to tackle the kinds of questions I listed earlier, let's write down some mathematical structures that capture the behavior we're interested in.


The Argument Model

To guide the creation of our model, let's consider the following collection of claims: "I slept a lot last night. Therefore I'm well-rested. That reasoning is nonsense, I'm often randomly tired after getting good sleep! Well, I feel well-rested at least." If I ever were to say these words, there would obviously be a lot more going on inside my head than just what I express out loud, and not all things I say bear equal weight. However, it feels difficult to quantify these weights, especially if multiple people are arguing over the same set of statements and each person attaches different weights to them.[3] Thus, let's focus on just the text in front of us.

If I had to break the given text into individual claims, I would probably do something like "(1): 'I slept a lot last night.' (2): 'I'm well-rested.' (3): (1) implies (2). (4): 'I'm often randomly tired after getting good sleep!' (5): (4) implies not (3). (6): 'I feel well-rested.' (7): (6) implies (2)." Some of these claims are directly making statements about the world -- these claims aren't really saying anything about the relationships between claims and thus aren't contributing to the structure of the argument. Other claims are asserting relationships between previously stated claims -- in particular, they are asserting that propositional formulas hold in terms of the truth values of previous claims. For example, if (3) is true then the propositional formula "(1) implies (2)" must hold. I'd push back against the idea that claim (3) is true if and only if the logical formula "(1) implies (2)" is true -- maybe it's possible that I slept a lot last night and that I'm well-rested but it's untrue that the former caused the latter. I'd argue that it's more accurate to say that the logical formula "(3) implies ((1) implies (2))" is true -- i.e. if the reasoning in (3) is true then the formula "(1) implies (2)" must hold. Statements like (1) that say things about the world can be thought of as forcing trivial implications to be true, i.e. "(1) implies True".[4]

I think this idea generalizes pretty nicely: each claim implies that some formula in terms of the previous claims must be true. For example, I'd represent this argument as a function $a$ from the numbers $1$ through $7$ to propositional formulas, where each $a(i)$ only involves variables drawn from the numbers $1$ through $i-1$. Our specific $a$ would be:

$$a(1) = \text{True}, \quad a(2) = \text{True}, \quad a(3) = (1 \implies 2), \quad a(4) = \text{True}, \quad a(5) = (4 \implies \neg 3), \quad a(6) = \text{True}, \quad a(7) = (6 \implies 2).$$

I would then say that $a$ induces the logical formula:

$$(1 \implies \text{True}) \wedge (2 \implies \text{True}) \wedge (3 \implies (1 \implies 2)) \wedge (4 \implies \text{True}) \wedge (5 \implies (4 \implies \neg 3)) \wedge (6 \implies \text{True}) \wedge (7 \implies (6 \implies 2)).$$

Let's try to generalize this construction. Let $[n]$ denote the set $\{1, \ldots, n\}$ for any natural number $n$ (I'm excluding $0$ from $\mathbb{N}$). Let $F_n$ denote the set of propositional formulas with variables in $[n]$, and let $F = \bigcup_{n \geq 0} F_n$. Let's define an argument $a$ of size $n$ to be a function from $[n]$ to $F$ such that $a(i) \in F_{i-1}$ for all $i \in [n]$ (let's define $[0] = \emptyset$, so that $F_0$ consists of the variable-free formulas).[5] We can then define $\varphi(a)$ to be the propositional formula that $a$ represents: $\varphi(a) = \bigwedge_{i=1}^{n} (i \implies a(i))$. Let $A_n$ denote the set of all arguments of size $n$ and let $A = \bigcup_{n \geq 0} A_n$.[6]
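
To make these definitions a bit more tangible, here is a minimal Python sketch (the representation and all names are my own, purely illustrative choices): each formula $a(i)$ is a predicate over a truth assignment, the sleep argument from above is a list of seven such predicates, and $\varphi(a)$ is evaluated by checking every clause $i \implies a(i)$.

```python
from itertools import product

# A minimal sketch of the argument model. Each formula a(i) is represented as a
# Python predicate over a truth assignment v, where v[j - 1] is the truth value
# of claim j (so a(i) should only inspect v[0] .. v[i - 2]).

TRUE = lambda v: True  # object-level claims induce the trivial formula "True"

# The seven-claim sleep argument from the post, as [a(1), ..., a(7)].
sleep_argument = [
    TRUE,                                # (1) "I slept a lot last night."
    TRUE,                                # (2) "I'm well-rested."
    lambda v: (not v[0]) or v[1],        # (3) asserts (1) implies (2)
    TRUE,                                # (4) "I'm often randomly tired after good sleep."
    lambda v: (not v[3]) or (not v[2]),  # (5) asserts (4) implies not (3)
    TRUE,                                # (6) "I feel well-rested."
    lambda v: (not v[5]) or v[1],        # (7) asserts (6) implies (2)
]

def phi(argument, v):
    """Evaluate phi(a) at assignment v: the conjunction over i of (i implies a(i))."""
    return all((not v[i]) or f(v) for i, f in enumerate(argument))

# Count how many of the 2^7 assignments are consistent with the argument's structure.
satisfying = [v for v in product([True, False], repeat=7) if phi(sleep_argument, v)]
print(len(satisfying))
```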

For some of the questions I listed in the motivation section, this framework is enough. We could talk about probability measures over arguments, we could try to turn those arguments into graphs by drawing edges between vertices in $[n]$ if a vertex is a variable in another vertex's formula and then look at these as random graphs, etcetera. However, for some other questions I listed in the motivation section (such as predicting how ideologies shift over time), we need some way of describing how arguments grow. Furthermore, if we specify a random growth process, then that random growth process will induce a probability measure on arguments of size $n$: we start with the empty argument and grow from there until we reach size $n$. This process very roughly reflects how arguments grow in practice, so it's reasonable to expect that a realistic probability measure over arguments will be induced by such a process. Also, I suspect it will be easier to write down simple argument growth processes with somewhat realistic behavior than to write down a probability measure on arguments directly.
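
As a small illustration of the graph construction just mentioned, here is a sketch (a hypothetical helper of my own, not anything from the post's tooling) that summarizes each claim by the set of earlier claims its formula mentions and emits the corresponding directed edges:

```python
# A minimal sketch of turning an argument into a directed graph: summarize claim i
# by the set of earlier claim indices its formula a(i) mentions, and draw an edge
# j -> i whenever claim i's formula refers to claim j.

def argument_edges(referenced):
    """referenced[i - 1] = set of (1-indexed) earlier claims mentioned by claim i."""
    return [(j, i + 1) for i, refs in enumerate(referenced) for j in sorted(refs)]

# The sleep argument: claims 3, 5, 7 reference earlier claims; the rest are object-level.
refs = [set(), set(), {1, 2}, set(), {3, 4}, set(), {2, 6}]
print(argument_edges(refs))
# [(1, 3), (2, 3), (3, 5), (4, 5), (2, 7), (6, 7)]
```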

Let's define an argument growth process $g_n$ for arguments of size $n$ to be a function $g_n : A_n \times F_n \to [0, 1]$ such that $\sum_{f \in F_n} g_n(a, f) = 1$ for any $a \in A_n$, i.e. $g_n(a, \cdot)$ specifies a probability mass function over $F_n$ (up to logical equivalence $F_n$ is finite, so without loss of generality we can focus on probability mass functions instead of probability measures). Intuitively, given an existing argument $a \in A_n$, $g_n(a, \cdot)$ gives us a probability mass function on the next claim that will be added to $a$. We can then define an argument growth process to be a sequence $(g_n)_{n \geq 0}$ of argument growth processes $g_n$ for arguments of size $n$ for each $n \geq 0$. In other words, an argument growth process specifies how to grow a size-0 argument into a size-1 argument, how to grow a size-1 argument into a size-2 argument, and so on. Formally, if we let $G_n$ denote the set of argument growth processes for arguments of size $n$, then an argument growth process is an element of $\prod_{n \geq 0} G_n$.
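
Here is a toy sketch of an argument growth process (the specific process is invented purely for illustration and is certainly not a realistic model); rather than writing out the mass function $g_n$ over $F_n$ explicitly, it just samples the next claim's formula:

```python
import random

# A toy argument growth process (purely illustrative): at each step the new claim
# is object-level (formula "True") with some probability, and otherwise asserts a
# random implication between two previously introduced claims. Sampling from g_n
# directly stands in for writing out the mass function over F_n.

def sample_next_formula(argument, p_object_level=0.5, rng=random):
    n = len(argument)
    if n < 2 or rng.random() < p_object_level:
        return lambda v: True
    j, k = rng.sample(range(n), 2)  # 0-based indices of two earlier claims
    return lambda v, j=j, k=k: (not v[j]) or v[k]  # asserts (j+1) implies (k+1)

def grow_argument(size, rng=random):
    """Grow an argument from the empty argument up to the given size."""
    argument = []
    for _ in range(size):
        argument.append(sample_next_formula(argument, rng=rng))
    return argument

random_argument = grow_argument(10)
print(len(random_argument))  # 10
```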

So far so good! We have some kind of language for talking about arguments and how they evolve over time. However, right now there's nothing linking these arguments to the beliefs arguers have about the claims being argued over. For reasoning about how beliefs evolve over time as a result of the progression of an argument, let's define some more objects.


Ideology Weighting and Growth-Consistency


Okay, to answer some of the questions I listed in the motivation section we need to be able to look at an argument and identify its roughly-self-consistent ideologies, the correlations between its claims, etcetera. The most natural way to represent this information is by putting a probability measure on true/false assignments to the claims. Of course, we shouldn't interpret this probability measure as an actual statement about the probabilities of the claims described happening in the real world. Rather, this is some more abstract evaluation of the structure of the argument itself.

Suppose we have an argument $a \in A_n$ for some $n$. Let's define an ideology on $a$ to be a true/false assignment to the variables $1$ through $n$, i.e. an element of $\{T, F\}^n$. Let's define an ideology mass function $w_n$ on arguments of size $n$, then, to simply be a function $w_n : A_n \times \{T, F\}^n \to [0, 1]$ such that $\sum_{v \in \{T, F\}^n} w_n(a, v) = 1$ for all $a \in A_n$, i.e. $w_n(a, \cdot)$ is a probability mass function on $\{T, F\}^n$. An ideology weighting $w$ is then a sequence of ideology mass functions $w_n$ on arguments of size $n$ for every $n \geq 0$.

Okay, but this is a little stupid: we haven't forced the ideology weighting to interact with the argument in any way! Let's say that an ideology weighting $w$ is "dismissive" if for every $n$, for every $a \in A_n$ and $v \in \{T, F\}^n$, if the propositional formula $\varphi(a)$ is false under the true/false assignment $v$, then $w_n(a, v) = 0$. In other words, $w$ dismisses true/false assignments that are impossible as per our formula $\varphi(a)$.

We can now define the uniform dismissive ideology weighting: it assigns equal mass to every valid true/false assignment. For example, if we have the argument "(1): I am sleepy, (2): I am making typos, (3): (1) implies (2)" parsed as $a(1) = \text{True}, a(2) = \text{True}, a(3) = (1 \implies 2)$, then the uniform dismissive ideology weighting's third mass function would assign any true/false assignment except $(T, F, T)$ a mass of $\frac{1}{7}$ and it would assign $(T, F, T)$ a mass of $0$. The uniform dismissive ideology weighting is what I used when I tried to create a computational argument mapper in real life (see below).
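
Here is a minimal sketch of the uniform dismissive ideology mass function for the sleepy/typos example (the function names are my own, hypothetical choices); it enumerates all true/false assignments, zeroes out the ones that violate $\varphi(a)$, and spreads the mass uniformly over the rest:

```python
from itertools import product
from fractions import Fraction

def phi(argument, v):
    # phi(a) at assignment v: the conjunction over i of (variable i implies a(i)).
    return all((not v[i]) or f(v) for i, f in enumerate(argument))

def uniform_dismissive_mass(argument):
    """Uniform dismissive ideology mass function: equal mass on every assignment
    satisfying phi(a), zero mass on assignments that violate it."""
    assignments = list(product([True, False], repeat=len(argument)))
    valid = {v for v in assignments if phi(argument, v)}
    share = Fraction(1, len(valid))
    return {v: (share if v in valid else Fraction(0)) for v in assignments}

# "(1): I am sleepy, (2): I am making typos, (3): (1) implies (2)"
sleepy = [lambda v: True, lambda v: True, lambda v: (not v[0]) or v[1]]
mass = uniform_dismissive_mass(sleepy)
print(mass[(True, False, True)])   # 0    -- the one assignment ruled out by claim (3)
print(mass[(True, False, False)])  # 1/7  -- every other assignment gets 1/7
```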

There's a bit of an inconsistency when we try to interpret these ideology mass functions as probabilities. If we've specified a growth process that turns an argument in $A_n$ into an argument in $A_{n+1}$ and if we've specified an ideology mass function on arguments of size $n+1$, then the law of total probability basically forces our choice of ideology mass function on arguments of size $n$. For example, suppose we consider the growth process for arguments of size $2$ which creates the propositional formula $1 \implies 2$ with probability $1$ and everything else with probability $0$. Let's say we start with the argument $a(1) = \text{True}, a(2) = \text{True}$. Under the uniform dismissive ideology mass function for arguments of size 2, the weight on the assignment $(T, F)$ is $\frac{1}{4}$. However, when we grow our argument by adding the $1 \implies 2$ claim, the uniform dismissive ideology mass function on arguments of size 3 gives the $(T, F, T)$ assignment a weight of $0$ and the $(T, F, F)$ assignment a weight of $\frac{1}{7}$, so the overall probability of assigning $T$ to 1 and $F$ to 2 should have been $\frac{1}{7}$ all along.
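
A quick numerical check of the discrepancy described above (a sketch that repeats the helpers from the previous snippet so it runs on its own):

```python
from itertools import product
from fractions import Fraction

def phi(argument, v):
    return all((not v[i]) or f(v) for i, f in enumerate(argument))

def uniform_dismissive_mass(argument):
    assignments = list(product([True, False], repeat=len(argument)))
    valid = {v for v in assignments if phi(argument, v)}
    share = Fraction(1, len(valid))
    return {v: (share if v in valid else Fraction(0)) for v in assignments}

# Size-2 argument: two object-level claims.
small = [lambda v: True, lambda v: True]
# Grow it by adding the claim asserting (1) implies (2)
# (the growth process adds this formula with probability 1).
grown = small + [lambda v: (not v[0]) or v[1]]

before = uniform_dismissive_mass(small)[(True, False)]
after = sum(uniform_dismissive_mass(grown)[(True, False, b)] for b in (True, False))
print(before, after)  # 1/4 1/7 -- the law of total probability is violated
```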

To rid ourselves of this inconsistency, let's define an ideology weighting $w$ to be "growth-consistent" with respect to some argument growth process $(g_n)_{n \geq 0}$ if it satisfies the law of total probability: for all $n$, for all $a \in A_n$, for all $v \in \{T, F\}^n$, we have $$w_n(a, v) = \sum_{f \in F_n} g_n(a, f) \left( w_{n+1}(a_f, (v, T)) + w_{n+1}(a_f, (v, F)) \right)$$ where $a_f \in A_{n+1}$ is defined by $a_f(i) = a(i)$ for $i \in [n]$ and $a_f(n+1) = f$ (basically $a_f$ is the fixed argument $a$ appended with the potential next claim $f$).

One of the questions we can ask at this point is whether there is a natural way of thinking about a "closest-to-uniform" dismissive growth-consistent ideology weighting. In some sense, this should be a way of extracting a sensible ideology weighting from a specified growth process while assuming as little as possible. I have some thoughts on this but it's unclear to me if any of the methods that come to mind are particularly fundamental.

I could go on defining more properties of interest (and I'm happy to make a follow-up post if people like this stuff), but I think I should address the elephant in the room: how on Earth to actually construct a reasonable argument growth model.


An Invitation to the Argument Mapping Community

This theory needs at least two more things to make any useful predictions: (1) a lot of arguments drawn from the wild, and (2) some way of statistically analyzing them to build a realistic argument growth model. These things can be done in parallel, but (1) seems like the bottleneck to me. I can imagine two ways of collecting this data: either you can go online and find arguments in the wild and try to parse them or you can collect arguments that are already in the specific parsed format you want. I'm not sure which approach is better, but the second approach allows me to rope in something else that I think could benefit from thinking about random arguments: argument mapping programs.

Although I highlighted at the start of this post how understanding random argumentation could help predict how arguments behave, I also think that it could help people involved in arguments understand better what's going on. I think it could be useful to have a program that can track high-probability/self-consistent ideologies or point out subtle correlations in arguments or highlight which potential connections could most help or hurt some given ideology. For a long time I've wanted to build an argument mapper that could provide users with these kinds of computational insights. I made logicgraph.dev to try to create something like what I imagine, but to be frank I don't have the logistical capacity to develop my own argument mapper as a side project. I'm also aware that a lot of other people are trying to create argument mapping programs, so I thought it would be a good idea to create some kind of forum for argument mapping developers/enthusiasts to talk to each other and possibly to talk about applications to random argumentation. I've tentatively created a Discord server (invite link https://discord.gg/2DuBbmEQ5X), but I'm open to other platform suggestions. I think that a community centered around argument mapping could have a very mutualistic relationship with random argumentation theory, and from my experience at EAG I get the impression that this community is basically asking to exist.

 

P.S. I played around a bit with argumentation theory under Kevin Zollman in CMU's philosophy department. Plenty of insights may have been his; all misconceptions/omissions are very deeply my own.

  1. ^

    I haven't seen anyone reference this model, but it's simple enough that I wouldn't be surprised if someone else has already written it down. If this post is old news then please let me know!

  2. ^

    This idea reminds me of the paper "Formalizing the presumption of independence".

  3. ^

    I'm also attracted to the simplicity of ignoring these internal weights. It reminds me of how topology does away with the real-number distances of metric space analysis while still preserving a lot of fundamental structure. That said, I'd be very interested in extending this model to account for different weights on the statements in question if anyone can find a practical way to do so.

  4. ^

    Let's work with propositional logic since we only have finitely many claims in play. I don't see a need to go first-order.

  5. ^

    I considered allowing $a(i) \in F_n$ but decided against it -- it feels weird to allow a claim to say something about another claim that hasn't been introduced yet.

  6. ^

    It's valid to ask why we index the claims in our argument in this linear fashion. Indeed, it's hard to model the kind of community splitting and rejoining behavior I referenced in the motivation section using this framework -- if an argument is being developed in two directions at once then which claim gets listed first shouldn't be allowed to matter. You can generalize this definition to an arbitrary directed acyclic graph in a natural way. I can write more about that in a follow-up post if people are interested.

4 comments

Comments sorted by top scores.

comment by ProgramCrafter (programcrafter) · 2023-09-10T05:44:47.069Z · LW(p) · GW(p)

Welcome to LessWrong! Have you also read about Bayes' Theorem [LW · GW]?

  1. $1 \implies \text{True}$ is not right since it's trivially true (false statement can imply anything and true statement always implies True); were you talking about $\text{True} \implies 1$?
  2. > how different do we expect the conclusions of those developments to be compared to the moral frameworks we have today?
    Rationalists usually can't coherently expect that the beliefs of another rational system will be different in a pre-known way, since that's a reason to update one's own beliefs. See also: the third virtue of rationality, lightness [? · GW].

It seems that your argument models will require a way of updating weights at one of the next steps, so I'd recommend you read the Sequences [? · GW].

Replies from: lunatic_at_large
comment by lunatic_at_large · 2023-09-10T15:45:57.084Z · LW(p) · GW(p)

Hey, thanks for the response! Yes, I've also read about Bayes' Theorem. However, I'm unconvinced that it is applicable in all the circumstances that I care about. For example, suppose I'm interested in the question "Should I kill lanternflies whenever I can?" That's not really an objective question about the universe that you could, for example, put on a prediction market. There doesn't exist a natural function from (states of the universe) to (answers to that question). There’s interpretation involved. Let’s even say that we get some new evidence (my post wasn’t really centered on that context, but still). Suppose I see the news headline "Arkansas Department of Stuff says that you should kill lanternflies whenever you can." How am I supposed to apply Bayes’ rule in this context? How do I estimate P(I should kill lanternflies whenever I can | Arkansas Department of Stuff says I should kill lanternflies whenever I can)? It would be nice to be able to dismiss these kinds of questions as ill-posed, but in practice I spend a sizeable fraction of my time thinking about them. Am I incorrect here? Is Bayes’ theorem more powerful than I’m realizing?

 

(1) Yeah, I'm intentionally inserting a requirement that's trivially true. Some claims will make object-level statements that don’t directly impose restrictions on other claims. Since these object-level claims aren’t directly responsible for putting restrictions on the structure of the argument, they induce trivial clauses in the formula. 

(2) Absolutely, you can’t provide concrete predictions on how beliefs will evolve over time. But I think you can still reason statistically. For example, I think it’s valid to ask “You put ten philosophers in a room and ask them whether God exists. At the start, you present them with five questions related to the existence of God and ask them to assign probabilities to combinations of answers to these questions. After seven years, you let the philosophers out and again ask them to assign probabilities to combinations of answers. What is the expected value of the shift (say, the KL divergence) between the original probabilities and the final probabilities?” I obviously cannot hope to predict which direction the beliefs will evolve, but the degree to which we expect them to evolve seems more doable. Even if we’ve updated so that our current probabilities equal the expected value of our future probabilities, we can still ask about the variance of our future probabilities. Is that correct or am I misunderstanding something?

Thanks again, by the way!

Replies from: robert-miles
comment by Robert Miles (robert-miles) · 2023-09-10T16:32:59.267Z · LW(p) · GW(p)

One way of framing the difficulty with the lanternflies thing is that the question straddles the is-ought gap. It decomposes pretty cleanly into two questions: "What states of the universe are likely to result from me killing vs not killing lanternflies" (about which Bayes Rule fully applies and is enormously useful), and "Which states of the universe do I prefer?", where the only evidence you have will come from things like introspection about your own moral intuitions and values. Your values are also a fact about the universe, because you are part of the universe, so Bayes still applies I guess, but it's quite a different question to think about.
If you have well-defined values, for example some function from states (or histories) of the universe to real numbers, such that larger numbers represent universe states that you would always prefer over smaller numbers, then every "should I do X or Y" question has an answer in terms of those values. In practice we'll never have that, but still it's worth thinking separately about "What are the expected consequences of the proposed policy?" and "What consequences do I want?", which a 'should' question implicitly mixes together.

Replies from: lunatic_at_large
comment by lunatic_at_large · 2023-09-10T20:13:29.464Z · LW(p) · GW(p)

You raise an excellent point! In hindsight I’m realizing that I should have chosen a different example, but I’ll stick with it for now. Yes, I agree that “What states of the universe are likely to result from me killing vs not killing lanternflies” and “Which states of the universe do I prefer?” are both questions grounded in the state of the universe where Bayes’ rule applies very well. However, I feel like there’s a third question floating around in the background: “Which states of the universe ‘should’ I prefer?” Based on my inner experiences, I feel that I can change my values at will. I specifically remember a moment after high school when I first formalized an objective function over states of the world, and this was a conscious thing I had to do. It didn’t come by default. You could argue that the question “Which states of the universe would I decide I should prefer after thinking about it for 10 years” is a question that’s grounded in the state of the universe so that Bayes’ Rule makes sense. However, trying to answer this question basically reduces to thinking about my values for 10 years; I don’t know of a way to short circuit that computation. I’m reminded of the problem about how an agent can reason about a world that it’s embedded inside where its thought processes could change the answers it seeks. 

If I may propose another example and take this conversation to the meta-level, consider the question “Can Bayes’ Rule alone answer the question ‘Should I kill lanternflies?’?” When I think about this meta-question, I think you need a little more than just Bayes’ Rule to reason. You could start by trying to estimate P(Bayes Rule alone solves the lanternfly question), P(Bayes Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions), etc. The problem is that I don’t see how to ground these probabilities in the real world. How can you go outside and collect data and arrive at the conclusion “P(Bayes Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions) = 0.734”?

In fact, that’s basically the issue that my post is trying to address! I love Bayes’ rule! I love it so much that the punchline of my post, the dismissive growth-consistent ideology weighting, is my attempt to throw probability theory at abstract arguments that really didn’t ask for probability theory to be thrown at them. “Growth-consistency” is a fancy word I made up that basically means “you can apply probability theory (including Bayes’ Rule) in the way you expect.” I want to be able to reason with probability theory in places where we don’t get “real probabilities” inherited from the world around us.