What does the word "collaborative" mean in the phrase "collaborative truthseeking"?

post by Zack_M_Davis · 2019-06-26T05:26:42.295Z · LW · GW · 6 comments

This is a question post.


I keep hearing this phrase, "collaborative truthseeking." Question: what kind of epistemic work is the word "collaborative" doing?

Like, when you (respectively I) say a thing and I (respectively you) hear it, that's going to result in some kind of state change in my (respectively your) brain. If that state change results in me (respectively you) making better predictions [LW · GW] than I (respectively you) would have in the absence of the speech, then that's evidence for the hypothesis that at least one of us is "truthseeking."

But what's this "collaborative" thing about? How do speech-induced state changes result in better predictions if the speaker and listener are "collaborative" with each other? Are there any circumstances in which the speaker and listener being "collaborative" might result in worse predictions?

Answers

answer by Viliam · 2019-06-26T21:19:00.865Z · LW(p) · GW(p)

Assumption: Most people are not truthseeking.

Therefore, a rational truthseeking person's priors would still be that the person they are debating with is optimizing for something else, such as creating an alliance, or competing for status.

Collaborative truthseeking would then be what happens when all participants trust each other to care about truth: not only does each of them care about truth privately, but this value is also common knowledge.

If I believe that the other person genuinely cares about truth, then I will take their arguments more seriously, and if I am surprised, I will be more likely to ask for more info.

answer by Vaniver · 2019-06-26T18:23:26.108Z · LW(p) · GW(p)

If "collaborative" is qualifying truth-seeking, perhaps we can see it more easily by contrast with non-collaborative truthseeking. So what might that look like?

  • I might simply be optimizing for the accuracy of my beliefs, instead of whether or not you also discover the truth.
  • I might be optimizing competitively, where my beliefs are simply judged on whether they're better than yours.
  • I might be primarily concerned about learning from the environment or from myself as opposed to learning from you.
  • I might be following only my interests, instead of joint interests.
  • I might be behaving in a way that doesn't incentivize you to point out things useful to me, that discards clues you provide, or that fails to provide you clues.

This suggests collaborative truthseeking is done 1) for the benefit of both parties, 2) in a way that builds trust and mutual understanding, and 3) in a way that uses that trust and mutual understanding as a foundation.

There's another relevant contrast, where we could look at collaborative non-truthseeking, or contrast "collaborative truthseeking" as a procedure with other procedures that could be used (like "allocating blame"), but this one seems most related to what you're driving at.

answer by gjm · 2019-06-26T13:15:58.554Z · LW(p) · GW(p)

I share Richard Kennaway's feeling that this is a rather strange question because the answer seems so obvious; perhaps I'm missing something important. But:

"Collaborative" just means "working together". Collaborative truthseeking means multiple people working together in order to distinguish truth from error. They might do this for a number of reasons, such as these:

  • They have different skills that mesh together to let them do jointly what they could not do so well separately.
  • The particular truths they're after require a lot of effort to pin down, and having more people working on that can get it done quicker.
  • They know different things; perhaps the truth in question can be deduced by putting together multiple people's knowledge.
  • There are economies of scale; e.g., a group of people could get together and buy a bunch of books or a fast computer or a subscription to some information source, which is almost as useful to each of them as if they'd paid its full price on their own.
  • There are things they can do together that nudge their brains into working more effectively (e.g., maybe adversarial debate gets each person to dig deeper for arguments in a particular direction than they would have done without the impetus to compete and win).

There is a sense in which collaborative truth-seeking is built out of individual truth-seeking. It just happens that sometimes the most effective way for an individual to find what's true in a particular area involves working together with other individuals who also want to do that.

Collaborative truth-seeking may involve activities that individual truth-seeking (at least if that's interpreted rather strictly) doesn't because they fundamentally require multiple people, such as adversarial debate or double-cruxing.

Being "collaborative" isn't a thing that in itself brings benefits. It's a name for a variety of things people do that bring benefits. Speech-induced state changes don't result in better predictions because they're "collaborative"; engaging in the sort of speech whose induced state changes seem likely to result in better predictions is collaboration.

And yes, there are circumstances in which collaboration could be counterproductive. E.g., it might be easier to fall into groupthink. Sufficiently smart collaboration might be able to avoid this by explicitly pushing the participants to explore more diverse positions, but empirically it doesn't look as if that usually happens.

Related: collaborative money-seeking, where people join together to form a "company" or "business" that pools their work in order to produce goods or services that they can sell for profit, more effectively than they could if not working together. Collaborative sex-seeking, where people join together to form a "marriage" or "relationship" or "orgy" from which they can derive more pleasure than they could individually. Collaborative good-doing, where people join together to form a "charity" which helps other people more effectively than the individuals could do it on their own. Etc.

(Of course businesses, marriages, charities, etc., may have other purposes besides the ones listed above, and often do; so might groups of people getting together to seek the truth.)

answer by Elo · 2019-06-26T18:24:49.771Z · LW(p) · GW(p)

There are two cultures in this particular trade-off: collaborative and adversarial.

I pitch collaborative as, "let's work together to find the answer (truth)" and I pitch adversarial as, "let's work against each other to find the answer (truth)".

Internally, the stance is different. For collaborative, it might look something like, "I need to consider the other argument and then offer my alternative view." For adversarial, it might look something like, "I need to advocate harder for my view because I'm right." (Not quite a balanced description.)

Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".

Culturally 99% of either is fine as long as all parties agree on the culture and act like it. The two cultures do include each other, at least partially.

Bad collaboration is not being willing to question the other's position; bad adversarial is not being willing to question one's own position, and blindly advocating for it.

I see adversarial conversations as going downhill in quality faster, because it's harder to keep a healthy separation between "you are wrong" and "and you should feel bad (or dumb) about it", or "only an idiot would have an idea like that".

In a collaborative process, the other person is not an idiot, because there's an assumption that we are working together. If an adversarial process cuts to the depth of beliefs about our interlocutor, then from my perspective it gets un-pretty very quickly. Skilled scientists, though, use both all the time and keep a clean separation between the person and the idea.

In an adversarial environment, I've known some brains to take the feedback "you are wrong because x" and translate it to "I am bad, I should give up, I failed", not "I should advocate for my idea better".

At the end of an adversarial argument there is a very strong flip, Popperian style: "I guess I am wrong, so I take your side."

The end of a collaborative process is when I find myself taking sides; up until that point, it's not always clear what my position is. And even at the end of a collaborative process I might be internally resting on the best outcome of the collaboration so far, but tomorrow that might change.

In each step of collaboration, I see the possibility of being comfortable saying, "thank you for adding something here." In adversarial cultures, I find it harder, with more friction, to say so.

I advocate for collaboration over adversarial culture because of the bleed through from epistemics to inherent interpersonal beliefs. Humans are not perfect arguers or it would not matter so much. Because we are playing with brains, mixing the territory of belief with interpersonal relationships, I prefer collaborative to adversarial, though I could see a counterargument that emphasised the value of the opposite position.

I can also see that it doesn't matter which culture one is in, so long as there is clarity around it being one and not the other.

comment by Zack_M_Davis · 2019-06-27T03:03:37.732Z · LW(p) · GW(p)

Collaborative: "I don't know if that's true, what about x" Adversarial "you're wrong because of x".

Culturally 99% of either is fine as long as all parties agree on the culture and act like it.

Okay, but those mean different things. "I don't know if that's true, what about x" is expressing uncertainty about one's interlocutor's claim, and entreating them to consider x as an alternative. "You're wrong because of x" is a denial of one's interlocutor's claim for a specific reason.

I find myself needing to say both of these things, but in different situations, each of which probably occurs more than 1% of the time. This would seem to contradict the claim that 99% of either is fine!

A culture that expects me to refrain from saying "You're wrong because of x" even if someone is in fact wrong because of x (because telling the truth about this wouldn't be "collaborative") is trying to decrease the expressive power of language and is unworthy of the "rationalist" brand name.

I advocate for collaboration over adversarial culture because of the bleed through from epistemics to inherent interpersonal beliefs.

I advocate for a culture that discourages bleed-through from epistemics to inherent interpersonal beliefs, except to whatever limited extent such bleed-through is epistemically justified.

"You're wrong about this" and "You are stupid and bad" are distinct propositions. It is not only totally possible, but in fact ubiquitously common, for the former to be true but the latter to be false! They're not statistically independent—if Kevin is wrong about everything all the time, that does raise my subjective probability that Kevin is stupid and bad. But I claim that any one particular instance of someone being wrong is only a very small amount of evidence about that person's degree of stupidity or badness! It is for this reason it is written that you should Update Yourself Incrementally [LW · GW]!

Humans are not perfect arguers or it would not matter so much.

I agree that humans are not perfect arguers! However, I remember reading a bunch of really great blog posts back in the late 'aughts articulating a sense that it should be possible [LW · GW] for humans to become better arguers! I wonder whatever happened to that website!

Replies from: dxu
comment by dxu · 2019-06-27T20:50:55.918Z · LW(p) · GW(p)
if Kevin is wrong about everything all the time, that does raise my subjective probability that Kevin is stupid and bad.

This is largely tangential to your point (with which I agree), but I think it's worth pointing out that if Kevin really manages to be wrong about everything, you'd be able to get the right answer just by taking his conclusions and inverting them--meaning whatever cognitive processes he's using to get the wrong answer 100% of the time must actually be quite intelligent. [LW · GW]

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-06-28T05:30:48.918Z · LW(p) · GW(p)

if Kevin really manages to be wrong about everything, you'd be able to get the right answer just by taking his conclusions and inverting them

That only works for true-or-false questions. In larger answer spaces, he'd need to be wrong in some specific way such that there exists some simple algorithm (the analogue of "inverting") to compute the right answers from those wrong ones.
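
A minimal illustration of the point (the options and answers here are invented):

```python
# Inverting a perfectly wrong predictor works when there are two options...
def invert_binary(kevin_says: bool) -> bool:
    # Kevin is wrong 100% of the time, so the negation is right 100% of the time.
    return not kevin_says

# ...but with more options, knowing Kevin's answer is wrong only rules one out.
options = ["A", "B", "C", "D"]
kevin_says = "B"                                    # guaranteed wrong
remaining = [o for o in options if o != kevin_says]
print(remaining)  # ['A', 'C', 'D'] -- still three candidates left
```

Unless his errors follow some learnable pattern, his anti-track-record narrows a four-option question by only one option.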

comment by gjm · 2019-06-26T21:44:05.123Z · LW(p) · GW(p)

If multiple parties engage in adversarial interactions (e.g., debate, criminal trial, ...) with the shared goal of arriving at the truth then as far as I'm concerned that's still an instance of collaborative truth-seeking.

On the other hand, if at least one party is aiming to win rather than to arrive at the truth then I don't think they're engaging in truth-seeking at all. (Though maybe it might sometimes be effective to have a bunch of adversaries all just trying to win, and then some other people, who had better be extremely smart and aware of how they might be being manipulated, trying to combine what they hear from those adversaries in order to get to the truth. Hard to do well, though, I think.)

Replies from: Raemon
comment by Raemon · 2019-06-27T04:52:16.512Z · LW(p) · GW(p)

The reason this question comes up in the first place is because there's multiple conversation and debate styles that have different properties, and you need some kind of name to distinguish them. Naming things is hard, and I'm not attached to any particular name.

The thing I currently call "Adversarial Collaboration" is where two people are actively working together, in a process that is adversarial, but where they have some kind of shared good faith that if each of them represents their respective viewpoint well, the truth will emerge.

A different thing, which I'd currently call "Adversarial Truthseeking", is like the first one, but where there's not as much of a shared framework for whether and how the process is supposed to produce net truth. Two people meet in the wild, each thinks the other is wrong, and they argue.

What I currently call "Collaborative Truthseeking" typically makes sense when two people are building a product together on a team. It's not very useful to say "you're wrong because X", because the goal is not to prove ideas wrong, it's to build a product. "You're wrong because X, but Y might work instead" is more useful, because it actually moves you closer to a working model. It can also do a fairly complex thing of reaffirming trust, such that people remain truthseeking rather than trying to win.

And yes, each of these can be "collaborative" in some sense, but you need some kind of word for the difference.

(There are also things where you're doing something that looks collaborative but isn't truthseeking, and something that looks adversarial but isn't truthseeking.)

And each of those tend to involve fairly different mental states, which facilitate different mental motions. Adversarial truthseeking seems most likely (to me) to result in people treating arguments as soldiers and other political failure modes.

Abram's hierarchy of conversational styles [LW · GW] is a somewhat different lens on the whole thing that I also mostly endorse.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-06-27T07:46:29.492Z · LW(p) · GW(p)

What I currently call “Collaborative Truthseeking” typically makes sense when two people are building a product together on a team. It’s not very useful to say “you’re wrong because X”, because the goal is not to prove ideas wrong, it’s to build a product. “You’re wrong because X, but Y might work instead” is more useful, because it actually moves you closer to a working model. It can also do a fairly complex thing of reaffirming trust, such that people remain truthseeking rather than trying to win.

What if we’re building a product together, and I think you’re wrong about something, but I don’t know what might work instead? What should I say to you?

(See, e.g., this exchange [LW(p) · GW(p)], and pretend that cousin_it and I were members of a project team, building some sort of web app or forum software together.)

Replies from: Raemon
comment by Raemon · 2019-06-27T16:07:44.556Z · LW(p) · GW(p)

There are a couple of significant aspects of that exchange that make it look more collaborative than adversarial to me.

Copying the text here for reference:

Isn't allowing <object> an invitation for XSS?
Also, at least for me, click-to-enlarge images and slide galleries weren't essential to enjoying your post.

The first sentence could have been worded "Allowing <object> is an invitation to XSS." This would have (to me) come across as a bit harsher. The "Isn't?" framing gives it more of a sense of "hey, you know this, right?". It signifies that the relationship is between two people who reasonably know what they're doing, whereas the other phrasing would have communicated an undertone of "you're wrong and should have known better and I know better than you." (How strong the undertone is depends on the existing relationship; in this case I think it would probably have been relatively weak.)

Moreover, the second sentence actually just fits the collaborative frame as I specified it: cousin_it specifically says "the product didn't need the features that required <object>", therefore there's no more work to be done. And meanwhile says "I enjoyed your post", which indicates that they generally like what you did. All of this helps reinforce "hey, we're on the same side, building a thing together."

(I do suspect you could find an example that doesn't meet these criteria but still is a reasonable workplace exchange. I don't think you *never* need to say 'hey, you're wrong here', just that if you're saying it all the time without helping to solve the underlying problems, something is off about your team dynamics.

Probably not going to have time to delve much further into this for now though)

comment by Elo · 2019-06-26T18:25:19.167Z · LW(p) · GW(p)

Should this be its own post?

Replies from: Raemon, Dagon
comment by Raemon · 2019-06-26T20:36:34.525Z · LW(p) · GW(p)

This feels sort of on the edge of "is useful outside of the current discussion." It'd be fine to write up as its own post, but my current feeling is that it's accomplishing most of its value as an answer to this question.

[This is just my opinion of what feels vaguely right as a user, not intended to be normative.]

comment by Dagon · 2019-06-26T20:24:07.751Z · LW(p) · GW(p)
Should this be its own post?

Yes, because I can only upvote it once if it remains an answer on this question. Also, because it'll be useful to refer to in future discussions.

comment by Raemon · 2019-06-26T20:35:48.958Z · LW(p) · GW(p)

I roughly endorse this description. (I specifically think the "99% of either is fine" is a significant overstatement, but I probably endorse the weaker claim of "both styles can generally work if people are trying to do the same thing")

answer by tailcalled · 2024-01-03T13:23:34.759Z · LW(p) · GW(p)

I'm a big fan of collaborative truth-seeking, so lemme try to explain what distinction I'd be communicating with it:

In an idealized individual world, you would be individually truth-seeking. This would include observing information, and using the information to update your model. You might also talk to others, which for various reasons (e.g. to help your allies, or to fit in with social norms about honesty and information-sharing, or ...) might include telling them (to the best of your ability) the sorts of true, relevant information that you would usually use in your own decision-making.

However, the above scenario runs into some problems, mostly because the true, relevant information that you'd usually use in your own decision-making might be simplified in various ways: for instance, rather than concerning your observations directly, it may concern latent-variable inferences that you've made on the basis of those observations. These latent variables are inferred from your particular capacity to observe, and tailored to your particular capacity to make decisions, so it can be difficult for others to apply them. In particular:

  • You might have phrased it in language that is factually very incorrect but evocative for your purposes (e.g. I remember talking to someone who kept saying gay men have female brains, and then it turned out that what he really meant was that gay men were psychologically feminine in a lot of ways).
  • You might be engaging in mind-projection (e.g. if it is tricky from your perspective, under your constraints, to precisely observe the differences between some entities, then you might just model them as being inherently the same, and when differences do pop up, you might assume them to be stochastic).
  • You might have implicit assumptions that you haven't explicitly stated or precisified, which affect how you interpret things.

There's also the issue that everyone involved might have far less evidence than could be collected if one went out and systematically collected it.

If one simply optimizes one's model for one's own purposes and then dumps the content of the model into collective discourse, the above problems tend to make the discourse get stuck: nobody is ready to deeply change anybody's mind, only to make minor corrections to others who are engaging from basically the same perspective.

These problems don't seem inevitable though. If both parties agree to set a bunch of time aside to dive in deep, they could work to fix it. For instance:

  • By clarifying what purposes one is applying the concepts to, and what one is trying to evoke, one can come up with some better definitions that make information more straightforwardly transferable.
  • By explicitly listing the areas of uncertainty, one can figure out what open questions there are, and what effects independence assumptions have, which can guide further investigations.
  • Similarly, explicating implicit assumptions helps with identifying cruxes, or communicating methods for obtaining new information, or better understanding the meaning of one's claims.
  • When working together, the amount of evidence it is "worth" collecting will often be higher, because it influences more people. Furthermore, the evidence that does get collected can be of higher quality, because it is made robust against more concerns, for more perspectives.

Basically, collaborative truth-seeking involves modifying one's map so that it becomes easier to resolve disputes by collecting likelihood ratios, and then going out to collect those likelihood ratios.
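
As a rough sketch of what "resolving disputes by collecting likelihood ratios" could look like (the numbers are invented): two people who disagree about a hypothesis agree on what a given observation would mean, i.e. on a likelihood ratio, and then both update on the same observation.

```python
# Hedged sketch with invented numbers: two people start with different
# priors but share a likelihood ratio for an agreed-upon observation.
def update(prior_prob: float, likelihood_ratio: float) -> float:
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

shared_lr = 9.0  # P(observation | hypothesis) / P(observation | not hypothesis)
for prior in (0.2, 0.7):            # the two parties start far apart
    print(prior, "->", round(update(prior, shared_lr), 2))
# 0.2 -> 0.69, 0.7 -> 0.95: both move in the same direction once the evidence
# is expressed in a form (a likelihood ratio) that either map can use.
```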

One thing that I've come to think is that a major factor to clarify is one's "perspective" or "Cartesian frame": which sources of information one is getting, and which areas of action one can take within the subject matter. This perspective influences many of the other issues, and is therefore an efficient thing to talk about if one wants to understand them.

answer by Dagon · 2019-06-26T14:34:56.339Z · LW(p) · GW(p)

I don't hear this phrase much, so I suspect it's heavily context-specific in its usage. If I were to use it at work, it'd probably be ironic, as a euphemism for "let me correct your thinking".

I can imagine it being used as a way to explicitly agree that the participants in a discussion are there to each change their minds, or to understand and improve their models, by comparing and exchanging beliefs with each other. Truth-seeking is the intent to change your beliefs, collaborative truth-seeking is the shared intent to change the group members' beliefs.

6 comments

Comments sorted by top scores.

comment by Richard_Kennaway · 2019-06-26T06:24:40.347Z · LW(p) · GW(p)

People coming together to work on a common goal can typically accomplish more than if they worked separately. This is such a familiar thing that I am unclear where your perplexity lies.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-06-26T07:08:55.435Z · LW(p) · GW(p)

What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”? How commonly do said conditions obtain? Are they in effect in all, most, some, or none of the interactions between commenters on Less Wrong?

These are non-trivial questions.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2019-06-26T12:48:38.325Z · LW(p) · GW(p)
What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”?

That people have a common goal, and that they come together to work on it. Ok, I'm being deliberately tautologous there, but these are ordinary English words that we all know the meanings of, put together in plain sentences. I am not seeing what is being asked by your question, or by Zack's. Examples of the phenomenon are everywhere (as are examples of its failure).

As for how to do real work as a group (an expression meaning the same as "coming together to work on a common goal"), and how much of it is going on at any particular place and time, these are non-trivial questions. They have received non-trivial quantities of answers. To consider just LW and the rationalsphere, see for example various criticisms of LessWrong as being no more than a place to idly hang out (a common purpose, but a rather trifling one compared with some people's desires for the place); MIRI; CFAR; FHI; rationalist houses; meetups; and so on. In another sphere, the book "Moral Mazes" (recently discussed here) illustrates some failures of collaboration.

I do not see how the OP gives any entry into these questions, but I look forward to seeing other people's responses to it.

comment by Vaughn Papenhausen (Ikaxas) · 2019-07-02T11:37:07.515Z · LW(p) · GW(p)

[Off topic] Data point: the repeated "(respectively I/you)" at the beginning of the post made that paragraph several times harder to read for me than it otherwise would have been.

comment by Zack_M_Davis · 2019-06-26T05:32:54.766Z · LW(p) · GW(p)

(Publication history note: lightly adapted from a 4 May 2017 Facebook status update. I pulled the text out of the JSON-blob I got from exporting my Facebook data, but I'm not sure how to navigate to the status update itself without the permalink or pressing the Page Down key too many times, so I don't remember whether I got any good answers from my Facebook friends at the time.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2019-06-26T15:26:24.001Z · LW(p) · GW(p)

Here's the link, found from my account by searching for "collaborative truthseeking". There is a "Posts from Anyone/You/Your Friends" radio control on the left of the search page, so it should probably work for your own posts as well.