How to build common knowledge of rationality and honesty?

post by MikkW (mikkel-wilson) · 2021-02-21T06:07:29.478Z · LW · GW · 1 comment

This is a question post.


From my shortform:

If you know someone is rational, honest, and well-read, then you can learn a good bit from the simple fact that they disagree with you.

If you aren't sure someone is rational and honest, their disagreement tells you little.

If you know someone considers you to be rational and honest, the fact that they still disagree with you after hearing what you have to say tells you something.

But if you don't know that they consider you to be rational and honest, their disagreement tells you nothing.

It's valuable to strive for common knowledge of your and your partners' rationality and honesty, to make the most of your disagreements.

This is a restatement of the ideas behind Aumann's Agreement Theorem. On Less Wrong, one can often assume that a writer or commenter is interested in rationality, but that doesn't mean the writer can be assumed to actually be rational.
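For reference, here is a rough formal statement of the theorem. Suppose two agents share a common prior $P$ and receive private information represented by partitions $\mathcal{I}_1$ and $\mathcal{I}_2$. Aumann showed that if their posterior probabilities for an event $E$,

$$q_1 = P(E \mid \mathcal{I}_1), \qquad q_2 = P(E \mid \mathcal{I}_2),$$

are common knowledge between them, then $q_1 = q_2$: rational agents with common priors cannot agree to disagree. The assumptions doing the work (common priors, honest reporting of posteriors, and common knowledge of both) are exactly what this question asks how to establish in practice.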

How can we help build common knowledge of people's rationality and honesty?

Answers

answer by Dagon · 2021-02-21T21:26:53.250Z · LW(p) · GW(p)

I think we should perhaps start with actual truth, then move to knowledge. No human is perfectly rational or honest. The VAST majority of humans have some topics on which they're quite irrational, and some topics (with fair overlap) on which they're quite dishonest.

The current common doubt is pretty close to the territory.  

answer by MikkW · 2021-02-21T06:10:23.537Z · LW(p) · GW(p)

Here's one idea for how to build common knowledge of honesty, from my shortform. I know I got some pushback on it, and I'm too tired right now to address those concerns, but I'll share what I wrote to help get the conversation going:

I've been thinking about ways to signal truth value in speech. In our modern society, we have no way to readily tell when a person is being 100% honest; we have to trust that a communicator is being honest, or else verify for ourselves whether what they are saying is true. And if I want to tell a joke, speak ironically, or communicate things which aren't-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-literally-true. This means that common knowledge of honesty almost never exists, which significantly slows down the positive effects of Aumann's Agreement Theorem.

In language, we speak in different registers: different ways of speaking, depending on the context of the speech. The way a salesman speaks to a potential customer will be distinct from the way he speaks to his pals over a beer; he uses different registers in these different situations. But registers can also communicate information about the speaker's intentions: when a speaker is being ironic, he inflects his voice in a particular way, signaling to his listeners that he shouldn't be taken 100% literally.

Two points come to mind here: first, establishing a register of communication that is reserved for literally true statements; and second, expanding the ability to use registers to communicate not-literally-true intent, particularly in text.

On the first point, a large part of the reason why people speaking in a natural register cannot always be assumed to be saying something literally true is that there is often no external incentive not to lie. Sometimes there are incentives not to lie, but they are often weak, and especially in a society built upon free speech, it is hard to enforce a norm against lying in natural-register speech on a large scale. My mind imagines a protected register of speech, perhaps copyrighted by some organization (consisting of manners of speech distinctive enough to be eligible for copyright), whose owner vows to take action against anybody who speaks not-literally-true statements in that register (i.e., statements conveying a world model that does not reliably reflect the actual state of the world). Anybody would be free, according to a legally enforceable license, to speak whatever literally-true statements they want in that register, but may not speak non-truths in it, on pain of legal action.

If such a register were created and reliably enforced, it would help create a society where people could readily trust strangers saying things they are not otherwise inclined to believe, given that the statements are spoken in the protected register. I think such a society would look different from current society, and would have benefits compared to it. A less strict version could also be implemented by a single platform (perhaps LessWrong?), replacing legal action with the threat of suspension for speaking not-literal-truths in the protected register; I suspect that this too would have a non-zero positive effect. It also has the advantage of being probably cheaper, and on clearer legal ground with respect to speech.
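To make the platform version concrete, here is a minimal sketch of what such an enforcement rule might look like in code. Everything here (the class names, the suspension threshold) is a hypothetical illustration, not an existing LessWrong mechanism:

```python
# Sketch of platform-level enforcement of a "protected register":
# claims posted in the register are reviewable, and verified falsehoods
# cost the author access. All names and numbers here are hypothetical.

from dataclasses import dataclass, field

SUSPENSION_THRESHOLD = 1  # strict: one verified falsehood suspends the account


@dataclass
class ProtectedClaim:
    author: str
    text: str
    verified_false: bool = False  # set by moderators after review


@dataclass
class Moderation:
    violations: dict[str, int] = field(default_factory=dict)
    suspended: set[str] = field(default_factory=set)

    def review(self, claim: ProtectedClaim, is_false: bool) -> None:
        """Moderators judge a protected-register claim; falsehoods count as violations."""
        claim.verified_false = is_false
        if is_false:
            self.violations[claim.author] = self.violations.get(claim.author, 0) + 1
            if self.violations[claim.author] >= SUSPENSION_THRESHOLD:
                self.suspended.add(claim.author)

    def can_post_protected(self, user: str) -> bool:
        """Only unsuspended users keep access to the protected register."""
        return user not in self.suspended


# Usage: a verified falsehood in the register revokes access.
mod = Moderation()
claim = ProtectedClaim(author="alice", text="The sky is green.")
mod.review(claim, is_false=True)
assert not mod.can_post_protected("alice")
```

The threshold is the strictness knob: the legal-action version of the proposal corresponds to making violations maximally costly, while a platform can tune the penalty to whatever keeps the register trustworthy.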

I don't currently have time to get into details on the second point, but I will highlight a few things. Poe's law states that even the most extreme parody can be readily mistaken for a serious position. Whereas spoken language can clearly be inflected to indicate ironic intent, or humor, or perhaps even not-literally-true-but-pointing-to-the-truth, the carriers of this inflection are not replicated in written language; therefore written language, which the internet is largely based upon, lacks the richness of registers that allows extreme-but-serious positions to be clearly distinguished from humor. There are attempts to inflect writing in ways that provide this richness, but as far as I know, there is no widely understood standard that actually accomplishes this. This is worth exploring in the future. Finally, I think it is worthwhile to spend time reflecting on intentionally creating more registers that are explicitly intended to communicate varying levels of seriousness and intent.
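As a toy illustration of the kind of standard the second point gestures at: some internet communities already use trailing "tone indicators" such as /s for sarcasm. A machine-readable convention would let software surface the intended register directly. The tag set below is a made-up example, not a proposed standard:

```python
# Toy parser for declared text registers via trailing tone indicators.
# "/s" is a real internet convention; the rest of the tag set (notably
# "/lit" for a protected literal-truth register) is hypothetical.

TONE_TAGS = {
    "/s": "sarcastic",
    "/j": "joking",
    "/srs": "serious",
    "/lit": "literally true",  # hypothetical protected-register tag
}


def parse_register(message: str) -> tuple[str, str]:
    """Split a message into its content and declared register (default: unmarked)."""
    text, _, tail = message.rpartition(" ")
    if tail in TONE_TAGS:
        return text, TONE_TAGS[tail]
    return message, "unmarked"


print(parse_register("Sure, that plan will definitely work /s"))
# ('Sure, that plan will definitely work', 'sarcastic')
```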

1 comment


comment by Yoav Ravid · 2021-02-22T06:31:20.108Z · LW(p) · GW(p)

Good question! I'm interested to see what answers people come up with.

Related: Problem of Verifying Rationality. (It hasn't been made a tag yet, so I can't just tag the question. Edit: it now has been, and the post is tagged with it.)