Tweet markets for impersonal truth tracking?

post by KatjaGrace · 2020-11-10T07:10:02.380Z

Crossposted from world spirit sock puppet.

Should social media label statements as false, misleading or contested?

Let’s approach it from the perspective of what would make the world best, rather than e.g. what rights the social media companies have as owners of the platforms.

The basic upside is pragmatic: people share all kinds of false things on social media, that leads to badness, and labeling slows it down.

The basic problem with it is that maybe we can’t distinguish worlds where social media companies label false things as false from worlds where they label as false things they don’t like, or things that aren’t endorsed by other ‘official’ entities. So maybe we don’t want such companies to have the job of deciding what is considered true or false, because a) we don’t trust them enough to give them this sacred and highly pressured job forever, or b) we don’t expect everyone to trust them forever, and it would be nice to have better recourse when disagreement appears than ‘but I believe them’.

If there were a way to systematically inhibit or label false content based on its falseness directly, rather than via a person’s judgment, that would be an interesting solution that perhaps everyone reasonable would agree to adopt. If prediction markets were way more ubiquitous, each contentious propositional Tweet could display the market odds for its claim beneath it.

Or what if Twitter itself were a prediction market, trading in Twitter visibility? For just-posted Tweets, instead of liking them, you can bet your own cred on them. Then a while later, they are shown again and people can vote on whether they turned out right, and you win or lose cred accordingly. Then your total cred determines how much visibility your own Tweets get.
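For concreteness, here is a minimal Python sketch of that mechanism. The starting cred, the payout rule, and the visibility formula are all assumptions made for illustration; the post does not specify any of them.

```python
# Rough sketch of the cred-betting mechanism; all quantities are assumed.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    cred: float = 100.0  # assumed starting balance

@dataclass
class Tweet:
    author: str
    text: str
    bets: dict = field(default_factory=dict)  # bettor name -> (stake, believes_true)

def place_bet(tweet: Tweet, user: User, stake: float, believes_true: bool) -> None:
    """Instead of liking a just-posted Tweet, stake some of your own cred on it."""
    stake = min(stake, user.cred)
    user.cred -= stake
    tweet.bets[user.name] = (stake, believes_true)

def resolve(tweet: Tweet, users: dict, turned_out_right: bool) -> None:
    """A while later, voters decide whether the Tweet turned out right;
    correct bettors get their stake back doubled, incorrect bettors lose it."""
    for name, (stake, believes_true) in tweet.bets.items():
        if believes_true == turned_out_right:
            users[name].cred += 2 * stake
    tweet.bets.clear()

def visibility_share(user: User, all_users) -> float:
    """Your share of total cred determines how visible your own Tweets are."""
    total = sum(u.cred for u in all_users)
    return user.cred / total if total else 0.0
```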

It seems like this would address the basic problem above: the platform itself would no longer have the job of deciding what is true.

It would be pretty imperfect, since it throws the gavel to future Twitter users, but perhaps they are an improvement on the status quo, or on the status quo without the social media platforms themselves making judgments.

8 comments

comment by Pazzaz · 2020-11-10T11:07:19.944Z

So instead of a disclaimer saying that a tweet is false, we'll now have a market saying that it will probably be declared false in the future. Then later the tweet would be declared 100% false and the market would close. But I don't see why you would trust the final result any more than the disclaimer. If you don't trust the social media companies, then the prediction market just becomes "what people think social media companies will think", which doesn't solve the problem.

Edit: I missed that future users would vote to decide the true outcome, but my point still stands: the prediction market would become "what people think people on social media will think". I know there has been work on solving this (Augur?) but I haven't read any of it.

Replies from: Viliam
comment by Viliam · 2020-11-10T17:23:36.862Z

Just thinking... what if users were allowed to say "I predict that the company will say it is X, but in my opinion it is actually Y"? Then the system could select the users who predicted correctly, and display their real opinion.

Unfortunately, this probably wouldn't work, because there would be no selection against "predicts correctly, expresses edgy opinion". Also, it is unlikely that media companies would support this kind of mechanism.
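
For concreteness, a hedged sketch of the select-and-display mechanism described above; the data structure and function names are illustrative assumptions.

```python
# Each user records a prediction of the official ruling alongside their
# actual opinion; the system later surfaces the opinions of those who
# predicted the ruling correctly. All structure here is assumed.
from dataclasses import dataclass

@dataclass
class Entry:
    user: str
    predicted_ruling: bool  # "I predict the company will say it is X..."
    actual_opinion: bool    # "...but in my opinion it is actually Y"

def opinions_of_correct_predictors(entries, official_ruling: bool) -> dict:
    """Select users whose prediction matched the ruling; show their real opinion."""
    return {e.user: e.actual_opinion
            for e in entries
            if e.predicted_ruling == official_ruling}
```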

comment by David Althaus (wallowinmaya) · 2020-11-10T11:05:48.100Z

Cool post! Daniel Kokotajlo and I have been exploring somewhat similar ideas.

In a nutshell, our idea was that a major social media company (such as Twitter) could develop a feature that incentivizes forecasting in two ways. First, the feature would automatically suggest questions of interest to the user, e.g., questions thematically related to the user’s current tweet or currently trending issues. Second, users who make more accurate forecasts than the community will be rewarded with increased visibility. 
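
A loose sketch of those two incentives, under assumed details: question matching here is naive word overlap, and the visibility reward is keyed to beating the community's average Brier score. Both heuristics are placeholders, not part of the actual proposal.

```python
def suggest_questions(tweet_text: str, open_questions: list, top_k: int = 3) -> list:
    """Rank open forecasting questions by crude word overlap with the tweet."""
    words = set(tweet_text.lower().split())
    return sorted(open_questions,
                  key=lambda q: len(words & set(q.lower().split())),
                  reverse=True)[:top_k]

def visibility_multiplier(user_brier: float, community_brier: float) -> float:
    """Lower Brier scores are better; beating the community average earns reach."""
    return max(1.0, community_brier / max(user_brier, 1e-9))
```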

Our idea is different in two major ways: 

I.
First, you suggest betting directly on Tweets, whereas we envisioned that people would bet/forecast on questions related to Tweets.

Our approach seems to have some advantages: a single question could relate to many thousands of Tweets, so rather than resolving thousands of Tweets, one would only have to resolve one question. Most Tweets are also very imprecise. In contrast, these questions (and their resolution criteria) could be formulated very precisely, partly because one could spend much more time refining them, since they are far fewer in number. The drawback is that this might feel less "direct" and "fun" in some ways.
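
An illustrative sketch of question-level rather than Tweet-level resolution; the Brier scoring rule is my assumption, chosen only to show that one resolution scores every forecast at once.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str                                      # precise resolution criteria
    tweet_ids: set = field(default_factory=set)    # thousands of Tweets can attach
    forecasts: dict = field(default_factory=dict)  # user -> probability of "yes"

def attach_tweet(question: Question, tweet_id: int) -> None:
    """Link yet another imprecise Tweet to the same precisely worded question."""
    question.tweet_ids.add(tweet_id)

def resolve_question(question: Question, outcome: bool) -> dict:
    """One resolution scores all forecasters at once (Brier: lower is better)."""
    return {user: (p - outcome) ** 2 for user, p in question.forecasts.items()}
```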

II.
Second, contrary to your idea, we had in mind that the questions would be resolved by employees rather than voted on by the public. Our worry is that public voting would dissolve into an easily manipulated popularity contest that might also lead to increased polarization and/or distrust of the whole platform. But it is true that users might not trust employees of Twitter, potentially for good reason!

Maybe one could combine these two ideas: the resolution of questions could be done by a committee or court consisting of employees, members of the public, and perhaps other people who enjoy a high level of trust, such as popular judges or scientists. Members of this committee could even undergo a selection and training process, somewhat similar to that of US juries, which seem to be widely trusted to make reasonable decisions.
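
A hedged sketch of that combined idea: a mixed panel drawn from employees, the public, and independently trusted figures resolves a question by strict majority. The panel size and random-sampling rule are assumed details.

```python
import random

def draw_panel(employees: list, public: list, trusted: list, size: int = 9) -> list:
    """Randomly empanel a mixed committee, loosely analogous to jury selection."""
    pool = employees + public + trusted
    return random.sample(pool, min(size, len(pool)))

def committee_resolve(votes: list) -> bool:
    """Resolve 'true' only if a strict majority of panel votes say so."""
    return sum(votes) > len(votes) / 2
```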

comment by ckai · 2020-11-11T15:42:55.260Z

It seems pretty important to know who gets to vote later on.  Since Twitter is a social network, is it people who are networked in some way, or random Twitter users, or...?  It seems like this could work very differently depending on exactly how it's implemented.  And how it works will influence whether anyone is willing to stake their cred on the results of a vote by this group of people.

What incentivizes anyone to vote at all, or to vote accurately?

comment by ike · 2020-11-10T13:01:44.836Z

Something like 80% of Americans think social media companies are doing about the right amount, or should be doing more, to address misinformation (the exact percentage depends on the category of information).

If Twitter stopped fact checking, it would lose some market share to competitors that do more of it. There are already popular social media networks that don't do much checking, like Reddit. Twitter itself fact-checks fewer topics than Facebook. People can choose what level they're comfortable with, if that's important to them.

Replies from: wunan
comment by wunan · 2020-11-10T16:07:51.696Z

Do you have a source for the 80% figure?

Replies from: ike
comment by ike · 2020-11-10T18:37:37.124Z

https://knightfoundation.org/wp-content/uploads/2020/06/KnightFoundation_Panel6-Techlash2_rprt_061220-v2_es-1.pdf 

It depends on the topic, but look at e.g. Figure 2 on page 6: 81% say never allow election-related misinformation, and 85% say never allow health misinformation.