# Polling Thread

post by Gunnar_Zarncke · 2014-03-01T23:57:53.846Z · LW · GW · Legacy · 24 comments

This is the second installment of the Polling Thread.

This is your chance to ask that multiple-choice question you always wanted to throw in. Get quantified numeric feedback on your comments. Post fun polls.

There are some rules:

- Each poll goes into its own top level comment and may be commented there.
- You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may, of course, vote without posting a poll.
- Your poll should include a 'don't know' option (to avoid conflict with rule 2). I don't know whether we also need a troll-catch option, but we will see.

If you don't know how to make a poll in a comment, look at the Poll Markup Help.

This is not (yet?) a regular thread. If it is successful, I may post again. Or you may. In that case, do the following:

- Use "Polling Thread" in the title.
- Copy the rules.
- Add the tag "poll".
- Link to this Thread or a previous Thread.
- Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
- Add a second top-level comment with an initial poll to start participation.

## 24 comments

Comments sorted by top scores.

## comment by blacktrance · 2014-03-02T04:14:49.228Z · LW(p) · GW(p)

Inspired by the poll below: rate the following LessWrong topics based on whether you'd like to see more or less posted about them here.

- Biases, heuristics, and fallacies, and methods for dealing with them

[pollid:630]

- Improving virtues, such as altruism, mindfulness, etc

[pollid:631]

- Self-improvement: productivity and fighting akrasia

[pollid:632]

- Self-improvement: other topics (luminosity, longevity, etc)

[pollid:633]

- Statistics, probability theory, decision theory, and other mathematical topics

[pollid:634]

- Ethics and metaethics

[pollid:635]

- Rationality applied to social situations and interpersonal relationships (interacting with strangers, romantic relationships, etc)

[pollid:636]

- AI

[pollid:637]

- The future: singularity, transhumanism, mental uploading, cryonics, etc

[pollid:638]

- Meetups

[pollid:639]

- Related organizations (CFAR, MIRI, GiveWell, etc), including fundraisers

[pollid:640]

Replies from: Gunnar_Zarncke, eggman

## ↑ comment by Gunnar_Zarncke · 2014-03-04T22:20:27.634Z · LW(p) · GW(p)

I perceive an inconsistency, or at least a strong imbalance, in the results: almost all topics are voted to get a higher proportion, or at least stay the same, except for meetups (and, to a small degree, related organizations).

If this is correct, there are the following possible explanations:

- To make up for the increase in other topics, meetup notes (despite their volume) would have to be reduced to almost zero.
- Lots of other topics not covered by this poll would have to be reduced (but I do not see how they could be).
- The requested increase in proportion is indeed very slight.
- People can't vote consistently.
- Some combination of the above.
- Something I missed.

## ↑ comment by blacktrance · 2014-03-04T23:10:48.348Z · LW(p) · GW(p)

More likely, I suspect people are reading "I would like this to be a... higher proportion" and interpreting it as "I'd like to see more of this", ignoring the "proportion" part. A more informative poll would ask voters to rank the topics in order of wanting to see them, but LW doesn't support that format.

## ↑ comment by eggman · 2014-03-03T04:43:27.858Z · LW(p) · GW(p)

Disclosure: my votes in the above poll are not anonymous. I want people to be aware of how I voted because my votes reflect my perception of Less Wrong over only the last few months (as of the date of this comment), the period in which I have been checking Less Wrong on a semi-daily basis.

## comment by Scott Garrabrant · 2014-03-03T19:02:05.304Z · LW(p) · GW(p)

You and a friend will go through the following procedure. You are not allowed to communicate beforehand.

1: Your friend names a card from a 52 card deck.

2: You look at the top card of the deck.

3: You tell your friend the suit of the top card of the deck. (You are allowed to lie, but you must name a suit)

4: Your friend bets on a card, and wins if the top card matches the card he bet on. Your friend puts in 1 dollar, and receives 20 dollars if he guesses correctly. If your friend bets on the same card that he named originally, he may double the bet, putting in 2 dollars, and receiving 40 dollars if he guesses correctly. The bet is between your friend and a stranger.

Imagine you are playing this game, and your friend names "seven of diamonds," and the top card is "three of diamonds." When you tell your friend the suit, do you lie? (By lie, I mean name a suit other than diamonds)

[pollid:641]

Replies from: RichardKennaway, Lumifer, JGWeissman, Sherincall, Dagon

## ↑ comment by RichardKennaway · 2014-03-04T09:55:43.882Z · LW(p) · GW(p)

I assume that my friend and I have common knowledge of the rules of the game, and that we have a common interest in seeing him maximise his winnings or minimise his losses. Here is my analysis (with final conclusions rot13'd).

My first thought on seeing this game is that "truth" and "lie" are not accurate descriptions of the actions "name the actual suit" and "name a different suit". The real rules are that my friend and I both know that we can use the two bits of information available in my response in whatever way we can manage to agree on. To name a different suit is no more a lie than is a conventional bid in the game of bridge. The requirement of not having a pre-arranged strategy, as bridge partners do, complicates things somewhat but does not affect this essential point, that an agreed convention is not a lie.

To simplify the matter, I shall assume that my friend and I are not preternaturally adept at Schelling games, and cannot magically independently pluck a common strategy out of the space of all strategies (otherwise the no collaboration rule is rendered meaningless). I do assume we are logically omniscient, so if there is a unique optimal strategy, we will both discover it and have common knowledge that the other has also discovered it.

The space of all my possible strategies consists of my responses to each of three situations: my friend's guess is correct, it is the right suit but the wrong rank, or it is a different suit. Although I have four possible responses in each situation, my response can communicate only one bit to my friend, because all he receives from me is a suit that is either the same as his guess or different. The three suits that are different are not distinguishable in the absence of magical Schelling abilities. So the information I can communicate reduces to saying the same suit as he guessed or saying a different suit.

Given three possible situations and a binary choice to make in each one, I have 8 strategies.

My friend's action is one of three: stick with the original guess, guess another card in the same suit, or guess a card of a different suit. (When I name a suit different to my friend's guess, the last of these strategies could be split into two: guess a card in the suit I mentioned, or guess one in a suit not equal to either his first guess or my response. But this makes no difference to the payoffs.) He must make his choice one way if I name the same suit as he guessed, and one way if I name a different suit, so he has 9 strategies.

Of the 8*9 = 72 joint strategies, is there a single one which maximises his winnings? If so, that is common knowledge to us both and that is the strategy to use.

But before brute-forcing this with the computer, there is a symmetry to notice. If in my strategy I swap the actions "say the same suit" and "say a different suit", and my friend also swaps his responses to those two actions, the payoff remains the same. Choosing between these must require a Schelling-type decision, and the only relevant information that could be used to prize one of them over the other is the everyday ideas of truth and lies, according to which truth is better than lies. Therefore, other things being equal, we might decide to favour that strategy with the greatest probability of telling the "truth". If that still does not decide the strategy uniquely, a further consideration could be that trust in a friend is also good, therefore we should favour a strategy which most often results in the same action by the friend as taking my statement as actually truthful would.

The results of my computer-aided calculation: gurer ner sbhe fgengrtvrf juvpu nyy cebqhpr gur fnzr rkcrpgrq tnva bs 7/52. Gurfr ner:

1 naq 2. V gryy gur "gehgu" vs zl sevraq unf thrffrq gur evtug pneq be gur jebat fhvg, naq "yvr" vs ur unf thrffrq gur jebat pneq bs gur evtug fhvg.

Zl sevraq fubhyq fgvpx gb uvf thrff vs gur fhvg V naabhapr vf gur fnzr nf uvf thrff, bgurejvfr ur fubhyq fjvgpu gb nal bgure pneq. (Guvf vf gjb qvssrerag fgengrtvrf jvgu gur fnzr rssrpg, nf ur unf gur pubvpr bs fjvgpuvat gb nabgure pneq bs gur fnzr fhvg, be n pneq bs nabgure fhvg.)

3 naq 4. Gur fnzr nf 1 naq 2, jvgu "gehgu" naq "yvr" fjvgpurq va zl fgengrtl, naq "fhvg == thrff" naq "fhvg != thrff" fjvgpurq va zl sevraq'f.

Fvapr 1 naq 2 unir yrff rkcrpgrq "ylvat" guna 3 naq 4, V "yvr" va gur fpranevb jurer zl sevraq'f thrff vf 7Q naq gur gbc pneq vf 3Q.

ETA: I missed the fact that the friend has the option of doubling if he sticks with the same card, and analysed the game on the basis that he must always double his bet if he stays with the same card. But I expect the choice of strategy that results will be the same. ETA2: it is.
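
A brute-force search along these lines can be sketched in a few lines of Python. This is a hypothetical reconstruction, not RichardKennaway's actual code; following his ETA, it assumes the friend always doubles when sticking with his original card, and it encodes the three situations and conditional win probabilities exactly as described above.

```python
from itertools import product
from fractions import Fraction

# The three situations you (the card-seer) can face, with prior probabilities:
#   'A' = the friend's named card IS the top card
#   'B' = right suit, wrong rank
#   'C' = wrong suit
P = {'A': Fraction(1, 52), 'B': Fraction(12, 52), 'C': Fraction(39, 52)}

# Friend's possible actions and their win probability conditional on each
# situation. 'stick' doubles down on the originally named card;
# 'same_suit' bets on a specific other card of the named suit;
# 'other_suit' bets on a specific card of a different suit.
WIN = {
    'stick':      {'A': 1, 'B': 0,               'C': 0},
    'same_suit':  {'A': 0, 'B': Fraction(1, 12), 'C': 0},
    'other_suit': {'A': 0, 'B': 0,               'C': Fraction(1, 39)},
}

def payoff(action, won):
    # Sticking doubles the stake: $2 in for $40 back; otherwise $1 for $20.
    stake, prize = (2, 40) if action == 'stick' else (1, 20)
    return prize - stake if won else -stake

def expected_gain(dealer, friend):
    # dealer: situation -> signal ('S' = announce the named suit, 'D' = another)
    # friend: signal -> action
    total = Fraction(0)
    for s, p in P.items():
        a = friend[dealer[s]]
        w = WIN[a][s]
        total += p * (w * payoff(a, True) + (1 - w) * payoff(a, False))
    return total

dealers = [dict(zip('ABC', sig)) for sig in product('SD', repeat=3)]  # 8
friends = [dict(zip('SD', act))
           for act in product(['stick', 'same_suit', 'other_suit'], repeat=2)]  # 9

best = max(expected_gain(d, f) for d in dealers for f in friends)
print(best)  # 7/52
```

Enumerating all 8 × 9 = 72 joint strategies this way confirms the maximum expected gain of 7/52 quoted in the rot13'd results.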

## ↑ comment by Lumifer · 2014-03-03T20:00:52.181Z · LW(p) · GW(p)

Is it a one-time or a multiple-round game?

Is the bet between **you** and your friend, or is there a third party? In other words, what are **your** incentives here?

## ↑ comment by Scott Garrabrant · 2014-03-03T20:05:07.626Z · LW(p) · GW(p)

It is a one-round game, and the bet is with a third party whom you do not know.

## ↑ comment by JGWeissman · 2014-03-03T19:51:03.467Z · LW(p) · GW(p)

By saying "clubs", I communicate the message that my friend would be better off betting $1 on a random club than $2 on the seven of diamonds (or betting $1 on a random heart or spade), which is true, so I don't really consider that lying.

If, less conveniently, my friend takes what I say to literally mean the suit of the top card, but I still can get them to not bet $2 on the wrong card, then I bite the bullet and lie.

Replies from: Scott Garrabrant

## ↑ comment by Scott Garrabrant · 2014-03-03T20:08:23.629Z · LW(p) · GW(p)

I expect most people here would bite that bullet, but I am not sure everyone will. "Never Lie" seems like a rather convenient Schelling Fence.

## ↑ comment by Sherincall · 2014-03-03T23:47:17.271Z · LW(p) · GW(p)

Can the friend and myself come up with a protocol beforehand?

Replies from: Scott Garrabrant

## ↑ comment by Scott Garrabrant · 2014-03-04T00:51:19.019Z · LW(p) · GW(p)

No

## ↑ comment by Dagon · 2014-03-03T21:06:41.736Z · LW(p) · GW(p)

Is it important that it's my friend (that is, is my knowledge or motivation different here than if I were playing against a faceless stranger backed by a corporation)?

Assuming not, I think I randomize, so I don't know whether I lie or not in any given case. Or perhaps always say clubs. I can't understand why you describe rule 3 that way.

Replies from: Scott Garrabrant

## ↑ comment by Scott Garrabrant · 2014-03-03T21:12:43.890Z · LW(p) · GW(p)

Sorry, I think I was unclear. The bet is between your friend and a stranger. You do not bet.

Replies from: Dagon

## ↑ comment by Dagon · 2014-03-03T21:35:20.732Z · LW(p) · GW(p)

Wow, that's completely opposite of what I expected. And I'm still confused, but now I'm confused why I'd ever lie (before I was confused why I'd ever tell the truth).

Replies from: Nornagest

## ↑ comment by Nornagest · 2014-03-04T00:40:07.125Z · LW(p) · GW(p)

Well, the naive approach is to always tell the truth. But there's an interesting asymmetry there: from your friend's perspective, if the suit you announce matches their guess, their best move is to double down on their initial guess (knowing themselves to have a positive EV within that suit).

From *yours*, it doesn't look like that: if they guess the correct suit but the wrong value, you know their best move is to double down, but you also know whether or not that doubling will succeed. Failing a double-down loses them twice as much as failing on an ordinary bet, so from the perspective of minimizing their losses, you're given an incentive to lie about the suit (which would lead them to guess something you know to be wrong) if their initial guess had the suit but not the value right. You should still tell the truth if they got both right.

Of course, this only works if you don't reveal the cards to your friend (the win rates shouldn't reveal that you're ever lying) or if your friend is bright enough to follow the reasoning. If you do and they're not, revealing your choices might screw up your coordination enough to wreck the strategy, meaning that your best move goes back to the naive version.

And all bets are off if you have deontological reasons not to lie, even in a case like this.
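
Nornagest's asymmetry can be made concrete with a little conditional arithmetic. A sketch, using the dollar payoffs from the rules above; it assumes the friend always doubles when sticking and takes the announced suit literally:

```python
# Case 1: the friend named 7D and the top card is a diamond of another
# rank. Truth ("diamonds") makes him double down on 7D, which you know
# is wrong: a certain -$2. A lie steers him to a $1 bet you also know
# is wrong: a certain -$1.
ev_truth = -2
ev_lie = -1

# Case 2: the friend's named card IS the top card. Truth lets him
# double a sure win (net +$38); a lie steers him off the winning
# card, costing his $1 stake.
ev_truth_correct = 38
ev_lie_correct = -1

print(ev_lie - ev_truth)                  # lying saves $1 in case 1
print(ev_truth_correct - ev_lie_correct)  # truth gains $39 in case 2
```

The $1 saved by lying in case 1 is twelve times as likely as case 2, which is why lying on "right suit, wrong rank" pays despite the large case-2 stakes.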

## comment by Gunnar_Zarncke · 2014-03-01T23:57:15.162Z · LW(p) · GW(p)

Rate how typical the following topics are on/for LessWrong (0 means totally atypical, 1.0 means totally on track):

methods for being less wrong, knowing about biases, fallacies and heuristics [pollid:618]

advancing specific virtues: altruism, mindfulness, empathy, truthfulness, openness [pollid:619]

methods of self-improvement (if scientifically backed), e.g. living luminously, winning at life, longevity, advice in the repositories http://lesswrong.com/lw/gx5/boring_advice_repository/ [pollid:620]

- as a specific sub-field thereof: dealing with procrastination and akrasia [pollid:621]

statistics, probability theory, decision theory and related mathematical fields [pollid:622]

(moral) philosophical theories (tried to make this sharp somehow but failed) [pollid:623]

rationality applied to social situations in relationships, parenting and small groups [pollid:624]

platform to hang out with like-minded (and often high-IQ) people [pollid:625]

artificial intelligence topics esp. if related to AGI, (U)FAI, AI going FOOM (or not) [pollid:626]

the singularity and transhumanism (includes cryonics as method to get there) [pollid:627]

organization and discussion of meetups [pollid:628]

presentation and discussion of topics of associated or related organizations CFAR, MIRI, GiveWell, CEA [pollid:629]

I chose this poll because I want to use it to validate a presentation I am preparing for a meetup about what constitutes a typical LessWrong topic (with examples). If this works out, it might provide a helpful primer for LW newbies (e.g. at a meetup).

Replies from: Gunnar_Zarncke, savageorange, ChristianKl

## ↑ comment by Gunnar_Zarncke · 2014-03-09T19:51:56.224Z · LW(p) · GW(p)

I derived the following list of LessWrong topics and presented it at our LW meetup.

In order of decreasing typicality (most typical for LW first):

- methods for being less wrong, knowing about biases, fallacies and heuristics
- methods of self-improvement (if scientifically backed), e.g. living luminously, winning at life, longevity
- organization and discussion of meetups
- dealing with procrastination and akrasia
- statistics, probability theory, decision theory and related mathematical fields
- topics of associated or related organizations CFAR, MIRI, GiveWell, CEA
- advancing specific virtues: altruism, mindfulness, empathy, truthfulness, openness
- artificial intelligence topics esp. if related to AGI, (U)FAI, AI going FOOM (or not)
- the singularity and transhumanism (includes cryonics as a method to get there) - this had the largest variance
- rationality applied to social situations in relationships, parenting and small groups - this also had a large variance
- (moral) philosophical theories, ethics
- platform to hang out with like-minded smart people

## ↑ comment by savageorange · 2014-03-02T00:17:50.696Z · LW(p) · GW(p)

I'd be a lot more inclined to respond to this if I didn't need to calculate probability values (i.e. if I could input weights instead, which were then normalized).

To that end, here is a simple Python script which normalizes a list of weights (given as command-line arguments) into a list of probabilities:

```
#!/usr/bin/python
import sys

# Read the weights from the command-line arguments.
weights = [float(v) for v in sys.argv[1:]]

# Normalize the weights so they sum to 1.
total_w = sum(weights)
probs = [v / total_w for v in weights]

print('Probabilities : %s' % ", ".join(str(v) for v in probs))
```

Produces output like this (e.g. for the weights `1 2 3 4`):

```
Probabilities : 0.1, 0.2, 0.3, 0.4
```

Replies from: Dagon

## ↑ comment by ChristianKl · 2014-03-03T12:24:17.271Z · LW(p) · GW(p)

In what way does being typical imply a probability?

Replies from: Gunnar_Zarncke

## ↑ comment by Gunnar_Zarncke · 2014-03-03T20:41:20.686Z · LW(p) · GW(p)

It doesn't and I didn't claim it did.

I just abused the 0..1 interval for typicality because the poll provides range checking in this case.

## comment by Gunnar_Zarncke · 2014-03-01T23:54:42.234Z · LW(p) · GW(p)

Discussion of this thread goes here; all other top-level comments should be polls or similar

Replies from: Scott Garrabrant

## ↑ comment by Scott Garrabrant · 2014-03-03T18:56:28.785Z · LW(p) · GW(p)

I think this should be a regular monthly thread.