Comments

Comment by NatKozak on Postmortem to Petrov Day, 2020 · 2020-10-06T00:10:33.500Z

One reason I'm dissatisfied with the LW Petrov Day Button is that it conflates Petrov's role (where unilateral action against an imperfect command system is good and correct) with the role of nuclear actors at the country-sized level of abstraction (where unilateral action by any actor is bad).

I think the phishing that occurred this year produced a good approximation of Petrov’s role (especially since it constituted a command-and-control failure on the part of the LW team!). I was therefore interested in sketching a preliminary design for a scenario analogous to taking on the role of a nuclear-armed actor:

  1. Make it a non-trivial action for an individual to "make" a button -- in particular, have them pay LW karma for it
  2. Have the consequences accrue against some specific other user (potentially a very large decrease in karma or a tempban)
  3. Allow people to announce first-strike and second-strike plans in advance
  4. Let people "buy" a smaller likelihood that their own bombs go off accidentally (less karma spent = more likely; more karma = less likely)
  5. Let people "buy" more and bigger bombs
  6. Give people a way to coordinate on checking other players’ “safety” and “proliferation” levels
  7. Only cause problems "for everyone" if enough bombs go off, with the threshold unknown to most players (to take into account uncertainty over how bad nuclear winter would be)
  8. Allow people to use, or add safety measures to, “bombs” they “purchased” during last year’s event
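
To make the mechanics concrete, here is a minimal sketch of how points 1, 2, 4, 5, and 7 might compose. Everything concrete in it (the class names, the 50-karma bomb price, the 5% baseline accident rate, the halving-per-25-karma safety curve, the hidden threshold) is a hypothetical assumption for illustration, not anything the LW team has proposed:

```python
import random
from dataclasses import dataclass


@dataclass
class Player:
    name: str
    karma: int
    bombs: int = 0
    safety_spend: int = 0  # karma sunk into safety measures (point 4)

    def buy_bomb(self, cost: int = 50) -> None:
        # Points 1/5: acquiring a bomb is a non-trivial karma payment.
        if self.karma < cost:
            raise ValueError(f"{self.name} cannot afford a bomb")
        self.karma -= cost
        self.bombs += 1

    def buy_safety(self, amount: int) -> None:
        # Point 4: more karma spent on safety = lower accident risk.
        if self.karma < amount:
            raise ValueError(f"{self.name} cannot afford that much safety")
        self.karma -= amount
        self.safety_spend += amount

    def per_bomb_accident_prob(self) -> float:
        # Illustrative curve: 5% baseline per bomb, halved for every
        # 25 karma spent on safety.
        return 0.05 * 0.5 ** (self.safety_spend / 25)

    def strike(self, target: "Player", damage: int = 200) -> None:
        # Point 2: consequences accrue against a specific other user.
        if self.bombs < 1:
            raise ValueError(f"{self.name} has no bombs")
        self.bombs -= 1
        target.karma -= damage


def accidental_detonations(players: list[Player]) -> int:
    # Roll each stockpiled bomb independently against its owner's
    # accident probability.
    return sum(
        1
        for p in players
        for _ in range(p.bombs)
        if random.random() < p.per_bomb_accident_prob()
    )


def nuclear_winter(players: list[Player], hidden_threshold: int) -> bool:
    # Point 7: global consequences only if enough bombs go off; the
    # real threshold would be kept secret from most players.
    return accidental_detonations(players) >= hidden_threshold


if __name__ == "__main__":
    random.seed(2020)
    players = [Player(f"user{i}", karma=500) for i in range(10)]
    for p in players:
        p.buy_bomb()
        p.buy_safety(50)
    print("nuclear winter:", nuclear_winter(players, hidden_threshold=3))
```

Points 3, 6, and 8 are social and persistence features (announcements, mutual inspection, carry-over between years) rather than core mechanics, so they would live in the surrounding site rather than in a loop like this.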

There are pros and cons to this approach relative to the current one. Some I thought of:

Pro: More closely resembles reality, meaning that the people who are treating the event as an experiment may be able to draw better conclusions.

Pro: Higher stakes. I expect people to care more about their own LW karma going down, or about personally losing access to the site, than about the front page going down for a day, which only trivially decreases usability.

Pro: Better incentive alignment. If participation in the game entails paying karma, and you care about your karma, the best move for a group to coordinate on would be not to play!

Pro: Resistance to trolls — if the karma threshold to “purchase” a bomb is sufficiently high, and if accounts created within some recent window are excluded from playing, the only people able to play are established community members.

Pro: People with more karma have more “ability” to participate, as in the current system. This may be a good check of whether karma actually maps to the cooperation traits that Ben/Ray/Oliver seem to care about.

Con: More complicated. If you’re interested in using the event to communicate a simple lesson about unilateral action being bad, this experiment will probably not serve that goal as well.

Con: No ability to opt out. To better approximate the real problems with nuclear proliferation, any account would have to be able to get “nuked.” The current system arguably also has this problem, but because the stakes are lower, people are much less likely to get upset or have their experience of the website ruined.

Con: If point 8 is included, the experiment is less contained, which may make it harder to draw conclusions from or to replicate.

Comment by NatKozak on Shallow Review of Consistency in Statement Evaluation · 2019-09-13T06:05:30.083Z

Some relevant quotes from Scott’s review of Superforecasting, with my notes under each:

  1. “First of all, is it just luck? After all, if a thousand chimps throw darts at a list of stocks, one of them will hit the next Google, after which we can declare it a “superchimp”. Is that what’s going on here? No. Superforecasters one year tended to remain superforecasters the next. The year-to-year correlation in who was most accurate was 0.65; about 70% of superforecasters in the first year remained superforecasters in the second. This is definitely a real thing.”
    1. Could imply that accuracy (in prediction-making) correlates with consistency. Would need to look into whether there’s a relationship between the consistency of someone’s status as a superforecaster and the consistency of their answers to a particular question, or group of questions.
    2. Could also imply that our best method for verifying accuracy is by ascertaining consistency.
    3. Further steps: look into the Good Judgment Project directly, try out some metrics that point at “how close is this question to another related question”, and see how such a metric varies with accuracy.
  2. “One result is that while poor forecasters tend to give their answers in broad strokes – maybe a 75% chance, or 90%, or so on – superforecasters are more fine-grained. They may say something like “82% chance” – and it’s not just pretentious, Tetlock found that when you rounded them off to the nearest 5 (or 10, or whatever) their accuracy actually decreased significantly. That 2% is actually doing good work.”
    1. This seems to be a different kind of “precision” than the “consistency” you’re looking for. Maybe it’s worth separating refinement-type precision from reliability-type precision. (The rounding claim itself is also easy to sanity-check on any forecast dataset; see the sketch below.)
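
As a toy illustration of Tetlock’s rounding test, the sketch below generates synthetic forecasts (the data, the noise level, and the rounding steps are all made-up assumptions here, not GJP data) and compares Brier scores before and after rounding to the nearest 5 or 10 points:

```python
import random

def brier(forecasts, outcomes):
    # Mean squared error between probabilistic forecasts and 0/1 outcomes;
    # lower is better.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def rounded(forecasts, step):
    # Round each forecast to the nearest `step` (0.05 = "nearest 5").
    return [round(f / step) * step for f in forecasts]

random.seed(0)
n = 100_000
# Synthetic "superforecaster": fine-grained estimates near the true probability.
truth = [random.random() for _ in range(n)]
forecasts = [min(max(t + random.gauss(0, 0.02), 0.0), 1.0) for t in truth]
outcomes = [1 if random.random() < t else 0 for t in truth]

print(f"raw        : {brier(forecasts, outcomes):.5f}")
print(f"nearest 5  : {brier(rounded(forecasts, 0.05), outcomes):.5f}")
print(f"nearest 10 : {brier(rounded(forecasts, 0.10), outcomes):.5f}")
```

If the fine-grained digits carry real information, the rounded scores come out worse; that is what Scott reports Tetlock found for superforecasters on the real data.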

Scott notably reports that IQ, how well-informed someone is, and math ability correlate only somewhat with forecasting ability, and that these traits don’t do as good a job of distinguishing superforecasters as one might expect.

On the other hand, AI Impacts did a review of data from the Good Judgment Project -- the project behind Tetlock’s conclusions -- which suggests that some of these traits might actually be important, particularly intelligence. Might be worth looking into the GJP data specifically with this question in mind.