Posts

Strategyproof Mechanisms: Possibilities 2014-06-02T02:26:29.399Z
Strategyproof Mechanisms: Impossibilities 2014-05-16T00:52:30.626Z
Incentive compatibility and the Revelation Principle 2014-05-03T13:38:48.134Z
Mechanism Design: Constructing Algorithms for Strategic Agents 2014-04-30T18:37:20.875Z
[Sequence announcement] Introduction to Mechanism Design 2014-04-30T16:21:31.652Z
What should superrational players do in asymmetric games? 2014-01-24T07:42:05.108Z
[Link] On the Height of a Field 2013-01-02T11:20:28.056Z
Rational Toothpaste: A Case Study 2012-05-31T00:31:57.865Z
[SEQ RERUN] Third Alternatives for Afterlife-ism 2011-06-09T15:31:07.596Z
[SEQ RERUN] The Third Alternative 2011-06-08T14:19:42.504Z
[SEQ RERUN] Beware the Unsurprised 2011-06-07T12:35:44.560Z
[SEQ RERUN] Think Like Reality 2011-06-06T13:28:26.484Z
[SEQ RERUN] Universal Law 2011-06-05T13:23:30.136Z
[SEQ RERUN] Universal Fire 2011-06-04T15:10:46.106Z
[SEQ RERUN] Feeling Rational 2011-06-03T13:28:49.867Z
[SEQ RERUN] Consolidated Nature of Morality Thread 2011-06-02T11:39:24.484Z
[SEQ RERUN] Your Rationality is My Business 2011-06-01T14:04:57.022Z
[SEQ RERUN] New Improved Lottery 2011-05-31T13:09:13.236Z
[SEQ RERUN] Lotteries: A Waste of Hope 2011-05-30T15:13:58.848Z
[SEQ RERUN] Marginally Zero-Sum Efforts 2011-05-28T16:07:39.641Z
Epistemology and the Psychology of Human Judgment 2011-05-28T05:15:50.017Z
[SEQ RERUN] Futuristic Predictions as Consumable Goods 2011-05-27T15:57:47.356Z
[SEQ RERUN] Inductive Bias 2011-05-26T13:20:46.506Z
Dominus' Razor 2011-05-26T01:05:49.558Z
[SEQ RERUN] Debiasing as Non-Self-Destruction 2011-05-25T14:14:03.878Z
Free Stats Textbook: Principles of Uncertainty 2011-05-24T19:45:16.079Z
[SEQ RERUN] Knowing About Biases Can Hurt People 2011-05-24T12:54:31.986Z
[REVIEW] Foundations of Neuroeconomic Analysis 2011-05-24T02:25:04.801Z
[SEQ RERUN] The Majority Is Always Wrong 2011-05-23T13:53:38.400Z
[SEQ RERUN] The Error of Crowds 2011-05-22T14:21:20.490Z
[SEQ RERUN] Useful Statistical Biases 2011-05-21T14:14:10.756Z
[SEQ RERUN] Statistical Bias 2011-05-20T14:19:23.356Z
[SEQ RERUN] Tsuyoku vs. the Egalitarian Instinct 2011-05-19T15:28:40.682Z
[SEQ RERUN] Tsuyoku Naritai! (I Want To Become Stronger) 2011-05-18T16:11:49.264Z
[SEQ RERUN] Self-deception: Hypocrisy or Akrasia? 2011-05-17T15:22:24.522Z
[SEQ RERUN] Chronophone Motivations 2011-05-16T16:29:25.640Z
[SEQ RERUN] Archimedes's Chronophone 2011-05-15T13:04:39.613Z
[SEQ RERUN] Useless Medical Disclaimers 2011-05-14T13:51:11.091Z
What's in a name? That which we call a rationalist… 2009-04-24T23:53:51.129Z
Weekly Wiki Workshop and suggested articles 2009-04-19T01:13:23.164Z
Extreme Rationality: It Could Be Great 2009-04-09T22:00:13.538Z
Rationalist Wiki 2009-04-06T00:19:25.192Z
Hygienic Anecdotes 2009-03-29T05:46:07.838Z
Contests vs. Real World Problems 2009-03-25T01:29:02.264Z
Rationality and Positive Psychology 2009-03-05T15:31:16.803Z

Comments

Comment by badger on Crazy Ideas Thread, Aug. 2015 · 2015-08-12T15:16:47.907Z · LW · GW

If there is a net positive externality, then even large private benefits aren't enough to guarantee efficient provision. That's the whole point of the externality concept.

Comment by badger on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-11T19:13:45.108Z · LW · GW

If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.

Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.

Comment by badger on Open thread, Mar. 9 - Mar. 15, 2015 · 2015-03-10T17:44:26.242Z · LW · GW

A pdf copy of Swarmwise from the author's website.

Comment by badger on Open thread, Mar. 9 - Mar. 15, 2015 · 2015-03-10T13:51:47.602Z · LW · GW

From the Even Odds thread:

Assume there are n people. Let S_i be person i's score for the event that occurs according to your favorite proper scoring rule. Then let the total payment to person i be

T_i = S_i - (1/(n-1)) Σ_{j≠i} S_j

(i.e. the person's score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.

This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same profit before the question is resolved.
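
As a runnable sketch, here's the scheme with the quadratic scoring rule; the rule and the example probabilities are illustrative choices of mine, not from the original thread:

```python
# Payment scheme above with the quadratic (Brier-style) scoring rule.

def quadratic_score(p, outcome):
    """Proper score for reporting probability p of a binary event (outcome 0 or 1)."""
    return 1.0 - (outcome - p) ** 2

def payments(probs, outcome):
    """T_i = S_i minus the average score of everyone else; always sums to zero."""
    scores = [quadratic_score(p, outcome) for p in probs]
    n = len(scores)
    total = sum(scores)
    return [s - (total - s) / (n - 1) for s in scores]

print(payments([0.9, 0.5, 0.2], outcome=1))  # ≈ [0.435, 0.075, -0.51]
```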

Comment by badger on Open thread, Feb. 16 - Feb. 22, 2015 · 2015-02-18T19:09:49.764Z · LW · GW

Not aware of any tourneys with this tweak, but I use a similar example when I teach.

If the payoff from exiting is zero and the mutual defection payoff is negative, then the game doesn't change much. Exit on the first round becomes the unique subgame-perfect equilibrium of any finite repetition, and with a random end date, trigger strategies to support cooperation work similarly to the original game.

Life is more interesting if the mutual defection payoff is sufficiently better than exit. Cooperation can happen in equilibrium even when the end date is known (except on the last round), since exit is a viable threat to punish defection.

Comment by badger on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-15T18:40:18.089Z · LW · GW

From an economics perspective, the stapler dissertation is real. The majority of the time, the three papers haven't been published.

It's also possible to publish empirical work produced in a few months. The issue is where that article is likely to be published. There's a clear hierarchy of journals, and a low ranked publication could hurt more than it helps. Dissertation committees have very different standards depending on the student's ambition to go into academia. If the committee has to write letters of rec to other professors, it takes a lot more work to be sufficiently novel and interesting. If someone goes into industry, almost any three papers will suffice.

I've seen people leave because they couldn't pass coursework or because they felt burnt out, but the degree almost always comes conditional on writing something and having well-calibrated ambitions.

Comment by badger on Stupid Questions December 2014 · 2014-12-10T21:09:08.326Z · LW · GW

Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "equal incomes".

What benefits do you think a different system might provide, or what problems does monetary exchange have that you're trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.

Comment by badger on Stupid Questions December 2014 · 2014-12-10T19:35:24.840Z · LW · GW

My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e. the down-on-his-luck guy unexpectedly gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
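
In miniature, the agents would effectively be running a reverse auction: lowest offer at or below my bid wins. A toy sketch (all names and numbers are my own illustration, not from the story):

```python
# Toy reverse auction: my agent broadcasts a maximum bid; the lowest
# asking price at or below it wins and is paid that price.

def procure(bid_max, offers):
    """offers maps candidate supplier -> asking price."""
    winner = min(offers, key=offers.get)
    price = offers[winner]
    return (winner, price) if price <= bid_max else None

offers = {"neighbor_at_soup_shop": 4.0, "delivery_courier": 9.0}
print(procure(12.0, offers))  # ('neighbor_at_soup_shop', 4.0)
```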

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

Comment by badger on Incentive compatibility and the Revelation Principle · 2014-11-23T14:10:01.987Z · LW · GW

I'm on board with "absurdly powerful". It underlies the bulk of mechanism design, to the point my advisor complains we've confused it with the entirety of mechanism design.

The principle gives us the entire set of possible outcomes for some solution concept like dominant-strategy equilibrium or Bayes-Nash equilibrium. It works for any search over the set of outcomes, whether that leads to an impossibility result or a constructive result like identifying the revenue-optimal auction.

Given an arbitrary mechanism, it's easy (in principle) to find the associated IC direct mechanism(s). The mechanism defines a game, so we solve the game and find the equilibrium outcomes for each type profile. Once we've found that, the IC direct mechanism just assigns the equilibrium outcome directly. For instance, if everyone's equilibrium strategy in a pay-your-bid/first-price auction was to bid 90% of their value, the direct mechanism assigns the item to the person with the highest value and charges them 90% of their value. Since a game can have multiple equilibria, we have one IC mechanism per outcome. The revelation principle can't answer questions like "Is there a mechanism where every equilibrium (as opposed to some equilibrium) gives a particular outcome?"
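
Here's a minimal sketch of that construction, sticking with the hypothetical 90%-of-value equilibrium from the example:

```python
# Direct mechanism derived from a first-price auction whose (hypothetical)
# equilibrium is "bid 90% of your value": allocate to the highest report
# and charge the equilibrium bid at that report.

def direct_mechanism(reported_values, bid_fraction=0.9):
    winner = max(range(len(reported_values)), key=lambda i: reported_values[i])
    payment = bid_fraction * reported_values[winner]
    return winner, payment

# Honest reports reproduce the original auction's equilibrium outcome.
print(direct_mechanism([10.0, 25.0, 18.0]))  # (1, 22.5)
```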

Comment by badger on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-17T21:29:29.284Z · LW · GW

The paper cited is handwavy and conversational because it isn't making original claims. It's providing a survey for non-specialists. The table I mentioned is a summary of six other papers.

Some of the studies assume workers in poorer countries are permanently 1/3rd or 1/5th as productive as native workers, so the estimate is based on something more like this: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy is able to produce $10-15K in value.

Comment by badger on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-17T19:19:41.548Z · LW · GW

For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. All of this is based on Table 2 of Clemens (2011).

Comment by badger on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-28T18:35:56.294Z · LW · GW

The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power / votes / delegates.

On a quick example, the two starting points give noticeably different outcomes. If the total feasible set of utilities is {(x,y) | x^2 + y^2 ≤ 1; x,y ≥ 0}, then the NBS starting from (0.9, 0.1) is about (0.95, 0.28), while the NBS starting from (0,0) with theory 1 having nine delegates (i.e. an exponent of nine in the Nash product) and theory 2 having one delegate is about (0.95, 0.32).

If the credence-weighted mixture were on the Pareto frontier, both approaches are equivalent.

Comment by badger on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-28T16:35:16.221Z · LW · GW

For the NBS with more than two agents, you just maximize the product of everyone's gain in utility over the disagreement point. For Kalai-Smorodinsky, you continue to equate the ratios of gains, i.e. pick the point on the Pareto frontier on the line between the disagreement point and the vector of ideal utilities.

Agents could be given more bargaining power by giving them different exponents in the Nash product.
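
Here's a quick numerical sketch of that weighted version; the quarter-disc feasible set and the SciPy solver are my illustrative choices, not anything from the model:

```python
import math
from scipy.optimize import minimize

# Weighted NBS for n agents: maximize prod_i (u_i - d_i)^w_i over a feasible
# set, here the illustrative quarter disc {u >= 0 : sum(u_i^2) <= 1}.
def weighted_nbs(d, w):
    def neg_log_nash(u):
        return -sum(wi * math.log(max(ui - di, 1e-12))
                    for ui, di, wi in zip(u, d, w))
    res = minimize(
        neg_log_nash,
        x0=[di + 0.05 for di in d],
        bounds=[(di + 1e-9, 1.0) for di in d],
        constraints=[{"type": "ineq", "fun": lambda u: 1.0 - sum(x * x for x in u)}],
    )
    return [round(float(x), 3) for x in res.x]

print(weighted_nbs([0.0, 0.0], [1.0, 1.0]))  # ≈ [0.707, 0.707], the symmetric NBS
print(weighted_nbs([0.0, 0.0], [9.0, 1.0]))  # ≈ [0.949, 0.316], tilted by the weights
```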

Comment by badger on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-28T16:22:47.006Z · LW · GW

Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.

Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory while giving higher credence theories more bargaining power. Or bargain starting from a typical person's life (the outcome can't be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).

Comment by badger on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-28T13:30:35.461Z · LW · GW

I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we're doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.

Two outcomes is too degenerate a case if agents get their own scales, so suppose A, B, and C are options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between

  • choosing C (say if C is 99% as good as the ideal for each agent),
  • a 50/50 lottery over A and B (if C is only 1% better than the worst for each), or
  • some other lottery (for instance, 1 thinks C achieves 90% of B and 2 thinks C achieves 40% of A. Then, a lottery with weight 2/3rds on C and 1/3rd on A gives them each 60% of the gain between their best and worst)
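
Checking the arithmetic in that last case (each theory's utilities normalized to its own 0-1 scale):

```python
# Theory 1's scale: A = 0, B = 1, C = 0.9. Theory 2's scale: B = 0, A = 1, C = 0.4.
u1 = {"A": 0.0, "B": 1.0, "C": 0.9}
u2 = {"A": 1.0, "B": 0.0, "C": 0.4}
lottery = {"C": 2 / 3, "A": 1 / 3}

expected_1 = sum(p * u1[o] for o, p in lottery.items())
expected_2 = sum(p * u2[o] for o, p in lottery.items())
print(expected_1, expected_2)  # 0.6 0.6 -- each gets 60% of its best-to-worst gain
```
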
Comment by badger on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-27T23:12:40.308Z · LW · GW

My reading of the problem is that a satisfactory Parliamentary Model should:

  • Represent moral theories as delegates with preferences over adopted policies.
  • Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting other policies slide.
  • Restrict delegates' use of dirty tricks or deceit.

Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of starting with the Nash bargaining solution as implemented by an alternating offer game?

The two obvious issues are how to translate delegates' preferences into utilities and what the disagreement point is. Assuming a utility function is fairly mild if the delegate has preferences over lotteries. Plus, there's no utility comparison problem even though you need cardinal utilities. The lack of a natural disagreement point is trickier. What intuitions might be lost going this route?

Comment by badger on Strategyproof Mechanisms: Possibilities · 2014-06-11T14:10:32.109Z · LW · GW

It turns out the only Pareto efficient, individually rational (i.e. no one ever gets something worse than their initial job), and strategyproof mechanism is Top Trading Cycles. In order to make Cato better off, we'd have to violate one of those in some way.
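
For reference, here's a minimal sketch of Top Trading Cycles in a job-swap setting like this one; the setup (agent i initially owns job i) and the preference lists are my illustrative assumptions:

```python
# Top Trading Cycles: repeatedly let each remaining agent point at the owner
# of their favorite remaining job, trade along a cycle, and remove the cycle.

def top_trading_cycles(prefs):
    """prefs[i] lists jobs from best to worst for agent i; agent i owns job i."""
    remaining = set(range(len(prefs)))
    assignment = {}
    while remaining:
        # Agent i points at their favorite still-available job (= its owner).
        points_to = {i: next(j for j in prefs[i] if j in remaining)
                     for i in remaining}
        # Walk the pointers from any agent; a finite functional graph must cycle.
        i, seen = next(iter(remaining)), []
        while i not in seen:
            seen.append(i)
            i = points_to[i]
        for agent in seen[seen.index(i):]:  # trade along the cycle
            assignment[agent] = points_to[agent]
            remaining.discard(agent)
    return assignment

# Agents 0 and 1 swap; agent 2 keeps their own job.
print(top_trading_cycles([[1, 0, 2], [0, 1, 2], [2, 0, 1]]))  # {0: 1, 1: 0, 2: 2}
```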

Comment by badger on Open Thread, May 26 - June 1, 2014 · 2014-05-27T21:43:10.657Z · LW · GW

Metafilter has a classic thread on "What book is the best introduction to your field?". There are multiple recommendations there for both law and biology.

Comment by badger on Strategyproof Mechanisms: Impossibilities · 2014-05-17T16:36:25.453Z · LW · GW

Damn.

Comment by badger on Strategyproof Mechanisms: Impossibilities · 2014-05-16T12:46:36.693Z · LW · GW

Since Arrow and GS are equivalent, it's not surprising to see intermediate versions. Thanks for pointing that one out. I still stand by the statement for the common formulation of the theorem. We're hitting the fuzzy lines between what counts as an alternate formulation of the same theorem, a corollary, or a distinct theorem.

Comment by badger on Strategyproof Mechanisms: Impossibilities · 2014-05-16T12:15:01.109Z · LW · GW

Thanks. Fixed.

Comment by badger on Strategyproof Mechanisms: Impossibilities · 2014-05-16T04:09:24.137Z · LW · GW

Arrow's theorem doesn't apply to rating systems like approval or range voting. However, Gibbard-Satterthwaite still holds. If anything, it holds more strongly, since agents have more ways to lie. Now you have to worry about someone saying their favorite is ten times better than their second favorite rather than just three times better, in addition to lying about the order.

Comment by badger on Open Thread, May 5 - 11, 2014 · 2014-05-06T16:46:37.225Z · LW · GW

See pg. 391-392 of "The Role of Deliberate Practice in the Acquisition of Expert Performance", the paper that kicked off the field. A better summary is that 2-4 hours is the maximum sustainable amount of deliberate practice in a day.

Comment by badger on [Sequence announcement] Introduction to Mechanism Design · 2014-05-05T23:33:52.815Z · LW · GW

I'm a PhD student working in this field and have TA'd multiple years for a graduate course covering this material.

Comment by badger on Incentive compatibility and the Revelation Principle · 2014-05-03T20:46:03.916Z · LW · GW

Typo fixed now. Jill's payment should be p_Jill = 300 - p_Jack.

The second-best direct mechanisms do bite the bullet and assume agents would optimally manipulate themselves if the mechanism didn't do it for them. The "bid and split excess" mechanism I mention at the very end could be better if people are occasionally honest.

I'm now curious what's possible if agents have some known probability of ignoring incentives and being unconditionally helpful. It'd be fairly easy to calculate the potential welfare gain by adding a flag to the agent's type saying whether they are helpful or strategic and yet again applying the revelation principle. The trickier part would be finding a useful indirect mechanism to match that, since it'd be painfully obvious that you'd get a smaller payoff for saying you're helpful under the direct mechanism.

Comment by badger on Incentive compatibility and the Revelation Principle · 2014-05-03T16:22:35.323Z · LW · GW

I added some explanation right after the diagram to clarify. The idea is that if I can design a game where players have dominant strategies, then I can also design a game where they have a dominant strategy to honestly reveal their types to me and proceed on that basis.

Comment by badger on Mechanism Design: Constructing Algorithms for Strategic Agents · 2014-05-01T14:35:43.913Z · LW · GW

That's an indexed Cartesian product, analogous to sigma notation for indexed summation, so Θ = Θ_1 × ... × Θ_n is the set of all vectors of agent types.

Comment by badger on Mechanism Design: Constructing Algorithms for Strategic Agents · 2014-05-01T13:25:08.822Z · LW · GW

Thanks for catching that!

I did introduce a lot here. Now that I've thrown all the pieces of the model out on the table, I'll include refreshers as I go along so it can actually sink in.

Comment by badger on [Sequence announcement] Introduction to Mechanism Design · 2014-05-01T13:17:15.750Z · LW · GW

Aside from academic economists and computer scientists? :D Auction design has been a big success story, enough so that microeconomic theorists like Hal Varian and Preston McAfee now work at Google full time. Microsoft and other tech companies also have research staff working specifically on mechanism design.

As far as people that should have some awareness (whether they do or not): anyone implementing an online reputation system, anyone allocating resources (like a university allocating courses to students or the US Army allocating ROTC graduates to its branches), or anyone designing government regulation.

Comment by badger on [Sequence announcement] Introduction to Mechanism Design · 2014-04-30T21:58:24.559Z · LW · GW

Some exposure to game theory. Otherwise, tolerance of formulas and a little bit of calculus for optimization.

At least, I hope that's the case. I've been teaching this to economics grad students for the past few years, so I know common points of misunderstanding, but can easily take some jargon for granted. Please call me out on anything that is unclear.

Comment by badger on Open Thread April 8 - April 14 2014 · 2014-04-08T21:03:45.761Z · LW · GW

Alright, that makes more sense. Random music can randomize emotional state, just like random drugs can randomize physical state. Personally, I listen to a single artist at a time.

Comment by badger on Open Thread April 8 - April 14 2014 · 2014-04-08T20:33:15.901Z · LW · GW

Music randomizes emotion and mindstate.

Wait, where did "randomizes" come from? The study you link and the standard view say that music can induce specific emotions. The point of the study is that emotions induced by music can carry over into other areas, which suggests we might optimize when we use specific types of music. The study you link about music and accidents also suggests specific music decreased risks.

All the papers I'm immediately seeing on Google Scholar suggest there is no association between background music and studying effectiveness, or if there is, it's only negative for those that don't usually study to music. If that's accurate, either people are already fairly aware of whether music distracts them, they would adapt to it given time, or they don't know what kinds of music are effective for them due to lack of experience.

Comment by badger on Open Thread April 8 - April 14 2014 · 2014-04-08T18:31:05.944Z · LW · GW

Hmm... Atlas Shrugged does have (ostensible) paragons. Rand's idea of Romanticism as portraying "the world as it should be" seems to match up: "What Romantic art offers is not moral rules, not an explicit didactic message, but the image of a moral person—i.e., the concretized abstraction of a moral ideal." (source) Rand's antagonists do tend to be all flaws and no virtues though.

Comment by badger on Open Thread April 8 - April 14 2014 · 2014-04-08T17:59:08.574Z · LW · GW

One more hypothesis after reading other comments:

HPMoR is a new genre where every major character either has no character flaws or is capable of rapid growth. In other words, the diametric opposite of Hamlet, Anna Karenina, or The Corrections. Rather than "rationalist fiction", a better term would be "paragon fiction". Characters have rich and conflicting motives, so life isn't a walk in the park despite their strengths. Still, everyone acts completely unrealistically relative to life-as-we-know-it by never doing something dumb or against their interests. Virtues aren't merely labels and obstacles don't automatically dissolve, so readers could learn to emulate these paragons through observation.

This actually does seem at odds with the western canon, and off-hand I can't think of anything else that might be described in this way. Perhaps something like Hikaru no Go? Though I haven't read them, maybe Walter Jon Williams' Aristoi or Iain Banks' Culture series?

Comment by badger on Open Thread April 8 - April 14 2014 · 2014-04-08T15:55:04.616Z · LW · GW

I'm also somewhat confused by this. I love HPMoR and actively recommend it to friends, but to the extent Eliezer's April Fools' confession can be taken literally, characterizing it as "you-don't-have-a-word genre" and coming from "an entirely different literary tradition" seems a stretch.

Some hypotheses:

  1. Baseline expectations for Harry Potter fanfic are so low that when it turns out well, it seems much more stunning than it does relative to a broader reference class of fiction.
  2. Didactic fiction is nothing new, but high quality didactic fiction is an incredibly impressive accomplishment.
  3. The scientific content happens to align incredibly well with some readers' interests, making it genre-breaking in the same way The Hunt for Red October was for technical details of submarines. If you are into that specific field, it feels world-shatteringly good. For puns about hydras and ordinals, HPMoR is the only game in town, but that's ultimately a sparse audience.
  4. There is a genuine gap in fiction that is both light-hearted and serious in places which Eliezer managed to fill. Pratchett is funny and can make great satirical points, but doesn't have the same dramatic tension. Works that otherwise get the dramatic stakes right tend to steer clear of being light-hearted and inspirational. HPMoR is genre-breaking for roughly the same reasons Adventure Time gets the same accolades.

Comment by badger on [deleted post] 2014-04-03T17:13:31.384Z

Exactly. No need to put tunnels underground when it makes substantially more sense to build platforms over existing roads. This also means cities can expand or rezone more flexibly since you can just build standard roads like now and then add bridges or full platforms when pedestrians enter the mix. Rain, snow, and deer don't require more than a simple aluminum structure.

Comment by badger on Open Thread March 31 - April 7 2014 · 2014-04-01T23:03:10.920Z · LW · GW

What do you mean by applying Kelly to the LMSR?

Since relying on Kelly is equivalent to maximizing log utility of wealth, I'd initially guess there is some equivalence between a group of risk-neutral agents trading via the LMSR and a group of Kelly agents with equal wealth trading directly. I haven't seen anything around in the literature though.
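
To fix notation, here's a minimal sketch of Hanson's LMSR itself for a binary market; the liquidity parameter b and the trade sizes are arbitrary choices of mine:

```python
import math

# Hanson's LMSR for a binary market: traders pay C(q') - C(q) to move the
# outstanding share vector from q to q'; prices are the softmax of shares.

def lmsr_cost(q_yes, q_no, b=100.0):
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes, q_no, b=100.0):
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

print(lmsr_cost(10.0, 0.0) - lmsr_cost(0.0, 0.0))  # ≈ 5.13, cost of 10 YES shares
print(lmsr_price_yes(10.0, 0.0))                   # ≈ 0.525
```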

"Learning Performance of Prediction Markets with Kelly Bettors" looks at the performance of double auction markets with Kelly agents, but doesn't make any reference to Hanson even though I know Pennock is aware of the LMSR.

"The Parimutuel Kelly Probability Scoring Rule" might point to some connection.

Comment by badger on Open Thread March 31 - April 7 2014 · 2014-04-01T12:36:34.255Z · LW · GW

Hidden Order by David Friedman is a popular book, but is semi-technical enough that it could serve as a textbook for an intro microeconomics course.

Comment by badger on Irrationality Game III · 2014-03-13T17:09:12.543Z · LW · GW

What are a few more structured approaches that could substantially improve matters? Some improvements can definitely be made, but I disagree that outcomes are much worse. Two studies suggest marriage markets are about 20% off the optimal match (Suen and Li (1999), "A direct test of the efficient marriage market hypothesis", based on Hong Kong data, and Cao et al (2010), "Optimizing the marriage market", based on Swiss data). While 20% is not trivial, it's not a major failure.

If there are major improvements to be had, I expect it to come through individual attitudes and expectations, not overall structure. Does advice like "don't become fixated on one person while you're still young" count as more structure?

Comment by badger on Open Thread February 25 - March 3 · 2014-02-26T21:43:20.038Z · LW · GW

Scarce signals do increase willingness to go on dates, based on a field experiment of online dating in South Korea.

Comment by badger on Open Thread February 25 - March 3 · 2014-02-26T02:20:25.684Z · LW · GW

Thanks for the SA paper!

The parameter space is only two dimensional here, so it's not hard to eyeball roughly where the minimum is if I sample enough. I can say very little about the noise. I'm more interested in being able to approximate the optimum quickly (since simulation time adds up) than hitting it exactly. The approach taken in this paper based on a non-parametric tau test looks interesting.

Comment by badger on Open Thread February 25 - March 3 · 2014-02-26T01:55:03.482Z · LW · GW

The parameter space in this current problem is only two dimensional, so I can eyeball a plausible region, sample at a higher rate there, and iterate by hand. In another project, I had something with a very high dimensional parameter space, so I figured it's time I learn more about these techniques.

Any resources you can recommend on this topic then? Is there a list of common shortcuts anywhere?

Comment by badger on Open Thread February 25 - March 3 · 2014-02-25T22:35:46.200Z · LW · GW

Not really. In this particular case, I'm minimizing how long it takes a simulation to reach one state, so the distribution ends up looking lognormal- or Poisson-ish.

Edit: Seeing your added question, I don't need an efficient estimator in the usual sense per se. This is more about how to search the parameter space in a reasonable way to find where the minimum is, despite the noise.

Comment by badger on Open Thread February 25 - March 3 · 2014-02-25T22:08:11.061Z · LW · GW

Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful though.
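
Concretely, the naive version might look like this sketch; the objective and noise model are made up, with a true minimum at x = 2:

```python
import random

# The naive approach: average repeated samples at each candidate point and
# grid-search the averaged objective. Simple, but sample-hungry.

def noisy_f(x):
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.5)

def sample_average(f, x, n=50):
    return sum(f(x) for _ in range(n)) / n

grid = [i * 0.1 for i in range(51)]  # candidate points from 0.0 to 5.0
best_x = min(grid, key=lambda x: sample_average(noisy_f, x))
print(best_x)  # near 2.0, up to residual noise
```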

Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among sophisticated approaches?

Comment by badger on Single player extensive-form games as a model of UDT · 2014-02-25T20:26:10.506Z · LW · GW

Your description of incomplete information is off. What you give as the definition of incomplete information is one type of imperfect information, where nature is added as a player.

A game has incomplete information when one player has more information than another about payoffs. Since Harsanyi, incomplete information has been seen as a special case of imperfect information with nature randomly assigning types to each player according to a commonly known distribution and payoffs given types being commonly known.

Comment by badger on Single player extensive-form games as a model of UDT · 2014-02-25T20:02:33.820Z · LW · GW

The first option is standard. When the second interpretation comes up, those strategies are referred to as behavior strategies.

If every information set is visited at most once in the course of play, then the game satisfies no-absent-mindedness and every behavior strategy can be represented as a standard mixed strategy (but some mixed strategies don't have equivalent behavior strategies).

Kuhn's theorem says the game has perfect recall (roughly players never forget anything and there is a clear progression of time) if and only if mixed and behavior strategies are equivalent.

Comment by badger on Open Thread for February 18-24 2014 · 2014-02-24T23:38:18.290Z · LW · GW

Haidt's claim is that liberals rely on purity/sacredness relatively less often, but it's still there. Some of the earlier work on the purity axis put heavy emphasis on sex or sin. Since then, Haidt has acknowledged that the difference between liberals and conservatives might even out if you add food or environmental concerns to purity.

Comment by badger on Open Thread for February 18-24 2014 · 2014-02-24T23:27:02.367Z · LW · GW

Haidt acknowledges that liberals feel disgust at racism and that this falls under purity/sacredness (explicitly listing it in a somewhat older article on Table 1, pg 59). His claim is that liberals rely on the purity/sacredness scale relatively more often, not that they never engage it. Still, in your example, I'd expect the typical reaction to be anger at a fairness violation rather than disgust.

Comment by badger on Open Thread for February 18-24 2014 · 2014-02-20T02:18:11.518Z · LW · GW

My guess is the person most likely to defend this criterion is a Popperian of some flavor, since precise explanations (as you define them) can be cleanly falsified.

While it's nice when something is cleanly falsified, it's not clear we should actively strive for precision in our explanations. An explanation that says all observations are equally likely is hard to disprove and hence hard to gather evidence for by conservation of evidence, but that doesn't mean we should give it an extra penalty.

If all explanations have equal prior probability, then Bayesian reasoning will tend to favor the most precise explanations consistent with the evidence. Seeing a black marble is most likely when all the marbles in a collection are black. If you then found a red marble, that would definitely rule out the black collection (assuming they both had to come from the same one). The best candidate would then be one that is half each. Ultimately, this all comes back down to likelihoods though, so I'm not sure the idea of precision adds much.
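
The marble case as a tiny Bayesian update, with illustrative numbers:

```python
# Two hypotheses with equal priors: the collection is all black, or half black.
priors = {"all_black": 0.5, "half_black": 0.5}
like_black = {"all_black": 1.0, "half_black": 0.5}

unnorm = {h: priors[h] * like_black[h] for h in priors}
z = sum(unnorm.values())
print({h: p / z for h, p in unnorm.items()})  # ≈ {'all_black': 0.667, 'half_black': 0.333}
# A later red draw has likelihood 0 under all_black, eliminating it outright.
```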

Comment by badger on Open Thread for February 18-24 2014 · 2014-02-19T14:58:06.681Z · LW · GW

I've used 1-2mg of nicotine (via gum) a few times a month for a couple of years. I previously used it a few times a week for a few months before getting a methylphenidate prescription for ADD. There hasn't been any noticeable dependency, but I haven't had that with other drugs either.

Using it, I feel more focused and more confident, in contrast to caffeine, which tends to just leave me jittery, and methylphenidate, which is better for focus but doesn't have the slight positive emotion boost. Delivered via gum, the half-life is short (an hour at most). That's not great for a day-to-day stimulant, but it's useful when I need something at 6pm and methylphenidate would interfere with my sleep. The primary downside is occasional nausea. Now I'm wondering if patches would be longer-lasting and less nausea-inducing.