Comments sorted by top scores.

comment by Qiaochu_Yuan · 2018-01-19T01:08:42.853Z · LW(p) · GW(p)

The machine learning metaphor you want is the distinction between supervised learning and unsupervised learning. Supervised learning is when someone hands you a bunch of pictures and labels them as being either cats or dogs and it's your job to infer future cat-dog labels. Unsupervised learning is when someone hands you a bunch of pictures and it's your job to infer that they can be separated into two clumps which, if you handed them to a human, the human might say "those are cats and those are dogs" (but maybe it's more complicated than that).

The simplest subtype of unsupervised learning is clustering, where you only get a bunch of unlabeled data points and it's your job to organize them into clusters, which you might loosely map onto buckets. Roughly speaking, there are three sorts of things that can happen to your clusters as you get more data points, namely

  1. A data point appears which is so far away from all of your existing clusters that you need a new cluster for it,
  2. Something you thought was one cluster gets broken into two clusters, or
  3. Something you thought was two clusters gets merged into one cluster.

So far this metaphor / model doesn't have or need a notion of the "name" of a cluster, which is more complicated. Anyway, this seems like as good a starting point as any for the rationality development needed here.
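
A concrete way to picture those three events is a toy incremental clusterer. Here's a minimal Python sketch (not any particular library's algorithm; the 1-D data, thresholds, and names are all invented for illustration):

```python
# A toy incremental clusterer (1-D, threshold-based) exhibiting the three events
# above: (1) a far-away point starts a new cluster, (2) a cluster that has
# stretched too wide is split in two, (3) two clusters whose centroids drift
# close together are merged. Thresholds, data, and names are all made up.

NEW_CLUSTER_DIST = 5.0   # farther than this from every centroid -> new cluster
SPLIT_SPREAD = 4.0       # a cluster wider than this gets split
MERGE_DIST = 1.0         # centroids closer than this get merged

clusters = []            # each cluster is just a list of numbers

def centroid(c):
    return sum(c) / len(c)

def add_point(x):
    # Event 1: the point is far from every existing cluster, so start a new one.
    if not clusters or min(abs(x - centroid(c)) for c in clusters) > NEW_CLUSTER_DIST:
        clusters.append([x])
        return
    nearest = min(clusters, key=lambda c: abs(x - centroid(c)))
    nearest.append(x)
    # Event 2: the cluster has spread too wide, so split it at its centroid.
    if max(nearest) - min(nearest) > SPLIT_SPREAD:
        mid = centroid(nearest)
        clusters.remove(nearest)
        clusters.append([p for p in nearest if p <= mid])
        clusters.append([p for p in nearest if p > mid])
    # Event 3: two centroids have drifted close together, so merge those clusters
    # (at most one pair per incoming point, to keep the bookkeeping simple).
    for a in clusters:
        for b in clusters:
            if a is not b and abs(centroid(a) - centroid(b)) < MERGE_DIST:
                a.extend(b)
                clusters.remove(b)
                return

for x in [0.0, 0.5, 10.0, 10.5, 1.0, 9.5, 5.0, 5.2]:
    add_point(x)
    print(x, "->", [sorted(c) for c in clusters])
```

With this particular data stream only events 1 and 2 actually fire; event 3 would fire if two centroids drifted to within MERGE_DIST of each other.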

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-01-19T08:31:39.421Z · LW(p) · GW(p)

Ugh, this comment is also on the wrong post; it's supposed to be a comment on Soft Priors.

comment by Qiaochu_Yuan · 2018-01-19T00:57:46.229Z · LW(p) · GW(p)

Neither of these sides really resonated with me as described, so I don't have a good sense of what you're pointing to. The closest you got for me was expecting a lot vs. expecting little.

I just finished reading Elephant in the Brain so I'm particularly drawn to what these mean as social strategies: expecting a lot of others vs. expecting little of others. The question seems to be something like, what do I think other people owe me? What promises, implicit or explicit, do I take them to have made, and how should I react if I think those promises have been broken? Seems related to the question of how diachronic vs. episodic I expect others to be.

comment by Conor Moreton · 2017-09-28T04:30:59.150Z · LW(p) · GW(p)

(secretly interprets lack of commentary as 100% endorsement even though really it's opportunity costs)

comment by hamnox · 2017-09-28T18:38:11.679Z · LW(p) · GW(p)

I try to keep tallies of both wins and mistakes (and maybe a third tally for things I just found interesting), then keep an eye on my relative scores. I reset the counters once one hits ~25 and recalibrate their sensitivity so they stay in a range that feels actionable.

Example: Playing frisbee, I'd keep track of throws that were good enough for the other person to catch (+) or ones they had to run for (-).
If the numbers came too slowly I'd focus on smaller aspects like good form or trying out variations.
If everything counted up too quickly I'd focus on getting a whole sequence (run fast, catch smoothly, do a trick, throw straight) right.
If the tallies were too skewed towards one side or the other, I could focus on just one side or tighten the technicalities of what counted.
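
Roughly the bookkeeping I mean, as a Python sketch (the threshold, names, and demo data are purely illustrative):

```python
# Two tallies; when either hits the reset point, zero both and recalibrate
# what counts as a win or mistake so the scores stay in an actionable range.
RESET_AT = 25

wins, mistakes = 0, 0

def record(outcome):
    """outcome: '+' for a win, '-' for a mistake."""
    global wins, mistakes
    if outcome == '+':
        wins += 1
    else:
        mistakes += 1
    if max(wins, mistakes) >= RESET_AT:
        # Time to reset and adjust the sensitivity of what counts.
        print(f"reset: {wins} wins vs {mistakes} mistakes -> recalibrate")
        wins, mistakes = 0, 0

# e.g. one tally per frisbee throw:
for t in "+ + - + - + +".split():
    record(t)
print(wins, mistakes)
```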

Replies from: Conor Moreton
comment by Conor Moreton · 2017-10-01T02:54:00.267Z · LW(p) · GW(p)

Oh, interesting. Keeping track of + and - for the same thing, like frisbee tosses, indicates an obvious third path of "start out assuming average priors and then look for consistent deltas."

comment by abstractwhiz · 2018-01-23T22:10:37.369Z · LW(p) · GW(p)

EDIT: This belongs on a different post, looks like there's some kind of commenting bug. (I've seen Qiaochu complaining about the same thing here.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-01-23T22:27:15.746Z · LW(p) · GW(p)

The bug is specific to navigating through this sequence, I think; the comment will end up where you started navigating and not where you are. Apparently it will be fixed soon. For now I think refreshing before commenting will fix it?

comment by Chris_Leong · 2017-10-03T08:38:50.812Z · LW(p) · GW(p)

Counting up sounds to me more like what optimists would do - focus on identifying good qualities.

Counting down sounds to me more like what pessimists would do - focus on identifying flaws.

More specifically, it seems to capture a specific element of optimism/pessimism:

  • Do you focus on the good elements or the bad elements?

If I had to break it up into other components, I'd probably say:

  • Do you overestimate or underestimate the probabilities of good things happening (and the reverse for bad things)?

  • Are you confident in your ability to cope with the bad?

Replies from: Conor Moreton
comment by Conor Moreton · 2017-10-03T16:18:05.774Z · LW(p) · GW(p)

I agree that it felt counter-intuitive, at first, to assign counting up to the pessimists and counting down to the optimists. I think really what this points at is that, for me at least, "optimists" and "pessimists" are terms that aren't really cutting reality at the joints.

I think it's a sort of fundamentally pessimistic move to start at zero, but then there's optimism and enthusiasm for each step up. And I think it's sort of fundamentally optimistic to start at 100, but then there's pessimism and holding-to-a-standard with each point knocked off. So I weaken my claim that either strategy is heavily one or the other.