Buck's Shortform

post by Buck · 2019-08-18T07:22:26.247Z · score: 11 (2 votes) · LW · GW · 15 comments


Comments sorted by top scores.

comment by Buck · 2019-12-02T00:27:14.852Z · score: 53 (9 votes) · LW(p) · GW(p)

[I'm not sure how good this is, it was interesting to me to think about, idk if it's useful, I wrote it quickly.]

Over the last year, I internalized Bayes' Theorem much more than I previously had; this led me to noticing that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I'm going to call Bayes' Theorem "Bayes" from now on.)

Before I can tell you about that, I need to make sure you're thinking about Bayes in terms of ratios rather than fractions. Bayes is enormously easier to understand and use when described in terms of ratios. For example: Suppose that 1% of women have a particular type of breast cancer, and a mammogram is 20 times more likely to return a positive result if you do have breast cancer, and you want to know the probability that you have breast cancer if you got that positive result. The prior probability ratio is 1:99, and the likelihood ratio is 20:1, so the posterior probability is = 20:99, so you have probability of 20/(20+99) of having breast cancer.

I think that this is absurdly easier than using the fraction formulation. I think that teaching the fraction formulation is the single biggest didactic mistake that I am aware of in any field.


Anyway, a year or so ago I got into the habit of calculating things using Bayes whenever they came up in my life, and I quickly noticed that Bayes seemed surprisingly aggressive to me.

For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses of "he goes here three times a week" and "he goes here once a month". The likelihood ratio is about 12x in favor of the former hypothesis. So if I previously was ten to one against the three-times-a-week hypothesis compared to the once-a-month hypothesis, I'd now be 12:10 = 6:5 in favor of it. This felt surprisingly high to me.

(I have a more general habit of thinking about whether the results of calculations feel intuitively too low or high to me; this has resulted in me noticing amusing inconsistencies in my numerical intuitions. For example, my intuitions say that $3.50 for ten photo prints is cheap, but 35c per print is kind of expensive.)

Another example: A while ago I walked through six cars of a train, which felt like an unusually long way to walk. But I realized that I'm 6x more likely to see someone who walks 6 cars than someone who walks 1.

In all these cases, Bayes Theorem suggested that I update further in the direction of the hypothesis favored by the likelihood ratio than I intuitively wanted to. After considering this a bit more, I have came to the conclusion that my intuitions were directionally right; I was calculating the likelihood ratios in a biased way, and I was also bumping up against an inconsistency in how I estimated priors and how I estimated likelihood ratios.

If you want, you might enjoy trying to guess what mistake I think I was making, before I spoil it for you.


Here's the main mistake I think I was making. Remember the two hypotheses about my friend going to the hot tub place 3x a week vs once a month? I said that the likelihood ratio favored the first by 12x. I calculated this by assuming that in both cases, my friend visited the hot tub place on random nights. But in reality, when I'm asking whether my friend goes to the hot tub place 3x every week, I'm asking about the total probability of all hypotheses in which he visits the hot tub place 3x per week. There are a variety of such hypotheses, and when I construct them, I notice that some of the hypotheses placed a higher probability on me seeing my friend than the random night hypothesis. For example, it was a Saturday night when I saw my friend there and started thinking about this. It seems kind of plausible that my friend goes once a month and 50% of the times he visits are on a Saturday night. If my friend went to the hot tub place three times a week on average, no more than a third of those visits could be on a Saturday night.

I think there's a general phenomenon where when I make a hypothesis class like "going once a month", I neglect to think about things about specific hypotheses in the class which make the observed data more likely. The hypothesis class offers a tempting way to calculate the likelihood, but it's in fact a trap.

There's a general rule here, something like: When you see something happen that a hypothesis class thought was unlikely, you update a lot towards hypotheses in that class which gave it unusually high likelihood.

And this next part is something that I've noticed, rather than something that follows from the math, but it seems like most of the time when I make up hypotheses classes, something like this happens where I initially calculate the likelihood to be lower than it is, and the likelihoods of different hypothesis classes are closer than they would be.

(I suspect that the concept of a maximum entropy hypothesis is relevant. For every hypothesis class, there's a maximum entropy (aka maxent) hypothesis, which is the hypothesis which is maximally uncertain subject to the constraint of the hypothesis class. Eg the maximum entropy hypothesis for the class "my friend visits the hot tub place three times a month on average" is the hypothesis where the probability of my friend visiting the hot tub place every day is equal and uncorrelated. In my experience in real world cases, hypotheses classes tend to contain non-maxent hypotheses which fit the data better much better. In general for a statistical problem, these hypotheses don't do better than the maxent hypothesis; I don't know why they tend to do better in problems I think about.)


Another thing causing my posteriors to be excessively biased towards low-prior high-likelihood hypotheses is that priors tend to be more subjective to estimate than likelihoods are. I think I'm probably underconfident in assigning extremely high or low probabilities to hypotheses, and this means that when I see something that looks like moderate evidence of an extremely unlikely event, the likelihood ratio is more extreme than the prior, leading me to have a counterintuitively high posterior on the low-prior hypothesis. I could get around this by being more confident in my probability estimates at the 98% or 99% level, but it takes a really long time to become calibrated on those.

comment by Ben Pace (Benito) · 2019-12-02T01:06:03.125Z · score: 7 (3 votes) · LW(p) · GW(p)

If you want, you might enjoy trying to guess what mistake I think I was making, before I spoil it for you.

Time to record my thoughts! I won't try to solve it fully, just note my reactions.

For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses of "he goes here three times a week" and "he goes here once a month". The likelihood ratio is about 12x in favor of the former hypothesis. So if I previously was ten to one against the three-times-a-week hypothesis compared to the once-a-month hypothesis, I'd now be 12:10 = 6:5 in favor of it. This felt surprisingly high to me.

Well, firstly, I'm not sure that the likelihood ratio is 12x in favor of the former hypothesis. Perhaps likelihood of things clusters - like people either do things a lot, or they never do things. It's not clear to me that I have an even distribution of things I do twice a month, three times a month, four times a month, and so on. I'd need to think about this more.

Also, while I agree it's a significant update toward your friend being a regular there given that you saw them the one time you went, you know a lot of people, and if it's a popular place then the chances of you seeing any given friend is kinda high, even if they're all irregular visitors. Like, if each time you go you see a different friend, I think it's more likely that it's popular and lots of people go from time to time, rather than they're all going loads of times each.

Another example: A while ago I walked through six cars of a train, which felt like an unusually long way to walk. But I realized that I'm 6x more likely to see someone who walks 6 cars than someone who walks 1.

I don't quite get what's going on here. As someone from Britain, I regularly walk through more than 6 cars of a train. The anthropics just checks out.

comment by Liam Donovan (liam-donovan) · 2019-12-02T10:55:02.929Z · score: 1 (1 votes) · LW(p) · GW(p)
The prior probability ratio is 1:99, and the likelihood ratio is 20:1, so the posterior probability is 120:991 = 20:99, so you have probability of 20/(20+99) of having breast cancer.

What does "120:991" mean here?

comment by Buck · 2019-12-02T23:29:42.635Z · score: 4 (2 votes) · LW(p) · GW(p)

formatting problem, now fixed

comment by Buck · 2019-08-18T07:22:26.379Z · score: 53 (24 votes) · LW(p) · GW(p)

I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour.

It turns that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever but who are very happy to spend an hour answering your weird questions.

For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn't get about biology. (This is an exercise worth doing even if you're not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I'd prepared, various tangents that came up during those explanations, and his sense of the field overall.

I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are.

There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field. I really like it that I'm paying them, and so I can aggressively direct the conversation to wherever my curiosity is, whether it's about their work or some minor point or whatever. I don't need to worry about them getting bored with me, so I can just keep asking questions until I get something.

Conversational moves I particularly like:

  • "I'm going to try to give the thirty second explanation of how gene expression is controlled in animals; you should tell me the most important things I'm wrong about."
  • "Why don't people talk about X?"
  • "What should I read to learn more about X, based on what you know about me from this conversation?"

All of the above are way faster with a live human than with the internet.

I think that doing this for an hour or two weekly will make me substantially more knowledgeable over the next year.

Various other notes on online tutors:

  • Online language tutors are super cheap--I had some Japanese tutor who was like $10 an hour. They're a great way to practice conversation. They're also super fun IMO.
  • Sadly, tutors from well paid fields like programming or ML are way more expensive.
  • If you wanted to save money, you could gamble more on less credentialed tutors, who are often $20-$40 an hour.

If you end up doing this, I'd love to hear your experience.

comment by habryka (habryka4) · 2019-08-18T07:37:21.896Z · score: 29 (10 votes) · LW(p) · GW(p)

I've hired tutors around 10 times while I was studying at UC-Berkeley for various classes I was taking. My usual experience was that I was easily 5-10 times faster in learning things with them than I was either via lectures or via self-study, and often 3-4 one-hour meetings were enough to convey the whole content of an undergraduate class (combined with another 10-15 hours of exercises).

comment by crabman · 2019-08-18T10:17:57.936Z · score: 11 (5 votes) · LW(p) · GW(p)

How do you spend time with the tutor? Whenever I tried studying with a tutor, it didn't seem more efficient than studying using a textbook. Also when I study on my own, I interleave reading new materials and doing the exercises, but with a tutor it would be wasteful to do exercises during the tutoring time.

comment by habryka (habryka4) · 2019-08-18T18:14:41.892Z · score: 24 (7 votes) · LW(p) · GW(p)

I usually have lots of questions. Here are some types of questions that I tended to ask:

  • Here is my rough summary of the basic proof structure that underlies the field, am I getting anything horribly wrong?
    • Examples: There is a series of proof at the heart of Linear Algebra that roughly goes from the introduction of linear maps in the real numbers to the introduction of linear maps in the complex numbers, then to finite fields, then to duality, inner product spaces, and then finally all the powerful theorems that tend to make basic linear algebra useful.
    • Other example: Basics of abstract algebra, going from groups and rings to modules, fields, general algebra's, etcs.
  • "I got stuck on this exercise and am confused how to solve it". Or, "I have a solution to this exercise but it feels really unnatural and forced, so what intuition am I missing?"
  • I have this mental visualization that I use to solve a bunch of problems, are there any problems with this mental visualization and what visualization/intuition pumps do you use?
    • As an example, I had a tutor in Abstract Algebra who was basically just: "Whenever I need to solve a problem of "this type of group has property Y", I just go through this list of 10 groups and see whether any of them has this property, and ask myself why it has this property, instead of trying to prove it in abstract"
  • How is this field connected to other ideas that I am learning?
    • Examples: How is the stuff that I am learning in real analysis related to the stuff in machine learning? Are there any techniques that machine learning uses from real analysis that it uses to achieve actually better performance?
comment by SoerenMind · 2019-08-20T20:25:23.451Z · score: 10 (4 votes) · LW(p) · GW(p)

Hired an econ tutor based on this.

comment by magfrump · 2019-08-20T04:38:53.042Z · score: 4 (2 votes) · LW(p) · GW(p)

How do you connect with tutors to do this?

I feel like I would enjoy this experience a lot and potentially learn a lot from it, but thinking about figuring out who to reach out to and how to reach out to them quickly becomes intimidating for me.

comment by habryka (habryka4) · 2019-08-20T18:35:05.790Z · score: 4 (2 votes) · LW(p) · GW(p)

I posted on Facebook, and LW might actually also be a good place for some subset of topics.

comment by Buck · 2019-08-20T21:04:12.980Z · score: 1 (1 votes) · LW(p) · GW(p)

I recommend looking on Wyzant.

comment by Ben Pace (Benito) · 2019-08-18T18:08:10.340Z · score: 2 (1 votes) · LW(p) · GW(p)

This sounds like a really fun thing I can do at weekends / in the mornings [LW · GW]. I’ll try it out and report back sometime.

comment by Chris_Leong · 2019-08-18T14:48:57.381Z · score: 2 (1 votes) · LW(p) · GW(p)

Thanks for posting this. After looking, I'm definitely tempted.

comment by Buck · 2019-08-21T01:20:18.379Z · score: 32 (7 votes) · LW(p) · GW(p)

A couple weeks ago I spent an hour talking over video chat with Daniel Cantu, a UCLA neuroscience postdoc who I hired on Wyzant.com to spend an hour answering a variety of questions about neuroscience I had. (Thanks Daniel for reviewing this blog post for me!)

The most interesting thing I learned is that I had quite substantially misunderstood the connection between convolutional neural nets and the human visual system. People claim that these are somewhat bio-inspired, and that if you look at early layers of the visual cortex you'll find that it operates kind of like the early layers of a CNN, and so on.

The claim that the visual system works like a CNN didn’t quite make sense to me though. According to my extremely rough understanding, biological neurons operate kind of like the artificial neurons in a fully connected neural net layer--they have some input connections and a nonlinearity and some output connections, and they have some kind of mechanism for Hebbian learning or backpropagation or something. But that story doesn't seem to have a mechanism for how neurons do weight tying, which to me is the key feature of CNNs.

Daniel claimed that indeed human brains don't have weight tying, and we achieve the efficiency gains over dense neural nets by two other mechanisms instead:

Firstly, the early layers of the visual cortex are set up to recognize particular low-level visual features like edges and motion, but this is largely genetically encoded rather than learned with weight-sharing. One way that we know this is that mice develop a lot of these features before their eyes open. These low-level features can be reinforced by positive signals from later layers, like other neurons, but these updates aren't done with weight-tying. So the weight-sharing and learning here is done at the genetic level.

Secondly, he thinks that we get around the need for weight-sharing at later levels by not trying to be able to recognize complicated details with different neurons. Our vision is way more detailed in the center of our field of view than around the edges, and if we need to look at something closely we move our eyes over it. He claims that this gets around the need to have weight tying, because we only need to be able to recognize images centered in one place.

I was pretty skeptical of this claim at first. I pointed out that I can in fact read letters that are a variety of distances from the center of my visual field; his guess is that I learned to read all of these separately. I'm also kind of confused by how this story fits in with the fact that humans seem to relatively quickly learn to adapt to inversion goggled. I would love to check what some other people who know neuroscience think about this.

I found this pretty mindblowing. I've heard people use CNNs as an example of how understanding brains helped us figure out how to do ML stuff better; people use this as an argument for why future AI advances will need to be based on improved neuroscience. This argument seems basically completely wrong if the story I presented here is correct.