Posts

What are some examples from history where a scientific theory predicted a significant experimental observation in advance? 2021-07-17T05:39:56.721Z

Comments

Comment by Insub on An Observation of Vavilov Day · 2022-01-04T18:40:46.536Z · LW · GW

I'll offer up my own fasting advice as well:

I (and the couple of people I know who have also experimented with fasting) have found it to be a highly trainable skill. Doing a raw 36-hour fast after never having fasted before may be miserable; but doing the same fast after two weeks of 16-8 intermittent fasting will probably be no big deal.

Before I started intermittent fasting, I'd done a few 30-hour fasts, and all of them got very difficult towards the end. I would get headaches, feel very fatigued, and not really be able to function from hours 22-30. When I started IF, the first week was quite tough. I'd have similar symptoms as the fasting window was ending: headaches, trouble focusing. But then right around the two week mark, things changed. The symptoms went away, and the hunger became a much more "passive" feeling. Rather than hunger directly causing discomfort, the hunger now feels more like a "notification". Just my body saying "hey, just so you know, we haven't eaten for a while", rather than it saying "you're going to die if you don't eat right this moment". This change has been persistent, even during periods where I've stopped IF.

Both of the others I've seen try IF have reported something similar: the first few weeks are tough, but then the character of hunger itself starts to change. Today, I can go 24 hours without eating fairly trivially, i.e. without much distraction or performance decrease from hunger.

Going 36 hours will still be a challenge, but some pre-training may make it easier! Of course, you may be specifically trying to test your willpower, in which case making it easier may be counterproductive. Either way, this seems like a cool idea for a secular holiday. Best of luck!

Comment by Insub on AI Safety Needs Great Engineers · 2021-11-24T03:08:30.647Z · LW · GW

I'm in a similar place, and had the exact same thought when I looked at the 80k guide.

Comment by Insub on Nate Soares on the Ultimate Newcomb's Problem · 2021-11-01T21:49:55.453Z · LW · GW

Yes that was my reasoning too. The situation presumably goes:

  1. Omicron chooses a random number X, either prime or composite
  2. Omega simulates you, makes its prediction, and decides whether X's primality is consistent with its prediction
  3. If it is, then:
    1. Omega puts X into the box
    2. Omega teleports you into the room with the boxes and has you make your choice
  4. If it's not, then...? I think the correct solution depends on what Omega does in this case.
    1. Maybe it just quietly waits until tomorrow and tries again? In which case no one is ever shown a case where the box does not contain Omicron's number. If this is how Omega is acting, then I think you can act as though your choice affects Omicron's number, even though that number is technically random on this particular day.
    2. Maybe it just picks its own number, and shows you the problem anyway. I believe this was the assumption in the post.
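The retry interpretation (case 4.1) can be sketched in a few lines. This is just my toy model, not anything from the post: I'm collapsing the one-box/two-box prediction into a boolean "Omega predicts prime", and `run_until_consistent` and `predict` are hypothetical names.

```python
import random

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def run_until_consistent(predict):
    """Case 4.1 above: Omega silently retries each day until Omicron's
    random number happens to be consistent with its prediction, so
    observers only ever see consistent days."""
    while True:
        x = random.randrange(2, 100)       # step 1: Omicron's random number
        predicts_prime = predict(x)        # step 2: Omega simulates you
        if predicts_prime == is_prime(x):  # consistency check
            return x                       # step 3: X goes in the box
        # step 4.1: otherwise quietly wait until tomorrow and try again

# An agent whose simulated behavior leads Omega to predict "prime" is
# only ever shown primes under this retry rule:
x = run_until_consistent(lambda n: True)
```

The selection effect is the whole point: the number is random on any given day, but conditioning on "you were shown the boxes" screens off the inconsistent days.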
Comment by Insub on How much should you update on a COVID test result? · 2021-10-18T01:41:03.240Z · LW · GW

I remember hearing from what I thought were multiple sources that your run-of-the-mill PCR test had something like a 50-80% sensitivity, and therefore a pretty bad Bayes factor for negative tests. But that doesn't seem to square with these results - any idea what I'm thinking of?
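For concreteness, here's the arithmetic behind "pretty bad Bayes factor" as I understood it (the ~99% specificity is my assumption, not from the post):

```python
def negative_test_bayes_factor(sensitivity, specificity=0.99):
    # Likelihood ratio for a NEGATIVE result:
    # P(negative | infected) / P(negative | not infected)
    return (1 - sensitivity) / specificity

low_sens = negative_test_bayes_factor(0.5)   # ~0.51: barely moves the prior
high_sens = negative_test_bayes_factor(0.8)  # ~0.20: a more useful update
```

With 50% sensitivity, a negative test only roughly halves your odds of infection, which is what made a negative result seem so uninformative.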

Comment by Insub on Secure homes for digital people · 2021-10-11T01:07:56.468Z · LW · GW

I agree. It makes me really uncomfortable to think that while Hell doesn't exist today, we might one day have the technology to create it.

Comment by Insub on EA Hangout Prisoners' Dilemma · 2021-09-28T04:49:36.284Z · LW · GW

I’m disappointed that a cooperative solution was not reached

I think you would have had to make the total cooperation payoff greater than the total one-side-defects payoff in order to get cooperation as the final result. From a "maximize money to charity" standpoint, defection seems like the best outcome here (I also really like the "pre-commit to flip a coin and nuke" solution). You'd have to believe that the expected utility/$ of the "enemy" charity is less than 1/2 of the expected utility/$ of yours; otherwise, you'd be happier with the enemy side defecting than with cooperation. I personally wouldn't be that confident about the difference between AMF and MIRI.
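To spell out where the 1/2 threshold comes from, here's the comparison with hypothetical payoffs (numbers chosen to illustrate; a lone defector's pot is assumed to be 3x the per-side cooperation pot):

```python
coop_each = 100    # mutual cooperation: each side's charity gets this
defect_pot = 300   # one side defects: its charity gets this, the other gets 0

def prefer_cooperation(u_mine, u_enemy):
    """Compare total expected utility (utility/$ times dollars) of
    mutual cooperation vs the enemy side defecting."""
    coop_value = u_mine * coop_each + u_enemy * coop_each
    enemy_defects_value = u_enemy * defect_pot
    return coop_value > enemy_defects_value

# Cooperation only wins when the enemy charity's utility/$ is less than
# half of mine: u_enemy < u_mine * coop_each / (defect_pot - coop_each)
prefer_cooperation(u_mine=1.0, u_enemy=0.49)   # True
prefer_cooperation(u_mine=1.0, u_enemy=0.51)   # False
```

So unless I'm quite confident the other charity is less than half as effective per dollar as mine, the enemy defecting is actually the outcome I should root for.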

Comment by Insub on [deleted post] 2021-09-06T02:18:14.383Z

For those of us who don't have time to listen to the podcasts, can you give a quick summary of which particular pieces of evidence are strong? I've mostly been ignoring the UFO situation due to low priors. Relatedly, when you say the evidence is strong, do you mean that the posterior probability is high? Or just that the evidence causes you to update towards there being aliens? Ie, is the evidence sufficient to outweigh the low priors/complexity penalties that the alien hypothesis seems to have?

FWIW, my current view is something like:

  • I've seen plenty of videos of UFOs that seemed weird at first that turned out to have a totally normal explanation. So I treat "video looks weird" as somewhat weak Bayesian evidence.
  • As for complexity penalties: If there were aliens, it would have to be explained why they mostly-but-not-always hide themselves. I don't think it would be incompetence, if they're the type of civilization that can travel stellar distances.
  • It would also have to be explained why we haven't seen evidence of their (presumably pretty advanced) civilization
  • And it would have to be explained why there hasn't been any real knock-down evidence, e.g. HD close-up footage of an obviously alien ship (unless this is the type of evidence you're referring to?). A bunch of inconclusive, non-repeatable, low-quality data seems to be much more likely in the world where UFOs are not aliens. Essentially there's a selection effect where any sufficiently weird video will be taken as an example of a UFO. It's easier for a low-quality video to be weird, because the natural explanations are masked by the low quality. So the set of weird videos will include more low-quality data sources than the overall ratio of existing high/low quality sources would indicate. Whereas, if the weird stuff really did exist, you'd expect the incidence of weird videos to match the distribution of high/low quality sources (which I don't think it does - as video tech has improved, have we seen corresponding improvements in the average quality of UFO videos?).
Comment by Insub on How To Write Quickly While Maintaining Epistemic Rigor · 2021-08-29T17:51:37.805Z · LW · GW

I really like this post for two reasons:

  1. I've noticed that when I ask someone "why do you believe X", they often think that I'm asking them to cite sources or studies or some such. This can put people on the defensive, since we usually don't have ready-made citations in our heads for every belief. But that's not what I'm trying to ask; I'm really just trying to understand what process actually caused them to believe X, as a matter of historical fact. That process could be "all the podcasters I listen to take X as a given", or "my general life experience/intuition has shown X to be true". You've put this concept into words here and solidified the idea for me: that it's helpful to communicate why you actually believe something, and let others do with that what they will.
  2. The point about uncertainty is really interesting. I'd never realized before that if you present your conclusion first, and then the evidence for it, then it sure looks like you already had that hypothesis for some reason before getting a bunch of confirming evidence. Which implies that you have some sort of evidence/intuition that led you to the hypothesis in addition to the evidence you're currently presenting.

I've wondered why I enjoy reading Scott Alexander so much, and I think that the points you bring up here are a big reason why. He explains his processes really well, and I usually end up feeling that I understand what actually caused him to believe his conclusions.

Comment by Insub on What are some beautiful, rationalist sounds? · 2021-08-06T03:24:56.447Z · LW · GW

In a similar vein, there are a bunch of Symphony of Science videos. These are basically remixes of quotes by various scientists, roughly grouped by topic into a bunch of songs.

Comment by Insub on What does knowing the heritability of a trait tell me in practice? · 2021-07-27T05:14:58.022Z · LW · GW

If, on the other hand, heritability is high, then throwing more effort/money at how we do education currently should not be expected to improve SAT scores

I agree with spkoc that this conclusion doesn't necessarily follow from high heritability. I think it would follow from high and stable heritability across multiple attempted interventions.

An exaggerated story for the point I'm about to make: imagine you've never tried to improve SAT scores, and you measure the heritability. You find that, in this particular environment, genetic variance explains 100% of the variance in SAT scores. You can predict someone's SAT score perfectly just by looking at their genome. You decide to take the half of the population with the highest predicted scores, and keep the SAT a secret from them until the day they take the test. And for the lower half, you give them dedicated tutors to help them prepare. Given the 100% heritability, you expect scores to stay exactly the same. But wait! What no one told you was that before your intervention, the learning environment had been magically uniform for every student. There had been no environmental variance at all, and so the only thing left to explain test scores was genetics. What you didn't realize is that your heritability estimate gave you no information at all about how environmental changes would affect scores - because there was no environmental variance at all!

A single heritability measurement only tells you, roughly, the ratio of "[existing environmental variance] times [sensitivity to environmental variance]" to "[existing genetic variance] times [sensitivity to genetic variance]". But it doesn't do anything to disentangle the sensitivities-to-variances from the actual variances. What if there's practically zero variance in the environment, but a high sensitivity of the trait you're looking at to environmental variance? You'd find heritability is very high, but changes to the environment will cause large decreases to heritability. Same thing with genes: what if your trait is 100% determined by genes, but it just so happens that everyone has the exact same genes? You'd find that genetic variance explains zero percent of your trait, but if you then tried some genetic engineering, you'd find heritability shoot upward.

In order to disentangle the "sensitivity of X to environmental variance" from "the level of environmental variance", you'd have to run multiple interventions over time, and measure the heritability of X after each one (or be confident that your existing environment has lots of variance).
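The exaggerated story above is easy to simulate. This is a toy model of my own invention - the additive score function and the variance magnitudes are arbitrary illustrative choices:

```python
import random

random.seed(0)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sat_score(genes, env):
    # Hypothetical: the trait is highly sensitive to BOTH genes and environment
    return 1000 + 100 * genes + 100 * env

people = [random.gauss(0, 1) for _ in range(10_000)]

# Before the intervention the environment is magically uniform (env = 0),
# so genes explain 100% of the observed variance - heritability looks perfect:
before = [sat_score(g, env=0.0) for g in people]

# An intervention that introduces environmental variance changes scores
# substantially, despite the earlier "100% heritability" measurement:
after = [sat_score(g, env=random.gauss(0, 1)) for g in people]
```

The first measurement tells you nothing about the `env` coefficient, because `env` never varied; only the second regime reveals it.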

Comment by Insub on A Contamination Theory of the Obesity Epidemic · 2021-07-26T06:02:59.637Z · LW · GW

People get fat eating fruits

Are you implying that there are examples of people like BDay mentioned, who are obese despite only eating fruits/nuts/meat/veggies? Or just that people can get fat while including fruit in the diet? I'd be surprised and intrigued if it were the former. 

I've tried the whole foods diet, and I've personally found it surprisingly hard to overeat, even when I let myself eat as many fruits and nuts as I want. You can only eat so many cashews before they start to feel significantly less appetizing. And after I've eaten 500 cal of cashews in one sitting, the next time I'm feeling snacky, those cashews still sound kinda meh. Fruit is certainly easier to eat, but still after the fourth or fifth clementine I feel like "ok that's enough" (and that's probably only ~300 calories). Whereas I could easily eat 500 cal of candy without breaking a sweat.

I think one major roadblock to overeating with fruit is that it takes effort to eat. You have to peel an orange, or cut up a kiwi or melon, or bite off the green part of a strawberry. There's a lot more work involved in eating 500 cal of fruit than there is in unwrapping a candy bar or opening a party size bag of chips. 

So all of this rambling is just to say that I'm somewhat skeptical of the claims that "fruit (nuts) are mostly sugar (fat) and are calorie dense, and you can overeat them just like you can with junk food". I think it's surprisingly hard in practice to do so (and it's much less enjoyable than overeating junk food).

Comment by Insub on [deleted post] 2021-06-23T22:19:00.506Z

Couple more:

"he wasn't be treated"

"Club cast cast Lumos"

Comment by Insub on I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 2021-06-22T05:48:32.884Z · LW · GW

It seems to me that the hungry->full Dutch book can be resolved by just considering the utility function one level deeper: we don't value hungriness or fullness (or the transition from hungry to full) as terminal goals themselves. We value moving from hungry to full, but only because doing so makes us feel good (and gives nutrients, etc). In this case, the "feeling good" is the part of the equation that really shows up in the utility function, and a coherent strategy would be one for which this amount of "feeling good" can not be purchased for a lower cost.

Comment by Insub on Irrational Modesty · 2021-06-21T03:08:16.291Z · LW · GW

In the event  anyone reading this has objective, reliable external metrics of extremely-high ability yet despite this feels unworthy of exploring the possibility that they can contribute directly to research

Huh, that really resonates with me. Thanks for this advice.

Comment by Insub on The Darwin Game - Conclusion · 2020-12-04T19:21:38.089Z · LW · GW

For the record, here's what the 2nd place CooperateBot [Insub] did:

  • On the first turn, play 2.
  • On other turns:
    • If we added up to 5 on the last round, play the opponent's last move
    • Otherwise, 50% of the time play max(my last move, opponent's last move), and 50% of the time play 5 minus that

My goal for the bot was to find a simple strategy that gets into streaks of 2.5's as quickly as possible with other cooperation-minded bots. Seems like it mostly worked.
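The strategy above is simple enough to reconstruct in a few lines. This is a sketch from memory, not the actual tournament submission (the class and method names here are made up, and the real game harness had its own interface):

```python
import random

class CooperateBot:
    """Play 2 first; then mirror the opponent whenever the last round
    summed to 5, otherwise coin-flip between max(last moves) and its
    complement to 5, hunting for a 2/3 streak."""

    def __init__(self):
        self.my_last = None
        self.opp_last = None

    def move(self):
        if self.my_last is None:
            return 2                      # first turn: play 2
        if self.my_last + self.opp_last == 5:
            return self.opp_last          # keep the 2.5-average streak going
        m = max(self.my_last, self.opp_last)
        return m if random.random() < 0.5 else 5 - m

    def record(self, my_move, opp_move):
        self.my_last, self.opp_last = my_move, opp_move
```

Once two such bots hit any pair summing to 5, the mirroring rule locks them into alternating that pair forever, which is why the streaks formed quickly.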

Comment by Insub on The Darwin Game - Conclusion · 2020-12-04T16:13:21.624Z · LW · GW

Is something strange going on in the Round 21-40 plot vs the Round 41-1208 plot? It looks like the line labeled MeasureBot in the Round 21-40 plot switches to being labeled CooperateBot [Insub] in the Round 41-1208 plot. I hope my simple little bot actually did get second place!