Posts

Alcohol, health, and the ruthless logic of the Asian flush 2021-06-04T18:14:08.797Z
The irrelevance of test scores is greatly exaggerated 2021-04-15T14:15:29.046Z
Poll: Any interest in an editing buddy system? 2020-12-02T02:18:40.443Z
Simpson's paradox and the tyranny of strata 2020-11-19T17:46:32.504Z
It's hard to use utility maximization to justify creating new sentient beings 2020-10-19T19:45:39.858Z
Police violence: The veil of darkness 2020-10-12T21:32:33.808Z
Doing discourse better: Stuff I wish I knew 2020-09-29T14:34:55.913Z
Making the Monty Hall problem weirder but obvious 2020-09-17T12:10:23.472Z
What happens if you drink acetone? 2020-09-14T14:22:41.417Z
Comparative advantage and when to blow up your island 2020-09-12T06:20:36.622Z

Comments

Comment by dynomight on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-07T18:12:37.647Z · LW · GW

In principle, I guess you could also think about low-tech solutions. For example, people who want to opt out of alcohol might have some slowly dissolving tattoo / dye placed somewhere on their hand or something. This would eliminate the need for any extra ID checks, but has the big disadvantage it would be visible most of the time.

Comment by dynomight on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-05T19:56:19.086Z · LW · GW

Thanks. Are you able to determine what the typical daily dose is for implanted disulfiram in Eastern Europe? People who take oral disulfiram typically need something like 0.25g / day to have a significant physiological effect. However, most of the evidence I've been able to find (e.g. this paper) suggests that the total amount of disulfiram in implants is around 1g. If that's dispensed over a year, you're getting something like 1% of the dose that's active orally. On top of that, the evidence seems pretty strong that bioavailability from implants is lower than from oral doses, so it's effectively even less.

Of course, there's nothing stopping someone from implanting a dose 100x as large, and maybe bioavailability can be improved (or isn't that big a concern). But if not, my impression is that most implants are effectively pure placebo.
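To make the back-of-envelope arithmetic explicit, here's a minimal sketch using only the figures quoted above (the comment's numbers, not a clinical source):

```python
# Figures as quoted in the comment (assumptions, not clinical data):
ORAL_DAILY_G = 0.25    # typical effective oral dose, grams/day
IMPLANT_TOTAL_G = 1.0  # total disulfiram in a typical implant, grams

implant_daily_g = IMPLANT_TOTAL_G / 365            # if dispensed over a year
fraction_of_oral = implant_daily_g / ORAL_DAILY_G  # about 0.011

print(f"implant delivers {fraction_of_oral:.1%} of the oral daily dose")
```

This is where the "like 1% of the dosage" figure comes from, before even accounting for the lower bioavailability of implants.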

Comment by dynomight on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-05T17:41:40.398Z · LW · GW

Very interesting! Do you know how much disulfiram the implant gives out per day? There are a bunch of papers on implants, but there are usually concerns that (a) the dosage might be much smaller than the typical oral dosage and/or (b) absorption is poor.

Comment by dynomight on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-05T01:48:53.196Z · LW · GW

I specified (right before the first graph) that I was using the US standard of 14g. (I know the paper uses 10g. There's no conflict because I use their raw data, which is in grams, not drinks.)

Comment by dynomight on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-04T21:38:12.665Z · LW · GW

Ironically, there is no standard for what a "standard drink" is, with different countries defining it to be anything from 8g to 20g of ethanol.

Comment by dynomight on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-04T21:03:49.914Z · LW · GW

I wasn't (intentionally?) being ironic. I guess that for underage drinking we have the advantage that you can sort of guess how old someone looks, but still... good point.

Comment by dynomight on The irrelevance of test scores is greatly exaggerated · 2021-04-22T13:58:21.628Z · LW · GW

I've politely contacted them several times via several different channels, just asking for clarifications and what the "missing coefficients" are in the last model. Total stonewall: they won't even acknowledge my contacts. Some people more connected to the education community also apparently did that as a result of my post, with the same result.

Comment by dynomight on How is rationalism different from utilitarianism? · 2021-02-15T15:26:55.208Z · LW · GW

You could model the two as being totally orthogonal:

  • Rationality is the art of figuring out how to get what you want.
  • Utilitarianism is a calculus for figuring out what you should want.

In practice, I think the dividing lines are more blurry. Also, the two tend to come up together because people who are attracted to the thinking in one of these tend to be attracted to the other as well.

Comment by dynomight on Simpson's paradox and the tyranny of strata · 2020-11-20T19:04:13.112Z · LW · GW

You definitely need an amount of data at least exponential in the number of parameters, since the number of "bins" is exponential. (It's not so simple as to say that exponential is enough, because it depends on the distributional overlap. If there are cases where one group never hits a given bin, then even an infinite amount of data doesn't save you.)
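To make the exponential blow-up concrete, here's a small sketch (my own illustration, not from the original discussion):

```python
# When stratifying along every feature, with k levels per feature and
# d features there are k**d strata, so you need at least that many data
# points just to put one observation in each bin.
def n_strata(levels_per_feature: int, n_features: int) -> int:
    return levels_per_feature ** n_features

for d in (1, 5, 10, 20):
    print(f"{d:>2} features -> {n_strata(3, d):,} strata")
```

With a modest 3 levels per feature, 20 features already give about 3.5 billion strata, which is why "just stratify on everything" stops being practical so quickly.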

Comment by dynomight on Simpson's paradox and the tyranny of strata · 2020-11-20T18:58:43.535Z · LW · GW

I see what you're saying, but I was thinking of a case where there is zero probability of having overlap among all features. While that technically restores the property that you can multiply the dataset by arbitrarily large numbers, it feels a little like "cheating", and I agree with your larger point.

I guess Simpson's paradox does always have a right answer in "stratify along all features"; it's just that the amount of data you need increases exponentially in the number of relevant features. So I think that in the real world you can multiply the amount of data by a very, very large number and it won't solve the problem, even though a large enough number would.

In the real world it's often also sort of an open question whether the number of "features" is finite or not.

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T22:55:22.032Z · LW · GW

I like your idea that the only "safe" way to use utilitarianism is to not include new entities (otherwise you run into trouble). But I feel like they have to be included in some cases. E.g., if I knew that getting a puppy would make me slightly happier, but the puppy would be completely miserable, surely that's the wrong thing to do?

(PS thank you for being willing to play along with the unrealistic setup!)

Comment by dynomight on Message Length · 2020-10-20T15:10:43.458Z · LW · GW

This covers a really impressive range of material -- well done! I just wanted to point out that if someone followed all of this and wanted more, Shannon's 1948 paper is surprisingly readable even today and is probably a nice companion:

http://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T15:04:36.947Z · LW · GW

Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of what the best actions to take in each situation are. It seems that we don't live in such a universe, and that our ethical intuitions are indeed sort of arbitrarily created by evolution. So I agree we don't need to mathematically justify these things (and maybe it's impossible), but I wish we could!

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T00:33:27.400Z · LW · GW

If I understand your second point, you're suggesting that part of the reason our intuition says larger populations are better is that larger populations tend to make the average utility higher. I like that! It would be interesting to try to estimate at what human population level average utility would be highest. (In hunter/gatherer or agricultural times, probably very low levels. Today, probably a lot higher?)

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-19T22:05:46.895Z · LW · GW

Can you clarify which answer you believe is the correct one in the puppy example? Or, even better: the current utility for the dog in the "yes puppy" example is 5. For what values do you believe it is correct to have or not have the puppy?

Comment by dynomight on Police violence: The veil of darkness · 2020-10-13T21:08:19.750Z · LW · GW

My guess is that the problem is I didn't make it clear that this is just the introduction from the link? Sorry, I edited to clarify.

Comment by dynomight on Doing discourse better: Stuff I wish I knew · 2020-09-29T19:20:53.726Z · LW · GW

Totally agree that the different failure modes are in reality interrelated and interdependent. In fact, one ("necessary despot") is a consequence of trying to counter some of the others. I do feel that there's enough similarity between some of the failure modes at different sites that it's worth trying to name them. The temporal dimension is also an interesting point. I actually went back and looked at some of the comments on Marginal Revolution posts from years ago. They are pretty terrible today, but years ago they were quite good.

Comment by dynomight on Comparative advantage and when to blow up your island · 2020-09-21T23:03:15.826Z · LW · GW

In principle, for work done for the market, I guess you don't need to explicitly think about free trade. Rather, by everyone pursuing their own interests ("how much money can I make doing this?"), they'll eventually end up specializing in their comparative advantage anyway. Though, with a finite lifetime, you might want to think about it to short-circuit "eventually".

For stuff not done for the market (like dividing up chores), I'd think there's more value in thinking about it explicitly. That's because there's no invisible hand naturally pushing people toward their comparative advantage, so you're more likely to end up doing things inefficiently.

Comment by dynomight on Making the Monty Hall problem weirder but obvious · 2020-09-18T02:38:12.006Z · LW · GW

Thanks for pointing this out. I had trouble with the image formatting trying to post it here.

Comment by dynomight on Making the Monty Hall problem weirder but obvious · 2020-09-17T17:13:52.941Z · LW · GW

That's definitely the central insight! However, experimentally, I found that explanation alone was only useful for people who already understood Monty Hall pretty well. The extra steps (the "10 doors" step and the "Monty promising") seem to lose fewer people.

That being said, my guess is that most lesswrong-ites probably fall into the "already understood Monty Hall" category, so...
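For anyone who wants to check the underlying probability numerically rather than through intuition, a quick simulation (my own sketch, not from the post):

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty, who knows where the car is, opens a door that is neither
    # the player's pick nor the car.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

random.seed(0)
n = 100_000
win_rate = sum(play(switch=True) for _ in range(n)) / n
print(win_rate)  # close to 2/3
```

Switching wins whenever the initial pick was wrong, which happens 2/3 of the time; the simulation just makes that visible without any argument at all.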