Posts

Simpson's paradox and the tyranny of strata 2020-11-19T17:46:32.504Z
It's hard to use utility maximization to justify creating new sentient beings 2020-10-19T19:45:39.858Z
Police violence: The veil of darkness 2020-10-12T21:32:33.808Z
Doing discourse better: Stuff I wish I knew 2020-09-29T14:34:55.913Z
Making the Monty Hall problem weirder but obvious 2020-09-17T12:10:23.472Z
What happens if you drink acetone? 2020-09-14T14:22:41.417Z
Comparative advantage and when to blow up your island 2020-09-12T06:20:36.622Z

Comments

Comment by dynomight on Simpson's paradox and the tyranny of strata · 2020-11-20T19:04:13.112Z · LW · GW

You definitely need an amount of data at least exponential in the number of parameters, since the number of "bins" is exponential. (It's not so simple as to say that exponential is enough, because it depends on the distributional overlap. If there are cases where one group never hits a given bin, then even an infinite amount of data doesn't save you.)
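(To make the overlap point concrete, here's a minimal Python sketch with made-up numbers: one group is assumed to never have a particular feature value, so half the bins stay empty for it no matter how much data you collect.)

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10           # binary features -> 2**k bins to stratify over
n = 100_000      # plenty of data

# Hypothetical setup: group A never has feature 0 "on", so half of the
# bins are unreachable for it, regardless of sample size.
features_a = rng.integers(0, 2, size=(n, k))
features_a[:, 0] = 0

occupied = {tuple(row) for row in features_a}
print(f"bins group A can ever occupy: {len(occupied)} of {2**k}")
# Any bin with feature 0 on contains group A data with probability zero,
# so a stratified comparison there is undefined even with infinite data.
```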

Comment by dynomight on Simpson's paradox and the tyranny of strata · 2020-11-20T18:58:43.535Z · LW · GW

I see what you're saying, but I was thinking of a case where there is zero probability of having overlap among all features. While that technically restores the property that you can multiply the dataset by arbitrarily large numbers, it feels a little like "cheating", and I agree with your larger point.

I guess Simpson's paradox does always have a right answer in "stratify along all features"; it's just that the amount of data you need increases exponentially in the number of relevant features. So I think that in the real world you can multiply the amount of data by a very, very large number and it won't solve the problem, even though a large enough number eventually would.
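(A quick illustration of the exponential blow-up, with hypothetical numbers: hold the dataset size fixed and watch how many strata actually contain any data as the feature count grows.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # fixed dataset size

# With k binary features there are 2**k strata. Once 2**k is comparable
# to n, most strata hold little or no data, so you'd need to grow the
# dataset exponentially in k to keep the per-stratum estimates sane.
for k in (5, 10, 15, 20):
    rows = rng.integers(0, 2, size=(n, k))
    occupied = len({tuple(r) for r in rows})
    print(f"k={k:2d}: {occupied:6d} of {2**k:9,d} strata contain any data")
```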

In the real world it's often also sort of an open question whether the number of "features" is even finite.

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T22:55:22.032Z · LW · GW

I like your idea that the only "safe" way to use utilitarianism is to leave out new entities (otherwise you run into trouble). But I feel like they have to be included in some cases. E.g., if I knew that getting a puppy would make me slightly happier, but the puppy would be completely miserable, surely that's the wrong thing to do?

(PS thank you for being willing to play along with the unrealistic setup!)

Comment by dynomight on Message Length · 2020-10-20T15:10:43.458Z · LW · GW

This covers a really impressive range of material -- well done! I just wanted to point out that if someone followed all of this and wanted more, Shannon's 1948 paper is surprisingly readable even today and is probably a nice companion:

http://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T15:04:36.947Z · LW · GW

Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of the best actions to take in each situation. It seems that we don't live in such a universe, and that our ethical intuitions are indeed sort of arbitrarily created by evolution. So I agree we don't need to mathematically justify these things (and maybe it's impossible), but I wish we could!

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-20T00:33:27.400Z · LW · GW

If I understand your second point, you're suggesting that part of why our intuition says large populations are better is that larger populations tend to make the average utility higher. I like that! It would be interesting to try to estimate at what human population level average utility would be highest. (In hunter-gatherer or agricultural times, probably very low levels. Today, probably a lot higher?)

Comment by dynomight on It's hard to use utility maximization to justify creating new sentient beings · 2020-10-19T22:05:46.895Z · LW · GW

Can you clarify which answer you believe is the correct one in the puppy example? Or, even better: the current utility for the dog in the "yes puppy" example is 5; for what values do you believe it is correct to have or not have the puppy?
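(For concreteness, here's a toy sketch of the two candidate answers. Every number except the dog's 5 is hypothetical.)

```python
# Made-up utilities; only the dog's 5 comes from the thread.
me_no_puppy, me_yes_puppy = 10.0, 11.0   # I'm slightly happier with the puppy
dog_yes_puppy = 5.0                      # the value under discussion

no_puppy = [me_no_puppy]                 # the dog doesn't exist in this world
yes_puppy = [me_yes_puppy, dog_yes_puppy]

for name, world in [("no puppy", no_puppy), ("yes puppy", yes_puppy)]:
    print(f"{name}: total={sum(world):5.1f}  average={sum(world)/len(world):5.2f}")

# Total utilitarianism prefers "yes puppy" (16 > 10), while average
# utilitarianism prefers "no puppy" (10 > 8): that's the disagreement
# the question is probing.
```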

Comment by dynomight on Police violence: The veil of darkness · 2020-10-13T21:08:19.750Z · LW · GW

My guess is that the problem is that I didn't make it clear this is just the introduction from the link? Sorry, I've edited it to clarify.

Comment by dynomight on Doing discourse better: Stuff I wish I knew · 2020-09-29T19:20:53.726Z · LW · GW

Totally agree that the different failure modes are in reality interrelated and interdependent. In fact, one ("necessary despot") is a consequence of trying to counter some of the others. I do feel that there's enough similarity between some of the failure modes at different sites that it's worth trying to name them. The temporal dimension is also an interesting point. I actually went back and looked at some comments on Marginal Revolution posts from years ago: the comments are pretty terrible today, but years ago they were quite good.

Comment by dynomight on Comparative advantage and when to blow up your island · 2020-09-21T23:03:15.826Z · LW · GW

In principle, for work done for the market, I guess you don't need to explicitly think about free trade. Rather, by everyone pursuing their own interests ("how much money can I make doing this?"), they'll eventually end up specializing in their comparative advantage anyway. Though, with a finite lifetime, you might want to think about it to short-circuit "eventually".

For stuff not done for the market (like dividing up chores), I'd think there's more value in thinking about it explicitly. That's because there's no invisible hand naturally pushing people toward their comparative advantage, so you're more likely to end up doing things inefficiently.
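(Here's a minimal sketch of the comparative-advantage calculation with made-up island numbers, just to show the opportunity-cost arithmetic.)

```python
# Hypothetical hours each person needs to produce one unit of each good.
hours = {
    "alice": {"fish": 1.0, "coconuts": 2.0},
    "bob":   {"fish": 3.0, "coconuts": 4.0},
}

# Alice has an absolute advantage at both tasks, but comparative advantage
# is about opportunity cost: how many coconuts does one fish cost you?
for person, h in hours.items():
    opp_cost = h["fish"] / h["coconuts"]
    print(f"{person}: one fish costs {opp_cost:.2f} coconuts of labor")

# alice: 0.50, bob: 0.75 -> Alice should specialize in fish and Bob in
# coconuts, even though Bob is slower at everything. Any price between
# the two opportunity costs makes trade profitable for both, which is
# the "invisible hand" pushing market work toward comparative advantage.
```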

Comment by dynomight on Making the Monty Hall problem weirder but obvious · 2020-09-18T02:38:12.006Z · LW · GW

Thanks for pointing this out. I had trouble with the image formatting when trying to post it here.

Comment by dynomight on Making the Monty Hall problem weirder but obvious · 2020-09-17T17:13:52.941Z · LW · GW

That's definitely the central insight! However, experimentally, I found that explanation alone was only useful for people who already understood Monty Hall pretty well. The extra steps (the "10 doors" step and the "Monty promising" step) seem to lose fewer people.

That being said, my guess is that most lesswrong-ites probably fall into the "already understood Monty Hall" category, so...
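(For anyone who wants to check the numbers anyway, here's a minimal Monte Carlo sketch of the stay-vs-switch comparison, assuming Monty opens every losing door except one.)

```python
import random

def win_rate(n_doors: int, switch: bool, trials: int = 100_000) -> float:
    """Monty opens every losing door except one; you then stay or switch."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        # If you switch, the single remaining closed door hides the prize
        # exactly when your first pick was wrong.
        wins += (prize != pick) if switch else (prize == pick)
    return wins / trials

for n in (3, 10):
    print(f"{n:2d} doors: stay={win_rate(n, False):.3f}  "
          f"switch={win_rate(n, True):.3f}")
# Expected: stay = 1/n and switch = (n-1)/n, so 0.9 for the 10-door version.
```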