At the point of death, presumably, the person whose labour is seized no longer exists. I think that's a good point to consider, since I also suspect that a significant amount of the resistance to the idea of no inheritance assumes the dead person's will remains a moral factor after their death.
I tend to agree that in such a world there would be more consumption and less saving as people approach old age, but I'm not sure whether that's a problem, or how big a problem it is, and there are ways for governments to nudge that ratio through monetary policy.
I also don't agree that you're effectively limiting people's power to affect causes they care about to what the government would do with the money: people have other causes they care about besides their offspring, even if to a lesser degree, and they are free to spend their money while alive to advance those.
A relevant point I don't have an opinion on is whether a person's offspring are better stewards of that person's former wealth than the government. There's the question of whether being the offspring of someone wealthy is causal for being more financially proficient than the average citizen, and the (major) question of the overhead of dissolving existing businesses and functional assets.
Thank you, that was very informative.
I don't find the "probability of inclusion in final solution" model very useful, compared to "probability of use in future work" (similarly for their expected-value versions), because:
- I doubt that central problems are a good model for science or problem solving in general (or even in the navigation analogy).
- I see value in impermanent improvements (e.g. current status of HIV/AIDS in rich countries) and in future-discounting our value estimations.
- Even if a good description of a field as a central problem and satellite problems exists, we are unlikely to correctly estimate it a priori, or to estimate the relevance of a solution to it. In comparison, predicting how useful a solution is to "nearby" work is easier (with the caveat that islands or cliques of only internally-useful problems and solutions can arise, and do in practice).
Given my model, I think 20% generalizability is worth a person's time. Given yours, I'd say 1% is enough.
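To make the difference between the two models concrete, here is a minimal sketch; the function names, the number of future projects, and every numeric value are illustrative assumptions of mine, not figures from the discussion.

```python
# Toy comparison of the two valuation models; all numbers below are
# illustrative assumptions, not estimates from the discussion.

def value_inclusion_model(p_inclusion: float, final_solution_value: float) -> float:
    """Expected value if the work only matters via inclusion in the final solution."""
    return p_inclusion * final_solution_value

def value_future_work_model(p_use: float, n_future_projects: int, value_per_use: float) -> float:
    """Expected value if the work matters whenever nearby future work builds on it."""
    return p_use * n_future_projects * value_per_use

# A 1% chance of inclusion in a very valuable final solution can match
# a 20% chance of being used by each of a handful of nearby projects.
print(value_inclusion_model(0.01, 1000.0))     # 10.0
print(value_future_work_model(0.20, 10, 5.0))  # 10.0
```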
I see what you mean with regard to the number of researchers. I do wonder a lot about the amount of waste from multiple researchers unknowingly coming up with the same research (a different problem from the one you pointed out); the uncoordinated solution to that is to work on niche problems and ideas (which, coincidentally, seem less likely to generalize individually).
Could you share your intuition for why the solution space in AI alignment research is large, or larger than in cancer research? I don't have an intuition about the solution space in alignment vs. a "typical" field, but I strongly think cancer research has a huge space, and I can't think of anything more difficult within biology. Intuition: you can think of fighting cancer as a two-player game, with each individual organism being an instance and the within-organism evolutionary processes leading to cancer being the other player. In most problems in biology you can think of the genome of the organisms as defining the rule set for the game, but here the genome isn't held constant.
With regard to the threshold for strong generalizability, I don't have a working model of this to disagree with. Putting confidence/probability values on relatively abstract things is something I'm not used to (and I understand is encouraged here), so I'd appreciate it if you could share a little more insight about how to:
- Define the baseline distribution that generalizability is defined on.
- Give a little intuition about why a threshold is meaningful, rather than a linear "more general is better".
I'm sorry if that's too much to ask for, or ignorant of something you've laid out before. I have no intuition about #2, but for #1 I suspect that you have a model of AI alignment, or of an abstract field, as having a single or a few central problems at a time, whereas my intuition is that fields are mostly composed of many problems, with "centrality" only being relevant as a storytelling device or a good way to make sense of past progress in retrospect (so that generalizability is more the chance that one work will be useful for another, rather than the chance it will prove useful once the central problem is solved). More concretely, aging may be a general factor in many diseases, but research into many of the things aging relates to consists of solving many small problems that do not directly relate to aging, and defining solving aging as a bottleneck problem and judging generalizability with respect to it doesn't seem useful.
Besides reiterating Ryan Greenblat's objection to the assumption of a single bottleneck problem, I would also like to add that there is a priori value in having many weakly generalizable solutions, even if only a few will have value a posteriori.
Designing only best-worst-case subproblem solutions while waiting for Alice would be like restricting strategies in a game to ones agnostic to the opponent's moves, or only founding startups that solve the modal person's problem. That's not to say that generalizability isn't a good quality, but I think the claim in the article goes a little too far.
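To illustrate the a priori point with a minimal sketch (the per-solution probability, the independence assumption, and the function name are all hypothetical, not from the article): even if each weakly generalizable solution is individually unlikely to matter later, a portfolio of them is fairly likely to contain at least one that does.

```python
# Illustrative sketch only: chance that at least one of n weakly generalizable
# solutions turns out to be useful, assuming independence and a per-solution
# probability p (both assumptions are chosen purely for illustration).

def prob_at_least_one_useful(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(prob_at_least_one_useful(0.05, 1))   # 0.05  -- a single weak solution
print(prob_at_least_one_useful(0.05, 30))  # ~0.79 -- a portfolio of thirty of them
```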
There's one common reason I sometimes undervalue weakly generalizable solutions (it's not in response to any claim in the article, but I hope it is still relevant): it sucks to be the individual researcher whose work turns out to be irrelevant. I think that, both in a utilitarian sense and as a personal way to cope with life, it's better to adopt a mindset that finds meaning and value in one's work a priori, but we're not naturally equipped with that.