Philosophical Parenthood

post by SquirrelInHell · 2017-05-30T14:09:07.702Z · score: 1 (1 votes) · LW · GW · Legacy · 25 comments

This is a link post for http://squirrelinhell.blogspot.com/2017/05/philosophical-parenthood.html


comment by MattG2 · 2017-05-31T03:57:15.423Z · score: 1 (1 votes) · LW(p) · GW(p)

There seems to be a weird need in this community to over-argue obvious conclusions.

This whole post seems to boil down to:

  1. You are altruistic and smart.
  2. You want more altruistic and smart people.
  3. Therefore, you should propagate your genes.

Similar to the recent "Dragon Army Barracks", which seems to boil down to:

  1. We want an effective group organization.
  2. Most effective groups seem to be hierarchical with a clear leader.
  3. Therefore, it might make sense for us to try being hierarchical with a clear leader.

I mean, I get that there's a lot of mental models that led to these conclusions, and you want to share the mental models as well... but it seems like separating out the teaching of the mental models and the arguments themselves into separate pieces of content might make sense.

comment by G Gordon Worley III (gworley) · 2017-05-31T17:35:30.948Z · score: 2 (2 votes) · LW(p) · GW(p)

I've thought about the meta issue you're raising before, so to respond to it directly:

The trouble is that most people's thinking is teleological, i.e. motivated toward certain ends. As such, writing about an idea without addressing its teleological aspects will fail to anticipate the reader's needs and answer their questions. Thus, when presenting an idea, it's generally necessary to take both teleological and non-teleological approaches. To address teleology alone is to neglect substance, and to address only the non-teleological is to ignore your (very human) reader, so both must be considered simultaneously.

To put this another way, having a theory is literally useless if you don't know what to use it for or how to use it. Not addressing use makes ideas harder to share, as in academic journal writing that has expunged all telos and consequently often fails to engage readers with its ideas.

Even more succinctly: people come for the arguments/ideas and stay for the ideas/arguments.

comment by SquirrelInHell · 2017-06-01T08:56:15.700Z · score: 0 (0 votes) · LW(p) · GW(p)

As for your main point, see gworley's reply, though I'm not at all opposed to making the distinction more clear.

You are altruistic and smart. You want more altruistic and smart people. Therefore, you should propagate your genes.

The post itself very emphatically states that this is NOT the chain of reasoning that I find compelling. In particular, its point would stand even if children of smart people were somehow exactly as smart as the population average.

comment by entirelyuseless · 2017-05-31T02:24:33.065Z · score: 1 (1 votes) · LW(p) · GW(p)

And if the philosophically correct thing to do for smart people is to not have children, then the incentive gradient will forever be such that there can't be very many people who understand and act on abstract reasoning.

Why does that matter? It would only matter to smart people if they cared about there being a lot of smart people, in which case they might have already convinced themselves to have children for the obvious reasons. On the other hand if they don't care about there being a lot of smart people, they won't find your argument persuasive.

comment by SquirrelInHell · 2017-06-01T08:59:43.701Z · score: 0 (0 votes) · LW(p) · GW(p)

Thank you for raising this point. The argument would also matter to smart people if they cared about their own intelligence. If the population average is pegged to a low level, even the outliers are much less smart in absolute scales. As for why it would matter to you when your intelligence is already settled, see updateless decision theory.

comment by Stuart_Armstrong · 2017-06-02T08:05:46.384Z · score: 1 (1 votes) · LW(p) · GW(p)

You need more work than saying "updateless decision theory" to get that argument to work. Make the model and the correlations clear and explicit.

comment by entirelyuseless · 2017-06-01T13:18:15.034Z · score: 0 (0 votes) · LW(p) · GW(p)

I understand why some people follow a decision theory that works that way, but I do not.

comment by Dagon · 2017-05-30T19:02:13.102Z · score: 1 (1 votes) · LW(p) · GW(p)

My wife and I just celebrated our 16th anniversary, and are pretty well completely past reasonable child-creating age. Neither intellectual nor emotional arguments convinced us to procreate, though there have been times when one or the other of us wanted to (but not enough to make the massive additional sacrifices it'd take to coerce the partner or find a new one).

If you're serious about convincing more smart people to have children, bottom-up moral or philosophical arguments are unlikely to work. The people you're targeting (smart, well-read, self-aware) are well aware of the arguments and can model the impact of a marginal smart baby on the world. They may forget to factor in the chance of a marginal genius, but probably not.

The two arguments that came closest to working on us were the "full human experience" argument (that there is pretty much no way other than parenthood to have that kind of bond and closeness with another human) and the "'twere best done quickly" argument (if you're going to do it eventually, you should do it earlier than you probably think: it's just easier in your 20s and 30s than it will be later).

comment by username2 · 2017-05-30T21:38:54.968Z · score: 3 (3 votes) · LW(p) · GW(p)

Those are both valid arguments. They also combine to create a third: parenting completely changes your priorities in life, giving you new perspective, drives, and goals that make the things you were doing in your 20s and 30s, which seemed so much more important than having kids, actually feel like a waste of time in retrospect. And you wonder why you didn't start earlier.

comment by Viliam · 2017-06-02T09:18:39.476Z · score: 0 (0 votes) · LW(p) · GW(p)

the "full human experience" argument

This feels like an argument that proves too much. If you were not in a war, or tortured, you are still missing some kinds of human experience. It could be argued that war is quite common experience. Yet this argument would not convince me to go to war or get tortured, just to achieve the full spectrum of human experience.

comment by Dagon · 2017-06-02T21:23:31.095Z · score: 0 (0 votes) · LW(p) · GW(p)

Fair point, but in practice when you actually have the debate, it very quickly becomes clear that it's really the "commonly reported to be positive experiences" argument.

And for some, war and surviving hardship _do_ qualify - there's plenty of examples of people seeking pain (or risk of pain) just because they crave "adventure". IMO, this doesn't reach anywhere as close to universal as parenthood does, but it seems like a related drive.

comment by WalterL · 2017-05-30T14:58:16.061Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm a bit confused as to who you are arguing with. The folks at what your article calls 'level 1', who have redirected their efforts into other pursuits, surely aren't about to embark upon the enormous life change necessary to get offspring because they read your website, yeah?

comment by SquirrelInHell · 2017-06-01T09:03:44.635Z · score: 0 (0 votes) · LW(p) · GW(p)

That is of course correct, but:

  1. It seems a better policy on the margin to err on the side of arguing for what one deems morally correct.

  2. I bring up other interesting stuff to discuss the issue, and parenthood makes for a good example.

comment by ImmortalRationalist · 2017-06-11T17:50:10.624Z · score: 0 (0 votes) · LW(p) · GW(p)

Ted Kaczynski wrote something similar to this in Industrial Society And Its Future, albeit with different motivations.

  1. Revolutionaries should have as many children as they can. There is strong scientific evidence that social attitudes are to a significant extent inherited. No one suggests that a social attitude is a direct outcome of a person’s genetic constitution, but it appears that personality traits are partly inherited and that certain personality traits tend, within the context of our society, to make a person more likely to hold this or that social attitude. Objections to these findings have been raised, but the objections are feeble and seem to be ideologically motivated. In any event, no one denies that children tend on the average to hold social attitudes similar to those of their parents. From our point of view it doesn’t matter all that much whether the attitudes are passed on genetically or through childhood training. In either case they ARE passed on.
  2. The trouble is that many of the people who are inclined to rebel against the industrial system are also concerned about the population problems, hence they are apt to have few or no children. In this way they may be handing the world over to the sort of people who support or at least accept the industrial system. To insure the strength of the next generation of revolutionaries the present generation should reproduce itself abundantly. In doing so they will be worsening the population problem only slightly. And the important problem is to get rid of the industrial system, because once the industrial system is gone the world’s population necessarily will decrease (see paragraph 167); whereas, if the industrial system survives, it will continue developing new techniques of food production that may enable the world’s population to keep increasing almost indefinitely.
comment by oge · 2017-06-11T16:13:35.483Z · score: 0 (0 votes) · LW(p) · GW(p)

Hi SquirrelInHell, how would you respond to the comment left on the original post that having kids is likely net bad for the kids but net good for the world?

Instead, the process you describe of "overthinking your motivations" and acting on abstract reasoning has led me to believe that having kids actually harms the kids. I'm not sacrificing their lives to the altruism of letting them improve the lives of everyone else.

comment by SquirrelInHell · 2017-06-16T17:48:49.752Z · score: 0 (0 votes) · LW(p) · GW(p)

I disagree, but I would hate to press my opinion on this in any way.

comment by oge · 2017-06-25T16:15:44.544Z · score: 0 (0 votes) · LW(p) · GW(p)

I would love to hear your opinion since I have many loved ones currently planning to have kids. Do you mind DMing me your opinion?

comment by tpsreport11 · 2017-06-04T19:09:35.957Z · score: 0 (0 votes) · LW(p) · GW(p)

I definitely agree with more rationalists having children. Any baby would be lucky to be born to a loving family that values education and helping the world.

The larger concern, however, is misplaced. If we want more rationalism and altruism in society, it isn't a question of genes; it's a question of memes. For one, we don't have time for the next generation to solve the biggest risks we face. We have to find a way to make vast swaths of humanity more rational and altruistic in, like, the next 50 years. One generation of gene spreading is not going to increase the ranks enough to make a difference. Second, intelligence can help rationality and altruism, but it can just as easily hurt those things. I've known some pretty brilliant people who were completely blind to biases and treated people with cruelty.

This is similar to the discussions around AI risk, I think. Will a smart AI necessarily be kind or work in the interests of humanity? Maybe not.

The question should be how we nurture our intellectual children. What does rationality become as a mature system of thought? Will it be accessible and valuable to anybody regardless of their current social identity / location / education, etc.? Can it be easily transmitted or does it require context? These are the parental thoughts we should be having, in addition to finding somebody to love and produce babies with.

comment by SquirrelInHell · 2017-05-30T14:10:44.855Z · score: 0 (0 votes) · LW(p) · GW(p)

In this post, I will lay out a strong philosophical argument for rational and intelligent people to have children. It's important and not obvious, so listen well.

comment by Lumifer · 2017-05-30T14:46:56.169Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't understand the argument. Can you ELI5?

comment by G Gordon Worley III (gworley) · 2017-05-31T17:20:30.337Z · score: 2 (2 votes) · LW(p) · GW(p)

ELI5: DONT HAVE KIDS

(sorry, the snark was irresistible)

comment by SquirrelInHell · 2017-06-01T09:07:22.444Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm pretty sure this can't be explained to a 5 year old, because of some cognitive features that are still missing at that age. Do you have more trouble with the wording used in the post, the flow of the explanation, the strangeness of the concepts, or something else?

comment by Lumifer · 2017-06-01T15:20:32.137Z · score: 0 (0 votes) · LW(p) · GW(p)

I can't follow the flow of logic which seems to me to rely on strange assumptions: e.g. considering a group of people who are exactly the same and react the same way. This is not a puzzle about an island full of logicians with some words on their foreheads, presumably you're talking about real life.

I understand the standard dialectic spiral, but I don't understand what "philosophically correct" means. I am also not clear how your concern for which way the incentive gradient goes is different from the simple "there should be more of our kind".

comment by SquirrelInHell · 2017-06-04T16:52:27.071Z · score: 0 (0 votes) · LW(p) · GW(p)

I can't follow the flow of logic which seems to me to rely on strange assumptions: e.g. considering a group of people who are exactly the same and react the same way. This is not a puzzle about an island full of logicians with some words on their foreheads, presumably you're talking about real life.

This seems like standard objection #353 (it often appears, e.g., when people argue that Newcomb's dilemma is irrelevant). The standard answer applies here: in the real world, you replace "equality" with "correlation".
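This move can be made concrete with a toy model (the payoffs and matching probabilities here are my own illustration, not from the post): in a "twin" prisoner's dilemma, instead of assuming the other player's choice *equals* yours, assume it merely *matches* yours with some probability, and see how far the conclusion survives.

```python
# Toy "twin" prisoner's dilemma: the other player copies your action
# with probability p_match, and plays the opposite otherwise.
# Hypothetical payoffs (to you): (C,C)=3, (C,D)=0, (D,C)=5, (D,D)=1.

PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(action, p_match):
    """Expected payoff of `action` given match probability p_match."""
    same = PAYOFFS[(action, action)]
    other = "D" if action == "C" else "C"
    diff = PAYOFFS[(action, other)]
    return p_match * same + (1 - p_match) * diff

# Perfect correlation recovers the "equality" case: cooperation wins.
assert expected_payoff("C", 1.0) > expected_payoff("D", 1.0)
# With these payoffs, EV(C) = 3p and EV(D) = 5 - 4p, so cooperation
# still wins for any p above 5/7 -- the conclusion degrades gracefully
# rather than collapsing the moment equality is relaxed.
assert expected_payoff("C", 0.8) > expected_payoff("D", 0.8)
assert expected_payoff("C", 0.5) < expected_payoff("D", 0.5)
```

The point of the sketch is only that "how much correlation is enough" becomes a computable threshold once the payoffs are fixed, not that these particular numbers model the parenthood case.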

I understand the standard dialectic spiral

Thanks for the reference; it seems I have reinvented this particular wheel.

but I don't understand what "philosophically correct" means

Something like "agents with enough decision-theoretic sophistication will converge on this outcome"

I am also not clear how your concern for which way the incentive gradient goes is different from the simple "there should be more of our kind".

Yeah, this is a good question, and there are a lot of subtle differences. First of all, "there should be more of our kind" is wishful thinking (a useless, impossible counterfactual), it draws an arbitrary distinction between groups of people, it assumes it's still OK for some people to be in a bad situation by design, etc. I'm sure you can come up with more complaints. OTOH, looking at the incentive gradient for the whole population avoids these problems, and also is not morally abhorrent.

comment by Lumifer · 2017-06-04T22:55:13.531Z · score: 0 (0 votes) · LW(p) · GW(p)

you replace "equality" with "correlation"

The problem with this is that the setup suddenly becomes a whole lot more handwavy. Equality is mathematically "hard"; correlation is statistically "soft". Which correlation (Pearson's? why that one?), how much is enough (is 5% sufficient? 50%? does it depend on the amount of noise present?), etc.

For example you say:

Everyone can expect everyone else in the group to follow the same logic, and get the same conclusion. It's not possible to game the system

but if equality is replaced with correlation, this doesn't hold.

"agents with enough decision-theoretic sophistication will converge on this outcome"

Regardless of their values?

"there should be more of our kind" is wishful thinking

I disagree. Human history is full of "there should be more of our kind and we will make it happen". The typical route is killing competitors and taking their resources. There are also cultural factors (e.g. the "protect the women" virtue), religious/ideological influences (see the population growth in Roman Catholic countries until recently), etc.

And I still cannot follow your train of logic.