Would you have a baby in 2024?

post by martinkunev · 2023-12-25T01:52:04.358Z · LW · GW · 2 comments

This is a question post.


Given how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.), do you think it's a good idea to have a baby now? What factors would you take into account (e.g. age)?

 

Today I saw a tweet by Eliezer Yudkowsky that made me think about this:

"When was the last human being born who'd ever grow into being employable at intellectual labor? 2016? 2020?"

https://twitter.com/ESYudkowsky/status/1738591522830889275

 

Any advice for how to approach such a discussion with somebody who is not at all familiar with the topics discussed on LessWrong?

What if the option "wait for several years and then decide" is not available?

Answers

answer by Dave Orr · 2023-12-25T02:46:52.324Z · LW(p) · GW(p)

I think you should have a kid if you would have wanted one without recent AI progress. Timelines are still very uncertain, and strong AGI could still be decades away. Parenthood is strongly value-creating and extremely rewarding (if hard at times), and that's true in many, many worlds.

In fact it's hard to find probable worlds where having kids is a really bad idea, IMO. If we solve alignment and end up in AI utopia, having kids is great! If we don't solve alignment and EY is right about what happens in a fast takeoff world, it doesn't really matter if you have kids or not.

In that sense, it's basically a freeroll, though of course there are intermediate outcomes. I don't immediately see any strong argument in favor of not having kids if you would otherwise want them.
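To make the "freeroll" logic concrete, here is a minimal expected-utility sketch; the scenarios, probabilities, and utilities below are made-up placeholders for illustration, not numbers from this answer:

```python
# Toy dominance argument: having kids is no worse in the doom scenario
# and better in the others, so it wins in expectation.
# All numbers here are hypothetical placeholders.
scenarios = [
    # (name, probability, utility with kids, utility without kids)
    ("alignment solved, AI utopia", 0.4, 10.0, 7.0),
    ("fast takeoff, everyone dies", 0.3,  0.0, 0.0),  # identical either way
    ("nothing much happens",        0.3,  8.0, 6.0),
]

eu_kids    = sum(p * u_kids for _, p, u_kids, _ in scenarios)
eu_no_kids = sum(p * u_none for _, p, _, u_none in scenarios)
print(f"E[U | kids] = {eu_kids:.1f}, E[U | no kids] = {eu_no_kids:.1f}")
# E[U | kids] = 6.4, E[U | no kids] = 4.6
```

As long as no scenario makes having kids much worse, the comparison is insensitive to the exact probabilities; that is what makes it a freeroll.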

comment by dr_s · 2023-12-25T14:05:48.108Z · LW(p) · GW(p)

If we don't solve alignment and EY is right about what happens in a fast takeoff world, it doesn't really matter if you have kids or not.

This IMO misses the obvious fact that you spend your life with a lot more anguish if you think that not just you, but your kid is going to die too. I don't have a kid but everyone who does seems to describe a feeling of protectiveness that transcends any standard "I really care about this person" one you could experience with just about anyone else.

Replies from: gw, dave-orr, cata
comment by gw · 2023-12-25T19:12:22.508Z · LW(p) · GW(p)

+ the obvious fact that it might matter to the kid that they're going to die

(edit: fwiw I broadly think people who want to have kids should have kids)

Replies from: jkaufman, dr_s, timothy-underwood-1
comment by jefftk (jkaufman) · 2023-12-26T00:58:32.550Z · LW(p) · GW(p)

I'm sure this varies by kid, but I just asked my two older kids, age 9 and 7, and they both said they're very glad that we decided to have them even if the world ends and everyone dies at some point in the next few years.

Which makes lots of sense to me: they seem quite happy, and it's not surprising they would be opposed to never getting to exist even if it isn't a full lifetime.

comment by dr_s · 2023-12-25T19:24:31.591Z · LW(p) · GW(p)

I think the idea here was sort of "if the kid is unaware and death comes suddenly and swiftly, they at least got a few years of life out of it"... cold as it sounds. But anyway, this also assumes the EY kind of FOOM scenario rather than one of the many others in which people are around and the world just gets shittier and shittier.

It's a pretty difficult topic to grapple with, especially given how much regret can come, in hindsight, with not having had children. Can't say I have any answers for it. But it's obviously not as simple as this answer makes it.

comment by Timothy Underwood (timothy-underwood-1) · 2023-12-25T19:31:01.515Z · LW(p) · GW(p)

Yeah, but assuming your p(doom) isn't really high, this needs to be balanced against the chance that AI goes well, and your kid has a really, really, really good life.

I don't expect my daughter to ever have a job, but think that in more than half of worlds that seem possible to me right now, she has a very satisfying life -- one that is better than it would be otherwise in part because she never has a job.

Replies from: dr_s, Sherrinford
comment by dr_s · 2023-12-25T19:36:34.933Z · LW(p) · GW(p)

If your timelines are short-ish, you could likely have a child afterwards, because even if you're a bit on the old side, hey, what, you don't expect the ASI to find ways to improve health and fertility later in life?

I think the most important scenario to balance against is "nothing happens", which is where you get shafted if you wait too long to have a child.

comment by Sherrinford · 2023-12-25T20:07:57.854Z · LW(p) · GW(p)

Could you please briefly describe the median future you expect?

comment by Dave Orr (dave-orr) · 2023-12-25T20:46:36.207Z · LW(p) · GW(p)

I agree that it's bad to raise a child in an environment of extreme anxiety. Don't do that.

Also try to avoid being very doomy and anxious in general, it's not a healthy state to be in. (Easier said than done, I realize.)

comment by cata · 2023-12-25T20:58:28.954Z · LW(p) · GW(p)

I don't agree with that. I'm a parent of a 4-year-old who takes AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had if I didn't do anything. If that's the deal of life it's a pretty good deal and I don't think there's any reason to be particularly anguished about it on your kid's behalf.

Replies from: dr_s, Sherrinford
comment by dr_s · 2023-12-25T21:21:33.161Z · LW(p) · GW(p)

I mean, this goes into the philosophical problem of whether it makes sense to compare utility between existent and virtual, non-existent agents, but that would get long.

comment by Sherrinford · 2023-12-25T22:03:14.291Z · LW(p) · GW(p)

Do you think there could be an amount of suffering at the end of a life that would outweigh 20 good years? (Including the possibility that this end could take very long.)

Replies from: cata
comment by cata · 2023-12-25T23:25:52.403Z · LW(p) · GW(p)

Yes, I basically am not considering that because I am not aware of the arguments for why that's a likely kind of risk (vs. the risk of simple annihilation, which I understand the basic arguments for.) If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.

Replies from: Sherrinford
comment by Sherrinford · 2023-12-26T10:55:09.978Z · LW(p) · GW(p)

I think the "stable totalitarianism" scenario is less science-fiction than the annihilation scenario, because you only need an extremely totalitarian state (something that already exists or existed) enhanced by AI. It is possible that this would come along with random torture. This would be possible with a misguided AI as well.

comment by RomanHauksson (r) · 2023-12-25T06:51:57.899Z · LW(p) · GW(p)

Having kids does mean less time to help AI go well, so maybe it’s not so much of a good idea if you’re one of the people doing alignment work.

Replies from: Gunnar_Zarncke, Viliam
comment by Gunnar_Zarncke · 2023-12-27T11:48:01.863Z · LW(p) · GW(p)

This argument works against anything you could do besides AI work, and thus has to be considered in that wider frame. Going to the gym also means less time for making AI go well. So does building a house, or watching Netflix. Some of these are longer time investments and some shorter, but the question still remains. First answer how much effort you want to invest in making AI go well vs. all the other things you could do, and then consider what fraction goes to children.

comment by Viliam · 2023-12-26T00:04:49.382Z · LW(p) · GW(p)

Perhaps people who can't contribute to AI alignment directly could help indirectly by providing free babysitting for the people working on AI alignment?

comment by the gears to ascension (lahwran) · 2024-03-28T19:42:35.141Z · LW(p) · GW(p)

strong AGI could still be decades away [LW · GW]

Replies from: dave-orr
comment by Dave Orr (dave-orr) · 2024-03-29T05:23:20.366Z · LW(p) · GW(p)

Heh, that's why I put "strong" in there!

comment by Gunnar_Zarncke · 2023-12-27T11:54:09.807Z · LW(p) · GW(p)

I agree with this take. I already have four children, and I wouldn't decide against children because of AI risks. 

Replies from: Sherrinford
comment by Sherrinford · 2024-01-06T22:29:22.137Z · LW(p) · GW(p)

Did you take such things into account when you made the decision, or decisions?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2024-01-07T01:36:31.246Z · LW(p) · GW(p)

Not AI risk specifically. But I had lengthy discussions with a friend about the general question of whether it is ethical to have children. The concerns in our discussions were overpopulation and how bad the world is in general, and in Germany in particular. These much weaker concerns compared to extinction were enough for him not to have children. He also mentioned The Voluntary Human Extinction Movement. We still disagree on this. Mostly we disagree on how bad and failed the world is. I think it is not worse than it has been most of the time since forever. Maybe because I perceive less suffering (in myself and in others) than he does. We also disagree on how to deal with overpopulation. Whether to take local population into account. Whether to weigh by consumption. Whether to see this as an individual obligation, or a collective one. Or as an obligation at all. Still, we are good friends. Maybe that tells you something.
 

comment by VojtaKovarik · 2023-12-27T17:59:22.360Z · LW(p) · GW(p)

In fact it's hard to find probable worlds where having kids is a really bad idea, IMO.

One scenario where you might want to have kids in general, but not if timelines are short, is if you feel positive about having kids but view the first few years as a chore (i.e., they cost you time, sleep, and money). So if you view kids as an investment of the form "take a hit to your happiness now, get more happiness back later", then not having kids now seems justifiable. But this sort of reasoning requires pretty short timelines (which I have), with high confidence (which I don't have), and high confidence that the first few years of having kids are net-negative happiness for you (which I don't have).

(But overall I endorse the claim that, mostly, if you would have otherwise wanted kids, you should still have them.)
 

Replies from: Bezzi
comment by Bezzi · 2023-12-28T08:57:22.024Z · LW(p) · GW(p)

My anecdotal evidence from relatives with toddlers is that the first few years of having your first child are indeed the most stressful experience of your life. I barely even see them anymore, because all their free time is eaten by childcare. Not sure about happiness, but people who openly admit to regretting having their kids face huge social stigma, and I doubt you could get an honest answer to that question.

answer by maia · 2023-12-27T16:04:38.750Z · LW(p) · GW(p)

Empirically my answer to this is yes: I'm due in January with my second.

When I had my first child, I was thinking in terms of longer timelines. Before having them, I assumed it would not be worth having a child if the world ended within a few years of their birth, because I would be less happy and their utility wouldn't really amount to much until later.

One month after my first baby was born, I had a sudden and very deep feeling that if the world ended tomorrow, it would have been worth it.

YMMV of course, but having kids can be a very deep human experience that pays off much sooner than you might think.

answer by nim · 2023-12-26T03:08:56.162Z · LW(p) · GW(p)

If I was in a relationship where everyone involved wanted a kid and I believed the kid would have a good chance of having positive role models and the kind of environment I'd wish for someone I love throughout its formative years, yes.

The "what if my child can't do intellectual labor because of AI?" question is, IMO, a very similar shape of risk to "what if my child can't do intellectual labor because they have an intellectual disability?".

If you'd love a kid even if they turned out to be in a low percentile of society intellectually, then you're ready for a kid regardless of whether the world you're bringing it into happens to have AI smarter than it. If your desire to add to your family is contingent on assumptions about how the new addition's abilities would compare to those of other agents it interacts with, it might be worth having a good think about whether that's a childhood environment that you would wish upon a person whom you love.

answer by jessicata · 2023-12-25T06:57:42.679Z · LW(p) · GW(p)

You're providing no evidence that superintelligence is likely in the next 30 years other than a Yudkowsky tweet. I expect that 30 years later we will not have superintelligence (of the sort that can build the stack to run itself on, growing at a fast rate, taking over the solar system etc).

comment by Sune · 2023-12-25T07:47:22.375Z · LW(p) · GW(p)

There has been enough discussion about timelines that it doesn't make sense to provide evidence about them in a post like this. Most people on this site have already formed views about timelines, and for many, these are much shorter than 30 years. Hopefully, readers of this site are ready to change their views if strong evidence in either direction appears, but I don't think it is fair to expect a post like this to also include evidence about timelines.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2023-12-25T19:52:58.295Z · LW(p) · GW(p)

The post is phrased as "do you think it's a good idea to have kids given timelines?". I've said why I'm not convinced timelines should be relevant to having kids. I think if people are getting their views by copying Eliezer Yudkowsky and copying people who copy his views (which I'm not sure if OP is doing) then they should get better epistemology.

Replies from: martinkunev, Sherrinford
comment by martinkunev · 2023-12-26T01:15:46.270Z · LW(p) · GW(p)

I didn't provide any evidence because I didn't make any claim (about timelines or otherwise). I'm trying to form my views by asking on LessWrong, and I get something like "You have no right to ask this".

I quoted Yudkowsky because he asks a related question (whether you agree with his assessment or not).

 

"I'm not convinced timelines should be relevant to having kids"

Thanks, this looks more like an answer.

comment by Sherrinford · 2023-12-25T20:15:53.689Z · LW(p) · GW(p)

The post's starting point is "how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.)". You don't need concrete high-p-of-doom timelines for that, or even expect AGI at all. It is not necessary for "potential international conflict", for example.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2023-12-25T20:23:18.541Z · LW(p) · GW(p)

Oh, I thought this was mainly about x-risk, especially given the Yudkowsky reference. The other points don't change the picture much either. If you predict the economy will have lots of AI in the future, then you can give your child an advantage by training them in relevant skills. Also, many jobs, like service jobs, are likely to still be around; there are lots of things AI has trouble with, or which humans generally prefer humans to do. AI would increase material productivity, and that would be expected to decrease the cost of living as well. See Yudkowsky's post on AI unemployment.

Regarding international conflict, I haven't seen a convincing model laid out for how AI would make international conflict worse. Drone warfare is a possibility, but it would tend to concentrate military power in technologically capable countries such as Taiwan, the UK, the USA, and Israel. I don't know where OP lives, but I don't see how it would make things worse for USA/UK children. Drones would be expected to have a better civilian-casualty ratio than other methods like conventional explosives, nukes, or bio-weapons.

Replies from: martinkunev, Sherrinford
comment by martinkunev · 2023-12-26T01:18:33.339Z · LW(p) · GW(p)

For example, US-China conflict is fueled in part by AI race dynamics.

comment by Sherrinford · 2023-12-25T20:31:00.364Z · LW(p) · GW(p)

Thanks. What are the things that AI will, in 10, 20 or 30 years, have "trouble with", and what are the "relevant skills" to train your kids in?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2023-12-25T20:36:20.304Z · LW(p) · GW(p)

Relevant skills for an AI economy would include mathematics, programming, ML, web development, etc.

It's hard to extrapolate out that far, but AI still has a lot of trouble with robotics (e.g. we don't have good dish-washing household robots), so there will probably be e.g. construction jobs for a while. AI is helpful for programming, but using AI to program relies on a lot of human support; I doubt programming will be entirely automated in 30 years. AI tends to have trouble with contextualized, embodied/embedded problems; it's better at decontextualized, schoolwork-like problems. For example, if you're doing sales you need to manage a set of relationships whose data is gathered over many contexts, mostly not recorded, and AI is going to have more trouble parsing that context into something a transformer can operate on and respond well to. Self-driving is an example of an embedded, though low-context, problem, and progress on that has been slower than expected, although with all the data from electric cars it's possible to train a transformer to imitate human drivers.

comment by Mateusz Bagiński (mateusz-baginski) · 2023-12-25T18:35:20.355Z · LW(p) · GW(p)

Jessica, do you have a post or sth that distills/summarizes your current views on this?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2023-12-25T19:39:47.218Z · LW(p) · GW(p)

From 2018, AI timelines section of mediangroup.org/research.

Modeling AI progress through insights

We assembled a list of major technical insights in the history of progress in AI and metadata on the discoverer(s) of each insight.

Based on this dataset, we developed an interactive model that calculates the time it would take to reach the cumulation of all AI research, based on a guess at what percentage of AI discoveries have been made.

AI Insights dataset: data (json file), schema
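To give a concrete flavor of that style of extrapolation, here is a back-of-envelope sketch. This is not Median Group's actual interactive model; the start year and completed fraction are placeholder assumptions:

```python
# If a guessed fraction of all needed insights has been found so far, and
# insights arrive at a roughly constant rate, extrapolate the time left.
YEARS_ELAPSED = 2018 - 1956   # assumed span of AI research (Dartmouth onward)
FRACTION_DONE = 0.30          # placeholder guess at insights already found

years_remaining = YEARS_ELAPSED * (1 - FRACTION_DONE) / FRACTION_DONE
print(f"~{years_remaining:.0f} more years at the historical rate")  # ~145
```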

Feasibility of Training an AGI using Deep Reinforcement Learning: A Very Rough Estimate

Several months ago, we were presented with a scenario for how artificial general intelligence (AGI) may be achieved in the near future. We found the approach surprising, so we attempted to produce a rough model to investigate its feasibility. The document presents the model and its conclusions.

The usual cliches about the folly of trying to predict the future go without saying and this shouldn't be treated as a rigorous estimate, but hopefully it can give a loose, rough sense of some of the relevant quantities involved. The notebook and the data used for it can be found in the Median Group numbers GitHub repo if the reader is interested in using different quantities or changing the structure of the model.

[Download PDF](http://mediangroup.org/docs/Feasibility of Training an AGI using Deep Reinforcement Learning, A Very Rough Estimate.pdf)

(note: second has a hard-to-estimate "real life vs alphago" difficulty parameter that the result is somewhat dependent on, although this parameter can be adjusted in the model)

I recommend the articles (not by me) "Why I am not an AI doomer" and "Diminishing Returns in Machine Learning".

Replies from: Sherrinford
comment by Sherrinford · 2024-01-06T22:36:20.489Z · LW(p) · GW(p)

So your timelines are the same as in 2018?

Thanks for the article recommendations.

answer by VojtaKovarik · 2023-12-27T18:16:35.565Z · LW(p) · GW(p)

An aspect that I would not take into account is the expected impact of your children.

Most importantly, it just seems wrong to make personal-happiness decisions subservient to impact.
But even if you did want to optimise impact through others, betting on your children seems riskier and less effective than, for example, engaging with interested students. (And even if you wanted to optimise impact at all costs, the key factors might not be your impact through others, but instead (i) your opportunity costs, (ii) second-order effects, where having kids makes you more or less happy, and this changes the impact of your work, and (iii) negative second-order effects that "sacrificing personal happiness because of impact" has on the perception of the community.)

answer by sapphire · 2023-12-27T17:26:52.459Z · LW(p) · GW(p)

If a friend or partner wanted to have a child with me I'd potentially be down. Though we would need to have extremely aligned views on non-coercive parenting. Also I'm a sleepy boy so we gotta be aligned on low-effort parenting too.

answer by lc · 2023-12-25T21:17:05.092Z · LW(p) · GW(p)

Having kids will make you care more about the future.

answer by Gesild Muka · 2023-12-27T17:06:59.133Z · LW(p) · GW(p)

Yes. The more the merrier.

answer by chaosmage · 2023-12-25T12:10:55.419Z · LW(p) · GW(p)

PSA: you have less control over whether you have kids, or how many you get, than people generally believe. There are biological problems you might not know you have, there are women who lie about contraception, there are hormonal pressures you won't feel till you reach a certain age, there are twins and stillbirth, and most of all there are super horny split second decisions in the literal heat of the moment that your system 2 is too slow to stop.

I understand this doesn't answer the question; I just took the opportunity to share a piece of information that I consider not well-understood enough. Please have a plan for the scenario where your reproductive strategy doesn't work out.

comment by dr_s · 2023-12-25T19:27:26.751Z · LW(p) · GW(p)

There are biological problems you might not know you have, there are women who lie about contraception, there are hormonal pressures you won't feel till you reach a certain age, there are twins and stillbirth, and most of all there are super horny split second decisions in the literal heat of the moment that your system 2 is too slow to stop.

This is absolutely nonsense IMO for any couple of grown-ups of at least average intelligence who trust each other. People plan children all the time and are often successful; with a little knowledge and foresight, I don't think the risk of having unplanned children is very high.

Replies from: nim, blf, chaosmage
comment by nim · 2023-12-26T03:13:10.536Z · LW(p) · GW(p)

Plenty of grown-ups of average or even above-average intelligence assume that 99.9% effective contraception means they'll never be in the 0.1% statistic.

If you've had the "if the highly effective redundant contraception fails, should we abort?" conversation before getting any sperm anywhere near any eggs with every partner you've ever had, I'd posit that you're in a slim minority of humanity.

And no human, no matter how rational, can predict with perfect accuracy what their emotional response will be to experiencing a physiological event that is completely novel to them.

Replies from: dr_s
comment by dr_s · 2023-12-26T07:57:38.201Z · LW(p) · GW(p)

I mean, 0.1% is a tiny rate if we're talking yearly, and I think an acceptable risk even if you don't plan to abort. But at this point we're completely away from the original point, because 0.1% would mean you have a lot of control, which is my point exactly. Nothing is perfect, but the estimated efficacy of good contraception is probably largely dragged down by a long tail of people who are really bad at it or downright lie in self-reporting studies.

comment by blf · 2023-12-25T23:39:03.068Z · LW(p) · GW(p)

Strong disagree.  Probably what you say applies to the case of a couple that cares sufficiently to use several birth control methods, and that has no obstruction to using some methods (e.g., bad reactions to birth-control pills).

Using only condoms, which from memory was the advice I got as a high-schooler in Western Europe twenty years ago, seems to have a 3% failure rate (per year, not per use of course!) even when used correctly (leaving space at the tip, using water-based lubricant). That is small but not negligible.
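To make the per-year compounding concrete, a quick sketch, assuming (as a simplification) that failure risk is independent across years:

```python
# A 3% per-year failure rate compounds substantially over a decade.
p_fail_year = 0.03
years = 10
p_at_least_one = 1 - (1 - p_fail_year) ** years
print(f"{p_at_least_one:.1%}")  # ~26.3% chance of at least one failure
```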

It would be a good public service to have an in-depth analysis of the available evidence on contraception methods. Or maybe we should ask Scott Alexander to add a question about contraception failure to his annual survey?

Replies from: dr_s, Viliam
comment by dr_s · 2023-12-26T00:00:14.532Z · LW(p) · GW(p)

You mentioned things like women who lie about contraception and split-second decisions, which IMO are nonsense to bring up in this context. But going back to condoms: yes, I believe that 3% figure to be garbage. The 3% figure is an average based on people self-reporting. But in practice, condoms are hard to break, and even if they do break it's easy to realise. Morning-after pills are a thing for "accidents" you notice. So IMO reasonably conscientious people who actually use condoms properly (rather than just saying so in questionnaires) and follow up with the morning-after pill in case of accidents will achieve a much better rate. 3% is an upper bound, because it includes a lot of confounders that skew the rate to be worse.

comment by Viliam · 2023-12-26T00:16:29.395Z · LW(p) · GW(p)

Using only condoms ... seems to have a 3% failure rate (per year, not per use of course!) even when used correctly ... That is small but not negligible.

The number seems unbelievably high to me. I don't have strong evidence to the contrary, but I also don't trust self-reported correct use.

comment by chaosmage · 2023-12-27T07:46:48.942Z · LW(p) · GW(p)

I didn't say the risk was "very high" (which would indeed be nonsense); I said it is non-zero. And I personally know two men who were tricked into becoming fathers.

And the thing with average intelligence is that only 50% of the population has it. For both partners to have it is (slightly) less likely than that.

Replies from: dr_s
comment by dr_s · 2023-12-27T15:29:55.188Z · LW(p) · GW(p)

No risk is zero; that's not a reasonable way to think about control over one's life. And you don't choose partners at random, so intelligence and conscientiousness in couples probably correlate far better than that.

2 comments

Comments sorted by top scores.

comment by Mitchell_Porter · 2024-01-05T13:54:20.095Z · LW(p) · GW(p)

I look at this, having long ago adopted a combination of transhumanism and antinatalism: we have a real chance of achieving something much better than the natural human condition, but meanwhile, this is not a kind of existence in which one should create a life. Back in 2012, I wrote [LW · GW]:

We are in the process of gaining new powers and learning new things, there are obvious unknowns in front of us that we are on the way to figuring out, so at least hold off until they have been figured out and we have a better idea of what reality is about, and what we can really hope for, from existence.

As a believer in short timelines (0-5 years until superintelligence), I don't see much more time to wait. The AI era has arrived, and a new ecosystem of mind is taking shape around us. It may become very bad for human beings, just thanks to plain old Darwinian competition, to say nothing of superintelligences with unfriendly value systems. We are now all hostage to how this transformation turns out.

comment by Kaj_Sotala · 2023-12-26T09:01:09.571Z · LW(p) · GW(p)

Related previous discussion: Is the AI timeline too short to have children? [LW · GW]