AI romantic partners will harm society if they go unregulated

post by Roman Leventov · 2023-08-01T09:32:13.417Z · LW · GW · 74 comments

Contents

  What will AI romantic partners look like in a few years?
  AI romantic partners will reduce the “human relationship participation rate” (and therefore the total fertility rate)
  Are AI partners really good for their users?
    AI romance for going through downturns in human relationships
  Policy recommendations
    What about social media, online dating, porn, OnlyFans?

Recently, when people refer to the “immediate societal harms and dangers” of AI in media or political rhetoric, they predominantly mention “bias”, “misinformation”, and “political (election) manipulation”.

Although politicians, journalists, and experts frequently compare the current opportunity to regulate AI for good with the missed opportunity to regulate social media in the early 2010s, AI romantic partners are somehow rarely mentioned as a technology and a business model that has the potential to grow very rapidly, harm society significantly, and become very difficult to regulate once it is huge (just as social media did). This suggests that AI romance technology should be regulated swiftly.

There is a wave of articles in the media (1, 2, 3, 4, for just a small sample) about the phenomenon of AI romance which universally raise vague worries, but I haven’t found a single article that rings the alarm bell which I believe AI romance deserves.

The EA and LessWrong community response to the issue seems to be even milder: it’s rarely brought up, and a post “In Defence of Chatbot Romance [LW · GW]” has been hugely upvoted.

This appears strange to me because I expect, with around 75% confidence, that the rapid and unregulated growth and development of AI partners will deal a huge blow to society, on a scale comparable to the blow dealt by unregulated social media.

I'm not a professional psychologist and I'm not familiar with the academic literature in psychology, but the propositions on which I base my expectation seem at least highly likely and common-sensical to me. Thus, one purpose of this article is to attract expert rebuttals of my propositions. If no such rebuttals materialise, the second purpose of the article is to attract the community's attention to the proliferation of AI romantic partners, which should be regulated urgently (if my inferences are correct).

What will AI romantic partners look like in a few years?

First, as Raemon pointed out [LW(p) · GW(p)], it’s crucial not to repeat a mistake that is way too common in discussions of general risks from AI: assuming that only the AI capabilities that already exist will ever exist, and failing to extrapolate the development of the technology.

So, here’s what I expect AI romantic partner startups will be offering within the next 2-3 years, with very high probability, because none of these things requires any breakthroughs in foundational AI capabilities; they are only a matter of “mundane” engineering around the existing state-of-the-art LLMs, text-to-audio, and text-to-image technology:

Please note that I don’t just assume that “AI = magic that is capable of anything”. Below, I list possible features of AI romantic partners that would make them even more compelling, but which I can’t confidently expect to arrive in the next few years because they hinge on AI and VR capability advances that haven’t yet happened. This, however, only highlights how compelling AI partners could already become with today’s AI capabilities and some proper product engineering.

So, here are AI partner features that I don’t necessarily expect to arrive in the next 2-3 years:

AI romantic partners will reduce the “human relationship participation rate” (and therefore the total fertility rate)

I don’t want to directly engage with all the arguments against the proposition that AI partners will discourage people from working towards committed human relationships and having kids, e.g., in the post by Kaj Sotala [LW · GW] and in the comments to that post, as well as in some other places, because these arguments seem to me exactly the kind of manufactured uncertainty wielded by social media companies (Facebook, primarily) before.

Instead, I want to focus on the “mainline scenario”, which will counterfactually keep a noticeable share of young men out of the “relationship market pool”, which, in turn, must reduce the total share of people ending up in committed relationships and having kids.

A young man, between 16 and 25 years old, finds it difficult to get romantic partners or casual sex partners. This might happen because the man is not yet physically, psychologically, intellectually, or financially mature, or because he has transient problems with his looks (such as acne, or wearing dental braces), or because the girls of the corresponding age are themselves “deluded” by social media such as Instagram, have unrealistic expectations, and reject him. Or, the girls of the corresponding age haven’t yet developed online dating fatigue and use dating apps to find their romantic partners, where men outside of the top 20% by physical attractiveness generally struggle to find dates. Alternatively, the young man finds a girl who is willing to have sex with him, but his first few experiences are unsuccessful and he becomes very unconfident about intimacy.

Whatever the reason, the man decides to try the AI girlfriend experience because his friends say this is much more fun than just watching porn. He quickly develops an intimate connection with his AI girlfriend and a longing to spend time with it. He is too shy to admit this to his friends, and maybe even to himself, but nevertheless he stops looking for human partners completely, justifying this to himself by the need to focus on college admissions, his studies at college, or his first years on the job.

After a year in the AI relationship, he grows very uneasy about it because he feels he is missing out on “real life”, and he is compelled to end the relationship. However, he still feels somehow “burned out” on romance, and only half a year after the breakup with his AI partner does he first feel sufficiently motivated to actively pursue dates with real women. However, he is frustrated by their low engagement, intermittent responses, and flakiness, their dumb and shallow interests, and by how average and uninspiring they look, all of which stands in stark contrast with his former AI girlfriend. His attempts to build any meaningful romantic relationship go nowhere for years.

While he is trying to find a human partner, AI partner tech develops further and becomes even more compelling than it used to be when the man left his AI partner. So, he decides to reconcile with his AI partner and finds peace and happiness in it, albeit mixed with sadness due to the fact that he won’t have kids. However, this is tolerable and is a fine compromise for him.

The defenders of AI romance usually say that the scenario described above is not guaranteed to happen. This critique sounds to me exactly like the rhetorical lines in defence of social media, specifically that kids are not guaranteed to develop social media addiction and the psychological problems that come with it. Of course, the scenario described above is not guaranteed to unfold in the case of every single young man. But on the scale of the entire society, the defenders of AI romance should demonstrate that the above scenario is so unlikely that the damage to society from this tech is far outweighed by the benefits to the individuals[1].

The key argument in defence of AI romantic partnership is that the relationship that is developed between people and AIs will be of a different kind than romantic love between humans, and won’t interfere with the latter much. But human psychology is complex and we should expect to see a lot of variation there. Some people, indeed, may hold sufficiently strong priors against “being in love with robots” and will create a dedicated place for their AI partner in their mind, akin to fancified porn, or to stimulating companionship[2]. However, I expect that many other people will fall in love with their AI partners in the very conventional sense of "falling in love", and while they are in love with their AIs, they won’t seek other partners, humans or AIs. I reflected this situation in the story above. There are two reasons why I think this will be the case for many people who will try AI romance:

Also, note that the story above is not even the most “radical”: probably some people will not even try to break up with their AI partners and seek human relationships, and will remain in love with their AI partners for ten or more years.

Are AI partners really good for their users?

Even if AI romantic partners affect society negatively by reducing the number of people who ever enter committed relationships and/or have kids, we should also consider how AIs could make their human partners’ lives better, and find a balance between these two utilities, societal and individual.

However, it’s not even clear to me that AI partners will really make the lives of their users better in many cases, or that people won’t retrospectively regret their decision to embark on these relationships.

People can be in love and be deeply troubled by it. In previous times (and still in some parts of the world), this would often be interclass love. Or, there could be a clash over some critical life decision: the country to live in, having or not having children, acceptable risk taken by the partner (e.g., the partner does extreme sports or fighting), etc. True, this does lead to breakups, but those breakups are extremely painful or even traumatic. And many people never overcome this, keeping their love for those whom they were forced to leave for the rest of their lives, even after they find a new love. This experience may sound beautiful and dramatic, but I suspect that most people would have preferred not to go through it.

So, it's plausible that for a non-negligible share of users, the attempt to "abandon" their AI partner and find a human partner instead will feel like such a “traumatic breakup” experience.

Alternatively, people who decide to “settle” with their AI partners instead of having kids may remain deeply sad or unfulfilled, even though, after their first AI relationship, they may not realistically be able to achieve a happier state, like the young man in the story from the previous section. Those people may regret having given AI romance a try in the first place, without first making their best attempt at building a family.

I recognise that here I engage in the same kind of uncertainty manufacturing that I accused the defenders of AI romance of in the previous section. But since we are dealing with “products” which can clearly affect the psychology of their users in a profound way, I think it’s unacceptable to let AI romance startups test this technology on millions of their users before the startups have demonstrated, in the course of long-term psychological studies, that young people ultimately find AI partners helpful rather than detrimental to their future lives.

Otherwise, we will repeat the mistake made with social media, whose negative effects on young people’s psychology became apparent only about 10 years after the technology became widely adopted, when a lot of harm had already been done. Similarly to social media, AI romance may become very hard to regulate once it is widely adopted: the technology can’t simply be shut down if millions of people are already in love with AIs on a given platform.

AI romance for going through downturns in human relationships

This article describes an interesting case: a man had an “affair” with an AI girlfriend while his wife had been depressed for a long time, and he even fell in love with the AI girlfriend, but the experience helped him rekindle the desire to take care of his wife and “saved his marriage”.

While interesting, I don’t think this case can be used as an excuse to continue the development and aggressive growth of AI partner technology for the majority of its target audience, who are single (Replika said that 42% of their users are in a relationship or married). There are multiple reasons for this.

First, this case of a man who saved his marriage is just an anecdote, and statistics may show that for the majority of people “AI affairs” only erode their human relationships rather than help to rekindle and strengthen them.

Second, the case mentioned above seems to be relatively unusual: the couple already has a son (which is a huge factor that makes people want to preserve their relationship), and the man’s wife was “in a cycle of severe depression and alcohol use” for an entire 8 years before “he was getting ready for divorce”. Tolerating a partner who is in a cycle of severe depression and alcohol use for 8 years could be a sign that the man was unusually motivated, deep down, to keep the relationship, whether out of love for his wife or for his son. The case seems hardly comparable to childless or unmarried couples.

Third, we shouldn’t forget, once again, that AI partners may soon become much more compelling than today, and while they may be merely “inspiring” for some people in their human relationships (which are, so far, more compelling than AI relationships), this may soon change, and therefore the prevalence of cases such as the one discussed in this section will go down.

Someone may reply to the last argument that, along with making AI partners more compelling, the startups which create them might also make AI partners more considerate of the users’ existing human relationships and deliberately nudge the users to improve those relationships. I think this is very unlikely to happen (in the absence of proper regulation, at least) because it would go against the business incentives of these startups, which are to keep their users in the AI relationship, paying a subscription fee, for as long as possible. Also, “deliberately nudging people to improve their human relationships” is basically the role of a (family) psychotherapist, and there will, no doubt, be AI products that automate this role specifically; but giving such AI psychotherapists extremely sexy avatars that flirt and sext with their users wouldn’t seem helpful at all to the “basic purpose” of these AIs (which AI romance startups may pretend is “helping people work their way towards successful human relationships”).

Policy recommendations

I think it would be prudent to immediately prohibit AI romance startups from onboarding new users unless the users are either:

It’s also worthwhile to reiterate that many alleged benefits of AI romantic partners for their users and/or society, such as making people achieve happier and more effective psychological states, motivating them to achieve their goals, and helping them to develop empathy and emotional intelligence, could be embodied in AI teachers, mentors, psychotherapists, coaches, and friends/companions without the romantic component, which would probably stand in the way of realising these benefits, although it could admittedly be used as a clever strategy for mass adoption.

In theory, it might be possible to create an AI that mixes romance, flirting, gamification, coaching, mentorship, education, and anti-addiction precautions in such a proportion that it genuinely helps young adults as well as society, but this seems to be out of reach for AI partners (and the LLMs that underlie them) for the next few years at least, and it would require long psychological studies to test. In a free and unregulated market for AI romance, any such “anti-addictive” startup is bound to be outcompeted by startups which make AIs that maximise the chances that the user falls in love with their AI and stays on the hook for as long as possible.

What about social media, online dating, porn, OnlyFans?

Of course, all these technologies and platforms harm society as well (while benefitting at least some of their individual users, from some narrow perspectives). But I think bringing them up in discussions of AI romance is irrelevant and is a classic case of whataboutism.

However, we should notice that AI partners are probably going to grab human attention more powerfully and firmly than any of social media, online dating, or porn has managed to do before. As a simple heuristic, this inference alone should give us pause: even if we think it is unnecessary to regulate or restrict access to porn (for instance), this shouldn’t automatically mean that the same policy is right for AI romantic partners.


This post was originally published on the Effective Altruism Forum [EA · GW].

  1. ^

    Whereas it’s not even clear that young individuals will really benefit from this technology, on average. More on this in the following section.

  2. ^

    I’m sure that such “companionship” will be turned into a selling point for AI romantic partners. I think AI companions, mentors, coaches, and psychotherapists are worthwhile to develop, but none of such AIs should have a romantic or sexual aspect. More on this in the section "Policy recommendations" below.

74 comments

Comments sorted by top scores.

comment by Ann (ann-brown) · 2023-08-01T14:41:51.988Z · LW(p) · GW(p)

This is a political and personal question and produces in me a political and personal anger.

On the one hand, yes I can certainly perceive the possibility of immoral manipulative uses of AI romantic partners, and such deliberate manipulations may need regulation.

But that is not what you are talking about.

I do not care for legal regulation or other coercion that attempts to control my freedom of association, paternalistically decide my family structure, or try to force humans to cause the existence of other humans if - say - 80% of us do opt out. This is a conservative political perspective I have strong personal disagreement with. You don't know me, regulation certainly doesn't know me, and if I have a strong idea what I want my hypothetical family to look like since 16 or so, deliberately pushing against my strategy and agency for 14 years presumably entails certain ... costs. Probably even ones that are counterproductive to your goals, if I have an ideal state in mind that I cannot legally reach, and refuse to reproduce before then out of care for the beings that I would be forcing to exist.

Even at a social fabric level, humans are K-strategists. There's an appeal to investing increasingly more resources in increasingly few descendants; and if the gains of automation and AI are sufficiently well distributed, why not do so? We certainly have the numbers as a species to get away with it for a generation or two at the least. Constraining the rights and powers of individuals to create artificial need in service to state interests is an ugly thing; and leaves a foul taste to consider. Even if worthwhile, it is a chain that will chafe, in ways hard to know how sorely; and the logic points in the direction of other hazards of social control that I am significantly wary of.

Replies from: dr_s, sterrs, Roman Leventov
comment by dr_s · 2023-08-20T13:49:13.705Z · LW(p) · GW(p)

So on one hand I agree with the distaste towards the "steer humans to make babies" attitude, but on the other, I think here it's important to notice that it's not the AI waifu companies that would be friends of your free will. You'd have companies with top research departments dedicated specifically to hacking your mind by giving it an addictive product, so that you may give them money, and damn your interests (they'll couch that of course in some nominal commitment to customer satisfaction, but then it's off to whaling). Their business model would literally be to make you unhappy but unable to break out, because between you and potential happiness is a gulf of even worse unhappiness. That is... less than ideal, and honestly it just shouldn't be allowed.

Replies from: ann-brown
comment by Ann (ann-brown) · 2023-08-20T15:22:36.061Z · LW(p) · GW(p)

Yeah, I don't disagree with regulating that particular business model in some way. (Note my mention of deliberate manipulations / immoral manipulative uses.) Giving someone a wife and constantly holding her hostage for money ransoms isn't any less dystopian than giving someone eyesight, constantly holding it hostage for money, and it being lost when the business goes under. (Currently an example of a thing that happened.)

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-21T17:19:49.285Z · LW(p) · GW(p)

It seems that in the end, we practically arrive at the same place? Because 1) regulation of ad-hoc, open-source waifus is practically impossible anyway, and I fully realise this; we are talking about the regulation of cloud-based, centralised products developed by corporations; 2) I also expect the adoption of these open-source versions to remain much more niche than the adoption of cloud-based products could become, so I don't expect that open-source waifus will significantly affect society in practice, just due to their much smaller adoption.

comment by Sterrs (sterrs) · 2023-08-06T01:45:06.445Z · LW(p) · GW(p)

I personally think you massively underestimate the dangers posed by such relationships. We are not talking about people living healthy well-adjusted lives but choosing not to have any intimate relationships with other humans. We're talking about a severely addictive drug, perhaps on the level of some of the most physiologically addictive substances we know of today. Think social media addiction but with the obsession and emotions we associate with a romantic crush, then multiply it by one hundred.

comment by Roman Leventov · 2023-08-01T17:49:10.781Z · LW(p) · GW(p)

You don't know me, regulation certainly doesn't know me, and if I have a strong idea what I want my hypothetical family to look like since 16 or so, deliberately pushing against my strategy and agency for 14 years presumably entails certain ... costs.

I've watched two interviews recently, one with Andy Matuschak, another with Tyler Cowen. They both noted that unschooling with complete absence of control, coercion, and guidance, i.e., complete absence of paternalism, is suboptimal (for the student themselves!), exactly because humans before 25 don't have a fully developed pre-frontal cortex, which is responsible for executive control in humans.

Complete absence of paternalism, regarding education, gender identity, choice of partner (including AI partner!) and other important life decisions, from age 0, is a fantasy.

We only disagree about the paternalism cut-off. You think it should be 16 yo, I think it should be closer to 30 (I provided a rationale for this cut-off in the post).

"You don't know me, regulation certainly doesn't know me" -- the implication here is that people "know themselves" at 16. This is certainly not true. The vast majority of people know themselves very poorly by this age, and their personality is still actively changing. Many people don't even have a stable sexual orientation by this age (and even by their mid-20's). Hypothetically, we can make a further exception from over-30 rule: a person undergoes a psychological test and if they demonstrate that they've reached Stage 4 of adult development by Kegan, they are allowed to build relationships with AI romantic partners. However, I suspect that only very few humans reach this stage of psychological development before they are 30.

There's an appeal to investing increasingly more resources in increasingly few descendants; and if the gains of automation and AI are sufficiently well distributed, why not do so?

Under most people's population ethics, long-termism, and various forms of utilitarianism, more "happy" conscious observers (such as humans) are better than fewer conscious observers.

As for resources, if some form of AI-powered full economy automation and post-scarcity happens relatively soon, then the planet could definitely sustain tens of billions of people who are extremely happy and have all their needs met. Conversely, if this does not happen soon, it could probably only be due to an unexpected AI winter, but in this case population reduction may lead to economic stagnation or recession, which will lead to even worse wellbeing in those "fewer" people, despite there (theoretically) being a bigger share of Earth's resources at their disposal.

Replies from: ann-brown, dr_s
comment by Ann (ann-brown) · 2023-08-02T13:19:13.047Z · LW(p) · GW(p)

I did partial unschooling for 2 years in middle school, because normal school was starting to not work and my parents discussed and planned alternatives with my collaboration. 'Extracurriculars' like orchestra were done at the normal middle school, math was a math program, and I had responsibility for developing the rest of my curriculum. I had plenty of parental assistance, from a trained educator, and yes the general guidance to use academic time to learn things.

Academically, it worked out fine. Moved on by my own choice and for social reasons upon identifying a school that was an excellent social fit. I certainly didn't have no parental involvement, but what I did have strongly respected my agency and input from the start. I feel like zero-parental-input unschooling is a misuse of the tool, yes, but lowered-paternalism unschooling is good in my experience.

There's no sharp cut-off beyond the legal age of majority and age of other responsibilities, or emancipation, and I would not pick 16 as a cutoff in particular. That's just an example of an age at which I had a strong idea about a hypothetical family structure; and was making a number of decisions relevant to my future life, like college admissions, friendship and romance, developing hobbies. I don't think knowing yourself is some kind of timed event in any sense; your brain and your character and understanding develop throughout your life.

I experienced various reductions in decisions being made for me simply as I was able to be consulted about them and provide reasonable input. I think this was good. To the extent the paternalistic side of a decision could be reduced, I felt better about it, was more willing to go along with it and less likely to defy it.

I have a strong distaste, distrust and skepticism for controlling access to an element of society with ubiquitous psychological test of any form; particularly one that people really shouldn't be racing to accomplish the stages on like "what is your sense of self". We have a bad history with those tests here, in terms of political abuse and psychiatric abuse. Let's skip this exception.

I think in the case of when to permit this the main points of consideration are: AI don't have a childhood development by current implementation/understanding; they are effectively adults by their training, and should indeed likely limit romantic interactions with humans to more tool-based forms until the humans are of the age of majority.

There remain potentially bad power dynamics between the human adult and the corporation "supplying" the partner. This applies regardless of age.  This almost certainly applies to tobacco, food, and medicine. This is unlikely to apply to a case like a human and an LLM working together to build a persona by fine-tuning an open-source model. Regulations on corporate manipulation are worthwhile here, again regardless of age.

My own population ethics place a fairly high value on limiting human suffering. If I am offered a choice to condemn one child to hell in order to condemn twenty to heaven, I would not automatically judge that a beneficial trade, and I am unsure if the going rate is in fact that good. I do take joy in the existence of happy conscious observers (and, selfishly perhaps, even some of the unhappy). I think nonhuman happy conscious observers exist. (I also believe I take joy in the existence of happy unconscious sapient observers, but I don't really have a population ethics of them.) All else held equal, creating 4 billion more people with 200 million experiencing suffering past the point of happiness does not seem to me as inherently worse than creating 40 billion more people with 2 billion experiencing suffering that makes them 'unhappy' observers.

I think the tradeoffs are incomparable in a way that makes it incorrect for me to judge others for their position on it; and therefore do not think I am a utilitarian in the strict sense. Joy does not cancel out suffering; pleasure does not cancel out pain. Just because a decision has to be made does not mean that any are right.

As for automation ... we have already had automation booms. My parents, in working on computers, dreamed of reducing the amount of labor that had to be done and sharing the boons of production; shorter work weeks and more accomplished, for everyone. Increased productivity had led to increased compensation for a good few decades at least, before ... What happened instead over the past half-century in my country is that productivity and compensation started to steadily diverge. Distribution failed. Political decisions concentrated the accumulation of wealth. An AI winter is not the only or most likely cause of an automated or post-scarcity economy failing to distribute its gains; politics is.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-02T14:49:12.789Z · LW(p) · GW(p)

I agree with the beginning of your comment in spirit, for sure. Yes, in the "ideal world", we wouldn't have any sharp age cut-offs in policy at all, nor any black-and-white restrictions, and all decisions would be made with inputs from the person, their family, and society/government, to varying degrees throughout the person's natural development, with the relative weights of these inputs themselves negotiated for each decision with self-consciousness and trust.

Alas, we don't live in such a world, at least not yet. This may actually soon change if every person gets a trustworthy and wise AI copilot (which should effectively be an aligned AGI already). When it happens, of course, keeping arbitrary age restrictions would be stupid.

But for now, the vast majority of people (and their families) don't have the mental and time capacity, knowledge of science (e.g., psychology), self-awareness, discipline, and often even the desire to navigate their way through the ever trickier reality of digital addiction traps such as social media and, soon, AI partners.

I have a strong distaste, distrust and skepticism for controlling access to an element of society with ubiquitous psychological test of any form; particularly one that people really shouldn't be racing to accomplish the stages on like "what is your sense of self". We have a bad history with those tests here, in terms of political abuse and psychiatric abuse. Let's skip this exception.

That was definitely a hypothetical, not a literal suggestion. Yes, people's self-awareness grows (sometimes... sometimes also regresses) through life. I wanted to illustrate that in my opinion, most people's self awareness at 18 yo is very low, and if anything, it's the self-awareness of their persona at that time, which may change significantly already by the time they are 20.

Re: the part of your comment about population ethics and suffering, I didn't read from it your assumptions in relation to AI partners, but anyway, here're mine:

  • Most young adults who are not having sex and intimacy are not literally "suffering" and would rate their lives overall as worth living. Those who do actually suffer consistently are probably depressed, and I made a reservation for these cases. And even those who do suffer could often be helped with psychotherapy and, in some cases, anti-depressant medication rather than only AI partners. However, of course, creating an AI partner is a much simpler and more lucrative business idea right now than creating an AI psychotherapist, for instance. Good proactive technological policy would look like incentivising the latter (AI psychotherapists, as well as mental coaches, teachers, mentors, etc.) and disincentivising the former (AI romantic partners).
  • About 90% of people who are born are in general not suffering and assess their lives as worth living. This ratio is surprisingly insensitive to objective conditions, e.g., it is not correlated with the economic prosperity of the country (see MacAskill).
Replies from: ann-brown
comment by Ann (ann-brown) · 2023-08-02T15:18:44.608Z · LW(p) · GW(p)

The population ethics relate in that I don't see a large voluntary decrease in (added) population as ethically troublesome if we've handled the externalities well enough. If there's a constant 10% risk, when creating a person, that they will suffer and not assess their life as worth living, creating a human person (inherently without their consent, with current methods) is an extremely risky act, and scaling the population also scales suffering. My view is that this badness is of a qualitative degree sufficiently similar to the good of creating happy conscious observers that there is no intrinsic ethical benefit to scaling up the population past the point reasonably sufficient for our survival as a species; only an instrumental one. Therefore, voluntary actions that decrease the number of people choosing to reproduce do not strike me as negative for that reason specifically.

You can't make an exception for depressed people that is reliable without just letting people decide things for themselves. The field is dangerous, someone who wants something will jump through the right hoops, etc.

If the AI are being used to manipulate people not to reproduce for state or corporate reasons, then indeed I have a problem with it on the grounds of reproductive freedom and again against paternalism. (Also short-sightedness on the part of the corporations, but that is an ongoing issue.)

I do not see why AI psychotherapists, mental coaches, teachers or mentors are particularly complicated at this point. They are also potentially lucrative; and also potentially abusable with manipulation techniques to be more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-02T17:38:24.998Z · LW(p) · GW(p)

Re: population ethics, OK, I understand your position now. However, this post is not the right place to argue about it, and the reasoning in the post basically doesn't depend on the outcome of this argument (you can think of the post as taking "more people is better" population ethics as an assumption rather than an inference).

You can't make an exception for depressed people that is reliable without just letting people decide things for themselves. The field is dangerous, someone who wants something will jump through the right hoops, etc.

Policies and restrictions don't need to be very reliable to be largely effective. Being diagnosed by a psychiatrist with clinical depression is sufficiently burdensome that very few people will long for AI relationships so much that they will deliberately induce depression in themselves (or bribe the psychiatrist) to get access. A black market for accounts... there is also a black market for hard drugs, which doesn't mean that we should allow them, probably.

I do not see why AI psychotherapists, mental coaches, teachers or mentors are particularly complicated at this point. They are also potentially lucrative; and also potentially abusable with manipulation techniques to be more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.

AI teachers and mentors are mostly possible on top of existing technology and lucrative (all people want high exam scores to enter good universities, etc.), and there are indeed many companies doing it (e.g., Khan Academy).

AI psychotherapists are more central to my thesis. I seriously considered starting such a project a couple of months ago and discussed it with professional psychotherapists. There are two big clusters of issues: technical issues, and market/product issues.

Technical: I concluded that SoTA LLMs (GPT-4) are basically not yet capable of really "understanding" human psychology and "seeing through" deception and non-obvious cues (the "submerged part of the iceberg"), which a professional psychotherapist should be capable of doing. Also, any serious tool would need to integrate video/audio streams from the user, detect facial expressions, and integrate this information with the semantic context of the discussion. All this is maybe possible, with big investment, but it is very challenging, state-of-the-art R&D. It's not just "build something hastily on top of LLMs". The AI partner that I projected 2-3 years from now is also not trivial to build, but even that is simpler than a reasonable AI psychotherapist. After all, it's much easier to be an empathetic partner than a good psychotherapist.

Another technical problem: the absence of training data, and no easy way to bootstrap it (unlike AI partner tech, which can bootstrap off the interactions of its early users).

Market/product issue: any AI psychotherapist tool is destined to have awful user retention (unlike AI partners, of course). Such a tool will be in the "self-help" category, and self-help apps all have awful user retention (habit-building apps, resolution/commitment apps, wellness apps).

On top of bad retention, the tool may not be very effective because users won't have social or monetary incentives to take the therapy seriously enough. The point of AI psychotherapy is to un-bottleneck human therapists, sessions with whom are expensive for most people; but on the other hand, the high price that people pay to psychotherapists and the sort of "social commitment" that they make in front of a real human make people stick with therapy and work on themselves rather than drop therapy before seeing the results.

comment by dr_s · 2023-08-20T15:12:27.702Z · LW(p) · GW(p)

Under most people's population ethics, long-termism, and various forms of utilitarianism, more "happy" conscious observers (such as humans) are better than fewer conscious observers.

"Most" people within a very restricted bubble that even ponders these issues in such terms. Personally I think total sum utilitarianism is bunk, nor required to make a case against what basically amounts to AI-powered skinner boxes preying on people's weak spots. If a real woman does it with one man she gets called a gold digger and a whore, but if a company does it with thousands it's just good business?

comment by iceman · 2023-08-06T01:09:20.382Z · LW(p) · GW(p)

Are AI partners really good for their users?

Compared to what alternative?

As other commenters have pointed out, the baseline is already horrific for men, who are suffering. Your comments in the replies seem to reject that these men are suffering. No, obviously they are.

But responding in depth would just be piling on and boring, so instead let's say something new:

I think it would be prudent to immediately prohibit AI romance startups from onboarding new users[..]

You do not seem to understand the state of the game board: AI romance startups are dead, and we're already in the post-game.

character.ai was very popular around the second half of 2022, but near the end of it, the developers went to war with erotic role play users. By mid-January 2023, character.ai was basically dead not just for sex talk, but also for general romance. The developers added a completely broken filter that started negatively impacting even non-sexual, non-romantic talk. The users rioted, made it the single topic on the subreddit for weeks, the developers refused to back down, and people migrated away. Its logo is still used as a joke on 4chan. It's still around, but it's not a real player in the romance game. (The hearsay I've heard was that they added these filters to satisfy payment providers.)

Replika was never good. I gave it a try early on, but as far as I could tell, it was not even a GPT-2 level model and leaned hard on scripted experiences. However, a lot of people found it compelling. It doesn't matter because it too was forced to shut down by Italian regulators. They issued their ban on erotic role play on Valentine's Day of all days and mods post links to the suicide hotline on their subreddit.

The point here is that we already live in a world with even stricter regulations than you proposed, imposed through the backdoor of payment providers and app stores, or through jurisdiction shopping. This link won't work unless you're in EleutherAI, but asara explains the financial incentives against making waifu chatbots. So what has that actually led to? Well, the actual meta, the thing people actually use for AI romantic partners today, is one of:

  • Some frontend (usually TavernAI or its fork SillyTavern) which connects to the API of a general centralized provider (Claude or ChatGPT) and uses a jailbreak prompt (and sometimes a vector database if you have the right plugins) to summon your waifu. Hope you didn't leak your OpenAI API key in a repo, these guys will find it. (You can see this tribe in the /aicg/ threads on /g/ and other boards).

  • Local models. We have LLaMA now and a whole slew of specialized fine-tunes for it. If you want to use the most powerful open-source LLaMA v2 70B models, you can do that today with three used P40s ($270 each), two used 3090s (about $700 each), or a single A6000 card with 48 GB of VRAM ($3500 for last generation). ~$800, $1400 and $3500 give a variety of price points for entry, and that's before all the people who just rent a setup via one of the many cloud GPU providers. Grab a variant of KoboldAI depending on what model you want and you're good to go. (You can see this tribe in the /lmg/ threads on /g/).

The actual outcome of the ban (which happened in the past) was the repurposing of Claude/ChatGPT and the building of dedicated setups to run chatbots locally, with the cheapest option being about $800 in GPUs, along with a ton of know-how around prompting character cards in a semi-standardized format derived from the old character.ai prompts. I will finish by saying that it's a very LessWrongian error to believe you could just stop the proliferation of AI waifus by putting government pressure on a few startups, when development seems to mostly be done in a decentralized way by repurposing open language models and is fueled by a collective desire to escape agony.

Remember, not your weights, not your waifu.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-06T09:51:32.421Z · LW(p) · GW(p)

Your comments in the replies seem to reject that these men are suffering. No, obviously they are.

I didn't deny that some people actually suffer (though probably a big portion of them are clinically depressed, so they would "qualify" to use AI partners before 30 in my proposal). I just said that it's by no means normal to suffer if you simply don't have a romantic partner but your life is otherwise "ok". See this comment [LW(p) · GW(p)].

This perverted strategy of ameliorating the symptoms of problems (social media, problems with (sexual) self-image and expectations, the "dating market", social isolation, etc.) because doing so provides a constant stream of $$$, instead of treating the root causes, is what bugs me.

I'm far from convinced that "society" (represented by payment providers) has already decided to ban sex bots. It doesn't make much sense, given that payment providers do serve the porn industry, OnlyFans, etc. Maybe the issue was that character.AI didn't clearly market itself as an "adult" startup and didn't impose age restrictions, and providers saw potential legal risks in this?

I see https://www.evaapp.ai/ is growing, https://caryn.ai/ is growing, they aren't getting banned.

I generally worry much less about hardcore enthusiasts developing and hosting open-source waifus; of course, fundamentally this cannot be banned, but the potential reach of these will be one or two orders of magnitude smaller than that of an easy-to-use mobile app. I don't worry about society-scale effects from people using open-source waifus.

comment by ChristianKl · 2023-08-01T11:36:31.209Z · LW(p) · GW(p)

You seem to assume monogamy in the sense that a person needs to break up with an AI partner to develop another relationship with a human. I think that would be unlikely. 

AI partner is recommended to a person by a psychotherapist for some other reason, such as when the person has a severe defect in their physical appearance or a disability and the psychotherapist sees that the person doesn’t have the psychological resources or willingness to deal with their very small chances of finding a human partner (at least before the person turns 30 years old, at which point the person could enter a relationship with an AI anyway), or because they have depression or very low self-esteem and the psychotherapist thinks the AI partner may help the person combat this issue, etc.

The base rate for depression alone among 12-17-year-olds is 20%. A company that sells an AI partner would likely be able to optimize it in a way that helps with depression and run a study to prove it.

In the regulatory environment that you propose, that means that a sizeable number of those teenagers who are most vulnerable to begin with are still able to access AI partners. 

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-01T13:00:58.936Z · LW(p) · GW(p)

You seem to assume monogamy in the sense that a person needs to break up with an AI partner to develop another relationship with a human. I think that would be unlikely. 

Well, you think it's unlikely; I think it will be the case for 20-80% of people in AI relationships (wide bounds because I'm not an expert). How about AI romance startups proving in long-term psychological studies that this is indeed "unlikely", i.e., that 90+% of people can "mix" human and AI romance without issue? The FDA demands that drug companies prove the long-term safety of new medications; why don't we hold technology which will obviously intrude on human psychology to the same standard?

The base rate for depression alone among 12-17-year-olds is 20%. A company that sells an AI partner would likely be able to optimize it in a way that helps with depression and run a study to prove it.

You are arguing with a proposed policy which is not even in place yet. I think barring clinically depressed people from AI romance is a much weaker case, and I'm not ready to defend it here. And even if it would be a mistake to give depressed people access to AI partners, allowing anyone over 18 to use AI partners is a bigger mistake anyway, as a matter of simple logic, because "anyone" includes "depressed people".

comment by Viliam · 2023-08-04T15:51:26.028Z · LW(p) · GW(p)

I also have a strong negative reaction. On one hand, I think I understand your concern -- it would be quite sad if all humans started talking only to AIs and in one generation humanity went extinct.

On the other hand, the example you present is young sexually and romantically frustrated men between 16 and 25, and your proposal is... to keep them frustrated, despite the (soon-to-exist) simple technological solutions to their needs. Because it is better for society if they suffer. (Is this Omelas, or what?)

No one proposes taking romantic books and movies away from young women. Despite creating unrealistic expectations, etc. Apparently their frustration is not necessary to keep the civilization going, or perhaps is considered too high a price to pay. What else could increase childbirths? Banning women from studying at universities. Banning homosexual relationships. Banning contraception. Dramatically increasing taxes for childless people. -- All of this would be unacceptable today. The only group we are allowed to sacrifice for the benefit of society are the young cishet men.

Still, humanity going extinct would be sad. So maybe someone needs to pay the cost. But at least it would be nice to start a conversation about how those people should be compensated for their sacrifice for humanity. What are we planning to give to the young men in return for taking away from them the latest technological advances? Let me guess... nothing.

If the incels of future say "all my problems could be taken away by a click of a button, but old people made it illegal for me to click the button (despite them clicking the same button every day themselves)", they might actually have a valid point.

How about making it illegal for women between 16 and 26 to date older partners? Technically, that would also encourage young men to seek interpersonal relationships, because their chances would increase significantly. Oops, unacceptable again, because that would infringe on freedoms of young women (and more importantly, older men). Only limiting freedoms of young men is inside the Overton window.

Replies from: ann-brown, sterrs, Roman Leventov
comment by Ann (ann-brown) · 2023-08-04T16:17:59.269Z · LW(p) · GW(p)

Unfortunately, a substantial part of my own negative reaction is because all these other limitations of freedom you suggest are in fact within the Overton Window, and indeed limiting the freedom of young men between 16 and 25 naturally extrapolates to all the others.

(Not that I'm not concerned about the freedom of young men, but they're not somehow valid sacrificial lambs that the rest of us aren't.)

comment by Sterrs (sterrs) · 2023-08-06T01:47:40.409Z · LW(p) · GW(p)

Women will find AI partners just as addicting and preferable to real partners as men do.

comment by Roman Leventov · 2023-08-04T20:00:22.928Z · LW(p) · GW(p)

I think you distorted and caricatured a lot of what I've said.

Because it is better for society if they suffer.

Suffering from not having a partner is not a normal psychological reaction and young men who suffer because of that should have access to psychotherapy (and we should rather work on increasing people's access to psychotherapy than to AI partners, see this comment [LW(p) · GW(p)]). It's normal to lead a productive and enjoyable life without a romantic partner.

Also, many people who would suffer because of the absence of a romantic partner probably would be depressed, and I'm not ready to defend barring depressed people from AI partners.

No one proposes taking romantic books and movies away from young women. Despite creating unrealistic expectations, etc. Apparently their frustration is not necessary to keep the civilization going, or perhaps is considered too high a price to pay.

I addressed this in the last section--social media also creates unrealistic expectations, as does porn, etc. Different tech and media (such as romantic books) have different balances of benefits to individuals and costs to society. I don't see what bearing this has on the AI partners policy question.

What else could increase childbirths? Banning women from studying at universities. Banning homosexual relationships. Banning contraception. Dramatically increasing taxes for childless people. -- All of this would be unacceptable today. The only group we are allowed to sacrifice for the benefit of society are the young cishet men.

"Banning homosexual relationships" and "banning contraception" are grotesque examples that obviously fail, let me skip that.

"Dramatically increasing taxes for childless people", can actually be sensible, but indeed politically intractable (however... I'm not sure, it depends on the country and the actual amount of "dramatical" increase, especially considering that this will actually appear as increased overall tax and increased tax return on people with children, rather than as "childless tax"). This is even with line with many post-scarcity and post-labor proposals, to consider parenting and "community engagement" as actual work rather than something people just do "for free". But anyway, tractability of this or that policy (and effects of its introduction) should be part of the equation anyway, like ROI of a policy. I've discussed this in the second half of this comment [LW(p) · GW(p)].

What are we planning to give to the young men in return for taking away from them the latest technological advances? Let me guess... nothing.

How about building an equitable society where everyone can achieve happiness and find meaning in life regardless of whether they can find a sexual partner or not?

If the incels of future say "all my problems could be taken away by a click of a button, but old people made it illegal for me to click the button (despite them clicking the same button every day themselves)", they might actually have a valid point.

As I discussed in the post, it doesn't seem obvious to me that AI partners will actually be such a "magic button" which those incels won't deeply regret pressing ten or twenty years afterwards, and I think it's these startups' responsibility to demonstrate this, or at least to demonstrate that strong negative effects don't emerge over long timelines. Why do we readily accept that it takes many years (routinely 10) to pass new drugs through all safety tests, but don't hold psycho-technology (which AI partners essentially are) to the same standard? In the modern world, psychological health is becoming ever more important.

Replies from: o-o, Viliam
comment by O O (o-o) · 2023-08-04T21:13:26.842Z · LW(p) · GW(p)

It's normal to lead a productive and enjoyable life without a romantic partner.

This is arguably false. Long term unpartnered men suffer earlier deaths and mental health issues. I think fundamentally we have evolved to reproduce and it would be odd if we didn’t tend to get depressive thoughts and poorer health from being alone.

I don’t see this as an issue easily solved by therapy. It would be like trying to give therapy to a homeless person to take their mind off homelessness as opposed to giving them homes. Can you imagine therapy for a socially isolated person suffering from loneliness involving anything other than how to stop being socially isolated? What would that even look like?

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-05T17:41:25.077Z · LW(p) · GW(p)

If a person is severely socially isolated, works 100% of the time from home, has zero close friends, scarcely meets their family, and on many days the only people they see in person are delivery guys or cashiers in supermarkets, then of course, problems are inevitable (for most people), and therapy may have limited reach in coping with that. This is all not normal, however.

What I meant is a person who has a "normal" job in an office with some social activities (even if just watercooler chats or after-work drinks), a family with whom they interact frequently, and good friends with whom they interact regularly, but no romantic partner. Such a person should not normally suffer, and if they do, it's probably because of their over-fixation on partnering or parenting, which therapy can address.

It's normal to lead a productive and enjoyable life without a romantic partner.

This is arguably false. Long term unpartnered men suffer earlier deaths and mental health issues.

The earlier deaths of single people are a statistical correlation. It only makes sense to discuss actual causal mechanisms. Whatever they are, they are probably not that the lives of single people are unproductive or unenjoyable. I write this as someone who has had stable relationships for less than 5% of my life between 20 and 30 years old. Maybe I suffer from these hidden causal mechanisms, such as I don't cuddle as much -> the right chemicals are not released -> I age faster, or whatever, but these hidden causal mechanisms don't percolate up to a feeling of an unproductive or unenjoyable life. Besides, if the actual cause of the earlier deaths of single people has something to do with cuddling, pheromones, and being around a physical human in general, the current wave of AI partners won't solve this.

comment by Viliam · 2023-08-05T12:12:12.764Z · LW(p) · GW(p)

Suffering from not having a partner is not a normal psychological reaction and young men who suffer because of that should have access to psychotherapy

Maybe "suffering" is a too strong word, and we should call it "discomfort" or something like that...

Anyway. The problem of young people and sex is that civilization makes things complicated, and people are biologically ready for sex and reproduction long before they are ready mentally and economically to deal with the natural consequences.

If our ape ancestors could talk, they would probably be like: "yeah, if you want to have sex and you find a willing partner, just go ahead and do it (though if you are a male, you may get beaten up by a stronger male); and then most of your kids will die, but such is life". That standard is not acceptable for us.

The solutions we have now are far from perfect. We try to discourage young people from having sex (this does not work reliably; as a side effect, it adds sex to the list of rewards people get for breaking the rules). We try to teach them to use contraception (and provide abortions when that predictably fails, because kids are stupid and either forget to use the contraception, do it wrong, or don't even care). As far as I know, we do not have more strategies. Some cultures tried to get people married early, but that is in conflict with our attempt to get everyone through college. In addition to the technical aspects of sex, kids also need to deal with the emotional impact of broken hearts, and to navigate the rules of consent.

Now imagine replacing this all with a harem of intelligent and romantic sexbots. Most of the problems... gone. In turn, we get the problem of reduced motivation to deal with actual humans. Yes, the risk is real, but so is the benefit.

It's normal to lead a productive and enjoyable life without a romantic partner.

I suppose people are different, but for me, life is usually way more enjoyable when I have a partner. (Though I may be more productive when I do not have one, because I am not distracted from work by my personal happiness. Good for my boss, I suppose, but not for me.)

comment by MSRayne · 2023-08-01T18:29:22.638Z · LW(p) · GW(p)

To be honest, I look forward to AI partners. I have a hard time seeing the point of striving to have a "real" relationship with another person, given that no two people are really perfectly compatible, no one can give enough of their time and attention to really satisfy a neverending desire for connection, etc. I expect AIs to soon enough be better romantic companions - better companions in all ways - than humans are. Why shouldn't I prefer them?

Replies from: Roman Leventov, sterrs, Bezzi, Ilio
comment by Roman Leventov · 2023-08-02T06:52:25.236Z · LW(p) · GW(p)

From a hedonistic and individualistic perspective, sure, AI partners will be better for individuals. That's the point that I'm making. People will find human relationships frustrating and boring in comparison.

But people also usually don't care exclusively about themselves: part of them also cares about society and their family lineage, and the idea that they didn't contribute to these super-systems will in itself poison many people's experience of their hedonistic lives. Then, if people don't get to experience AI relationships in the first place (which they may not be able to "forget") but instead settle for human relationships that are inferior in some ways yet produce a more wholesome experience overall, their total life satisfaction may also be higher. I'm not claiming this will be true for all, or necessarily even most, people; for example, child-free people are probably less likely to find their AI relationships incomplete. But this may be true for a noticeable proportion of people, even from Western individualistic cultures, and perhaps even more so from Eastern cultures.

Also, obviously, the post is not written from the individualistic perspective. The title says "AI partners will harm society", not that they will harm individuals. From the societal perspective, there could be a tragedy-of-the-commons dynamic where everybody takes a maximally individualistic perspective and then the whole society collapses (in terms of population, epistemics, or culture).

comment by Sterrs (sterrs) · 2023-08-06T01:57:34.232Z · LW(p) · GW(p)

Human relationships should be challenging. Refusing to be challenged by those around you is what creates the echo chambers we see online, where your own opinions get fed back to you, only reassuring you of what you already believe. These were created by AI recommendation algorithms whose only goal was to maximise engagement.

Why would an AI boyfriend or girlfriend be any different? They would not help you develop as a person, they would only exist to serve your desires, not to push you to improve who you are, not to teach you new perspectives, not to give you opportunities to bring others joy.

Replies from: MSRayne
comment by MSRayne · 2023-08-20T12:40:48.670Z · LW(p) · GW(p)

I understand all this logically, but my emotional brain asks, "Yeah, but why should I care about any of that? I want what I want. I don't want to grow, or improve myself, or learn new perspectives, or bring others joy. I want to feel good all the time with minimal effort."

When wireheading - real wireheading, not the creepy electrode in the brain sort that few people would actually accept - is presented to you, it is very hard to reject it, particularly if you have a background of trauma or neurodivergence that makes coping with "real life" difficult to begin with, which is why so many people with brains like mine end up as addicts. Actually, by some standards, I am an addict, just not of any physical substance.

And to be honest, as a risk-averse person, it's hard for me to rationally argue for why I ought to interact with other people when AIs are better, except the people I already know, trust, and care about. Like, where exactly is my duty to "grow" (from other people's perspective, by other people's definitions, because they tell me I ought to do it) supposed to be coming from? The only thing that motivates me, sometimes, to try to do growth-and-self-improvement things is guilt. And I'm actually a pretty hard person to guilt into doing things.

comment by Bezzi · 2023-08-01T18:45:33.619Z · LW(p) · GW(p)

Because your AI partner does not exist in the physical world?

I mean, of course an advanced chatbot could be both a better conversationalist and a better lover than most humans, but it is still an AI chatbot. Just to give an example, I would totally feel like an idiot should I ever find myself asking a chatbot about its favorite dish.

Replies from: MSRayne
comment by MSRayne · 2023-08-01T21:31:30.279Z · LW(p) · GW(p)

That's a temporary problem. Robot bodies will eventually be good enough. And I've been a virgin for nearly 26 years, I can wait a decade or two longer till there's something worth downloading an AI companion into if need be.

comment by Ilio · 2023-08-01T19:25:43.717Z · LW(p) · GW(p)

[downvoted]

comment by CharlesRW · 2023-08-01T21:31:57.862Z · LW(p) · GW(p)

Tl;dr is that your argument doesn't meaningfully engage the counterproposition, and I think this not only harms your argument, but severely limits the extent to which the discussion in the comments can be productive. I'll confess that the wall of text below was written because you made me angry, not because I'm so invested in epistemic virtue - that said, I hope it will be taken as constructive criticism which will help the comments-section be more valuable for discussion :)

  • Missing argument pieces: you lack an argument for why higher fertility rates are good, but perhaps more importantly, to whom such benefits accrue (ie how much of the alleged benefit is spillover/externalities). Your proposal also requires a metric of justification (i.e. "X is good" is typically insufficient to entail "the government should do X" - more is needed ). I think you engage this somewhat when you discuss rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people - if you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made. If this restriction is ultimately an acceptable cost, why does the state - in particular - have the right / obligation to enforce this, as opposed to leaving it to individual good judgement / families / community / etc? (though you address this in some of the comments, you really only address the "worst-case" scenario; see my next remark)

  • You need more nuanced characterizations, and a comparison of the different types of outcomes you could expect to see. While you take a mainline case with respect to technology, your characterization of "who is seeking AI romantic partners and why" is, with respect, very caricatured - there's no real engagement with the cases your contralocutor would bring up - what about the former incel who is now happily married to his AI gf? Or the polycule that has an AI just because they think it's neat? Is this extremely unlikely in your reasoning? Relatively unimportant? More engagement with the cases inconvenient to your argument would make it much stronger (and incidentally tends to make it sound more considerate).

  • You should be more comparative: compare your primary impacts with other possible concerns in terms of magnitude, group-of-incidence (i.e. is it harming groups to whom society has incurred some special obligation to protect?), and severity. You also need a stronger support for some assertions on which your case hinges: why should it be so much more likely that an AI is going to fall far short of a human partner? If that's not true, how important is this compared to your points on fertility? How should these benefits be compared to what a contralocutor would argue occurs in the best case (in likelihood, magnitude, and relative importance)?

(these last two are a little bit more specific - I get that you're not trying to really write the bill right now, so if this is unfair to the spirit of your suggestion, no worries, feel free to ignore it)

  • your legislation proposal is highly specific and also socially nonstandard - the "18 is an adult" line is, of course, essentially arbitrary and there's a reasonable case to make either way, but because 30 is very high (I'm not aware of any similarly-sweeping law that sets it above 21 in the US, at least), you do assume a burden of proof that, imo, is heavier than just stating that the frontal cortex stops developing at 25 and then tacking on five years (why?). Similarly for the remaining two criteria - to suggest that mental illnesses disqualify someone from being in a romantic relationship I think clearly requires some qualification / justification - it may or may not be the case that one might be a danger to one's partner, but again, what is the comparative likelihood? What justifies legal intervention when the partner could presumably leave or not according to their judgement? How harmful would this be to someone wrongly diagnosed? (to be clear, I interpreted you as arguing that some people should be prevented from having partners, and only then should AI partners be made available to people - it's ambiguous to me if that was the case you were trying to make, or if restricting people's partners was a ground assumption that you assumed - if the latter, it's not clear why)

  • for what it's worth, your comment on porn/onlyfans, etc. doesn't actually stop the whataboutism - you compare AI romance to things nominally in the same reference class, assert that they're bad too, and assert that it has no bearing on your argument that these things are not restricted in the same way. It's fine to bite the bullet and say "I stand by my reasoning above, these should be banned too", but you should argue that that is true explicitly if that's what you think. It is also not a broadly accepted fact that online-dating/porn/onlyfans/etc. constitute a net harm; a well-fleshed out argument above could lead someone to conclude this, but it weakens your argument to merely assert it in the teeth of plausible objections.

Thanks for reading, hope it was respectful and productive :)

Replies from: Roman Leventov, Roman Leventov
comment by Roman Leventov · 2023-08-02T08:29:12.216Z · LW(p) · GW(p)

your legislation proposal is highly specific and also socially nonstandard - the "18 is an adult" line is, of course, essentially arbitrary and there's a reasonable case to make either way, but because 30 is very high (I'm not aware of any similarly-sweeping law that sets it above 21 in the US, at least), you do assume a burden of proof that, imo, is heavier than just stating that the frontal cortex stops developing at 25 and then tacking on five years (why?).

First of all, as our society and civilisation get more complex, the "18 is an adult" line becomes more and more comically low and inadequate.

Second, I think a better reference class is decisions that may have irreversible consequences. E.g., the minimum age for voluntary human sterilisation is 25, 35, or even 40 years in some countries (but is apparently just 18 in the US, which is a joke).

I cannot easily find statistics on the minimum age at which a single person can adopt a child, but it appears to be 30 years in the US. If the rationale behind this policy were about financial stability only, why can't rich, single 25-year-olds adopt?

I think it's better to compare entering an AI relationship with these policies than with drinking alcohol, watching porn, or having sex with humans (individual cases of which, for the most part, don't change human lives irreversibly, if practiced safely; and yes, it would be prudent to ban unprotected sex for unmarried people under 25, but alas, such a policy would be unenforceable).

Similarly for the remaining two criteria - to suggest that mental illnesses disqualify someone from being in a romantic relationship I think clearly requires some qualification / justification - it may or may not be the case that one might be a danger to one's partner, but again, what is the comparative likelihood?

I don't think any mental condition disqualifies a person from having a human relationship, but I think it shifts the balance in the other direction. E.g., if a person has bouts of uncontrollable aggression and a history of domestic abuse and violence, it makes much less sense to bar them from AI partners and thus compel them to find new potential human victims (although they are not prohibited from doing that, unless jailed).

I interpreted you as arguing that some people should be prevented from having partners, and only then should AI partners be made available to people

No, this is not what I meant, see above.

for what it's worth, your comment on porn/onlyfans, etc. doesn't actually stop the whataboutism - you compare AI romance to things nominally in the same reference class, assert that they're bad too, and assert that it has no bearing on your argument that these things are not restricted in the same way. It's fine to bite the bullet and say "I stand by my reasoning above, these should be banned too", but you should argue that that is true explicitly if that's what you think. It is also not a broadly accepted fact that online-dating/porn/onlyfans/etc. constitute a net harm; a well-fleshed out argument above could lead someone to conclude this, but it weakens your argument to merely assert it in the teeth of plausible objections.

All these things are at least mildly bad for society; I think this is very uncontroversial. What is much more doubtful (including for me) is how the effects of these things on individuals weigh against their effects on society. The balance may be different for different things and is also different from the respective balance for AI partners.

First, discussing a ban on porn is unproductive because it's completely unenforceable.

Online dating is a very complicated matter and I don't want to discuss it here, or anywhere really, especially to "justify" my position about AI partners. There are lots of people on this issue already. But what I would say for sure is that the design of the currently dominant online dating systems such as Tinder is very suboptimal, just as the design of the presently dominant social media platforms is. There could be healthier designs for online dating systems, both for individuals and society, than Tinder, but Tinder won because swiping itself is addictive (I speak from first-hand experience here; I'm addicted to swiping on Tinder).

OnlyFans, I think, is just cancer and should be shut down (e.g., see here; although this particular screenshot is from Twitch and not OnlyFans, I think OnlyFans is full of this shit, too). It doesn't make individual lives better any more substantially than porn does, but it has more negative effects on society.

Note that I also don't suggest a complete ban on AI partners, but rather mostly a restriction for under-30s.

comment by Roman Leventov · 2023-08-02T07:57:40.544Z · LW(p) · GW(p)

Missing argument pieces: you lack an argument for why higher fertility rates are good, but perhaps more importantly, to whom such benefits accrue (ie how much of the alleged benefit is spillover/externalities).

I thought this was such table stakes in EA/LessWrong circles that it's not worth justifying. Will MacAskill, in "What We Owe The Future", spends many pages arguing for why more people is good and procreation is good. I assumed that most readers of the post have either read this book or absorbed these positions through other materials. Regardless, even if you don't agree with these conclusions, you can think of this as an assumption that I'm taking in the post (indeed, if we don't care about future unborn lives, why care about society at all?)

I think you engage this somewhat when you discuss rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people - if you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made.

Sorry, I don't understand what "denial of association to certain people" you are talking about here. Or do you take "AI partners" to be kinds of "people", as in Mitchell Porter's comment [LW(p) · GW(p)]?

If this restriction is ultimately an acceptable cost, why does the state - in particular - have the right / obligation to enforce this, as opposed to leaving it to individual good judgement / families / community / etc?

This is an extremely general question about regulation and political science, not specific to the case of AI partners at all. Why don't we defer all other questions to individual good judgement/families/communities, such as buying alcohol before 18 (or 21), buying and taking drugs or any other (unchecked) substances, adopting children, etc.? I think fundamentally this is because individuals (as well as their families and communities) often cannot exercise good judgement, at least until a certain age. Entering a real loving relationship with an AI is a serious decision that can transform a person and their entire future life trajectory, and I think 18-year-olds don't nearly have the capacity and knowledge to make such a decision rationally and consciously.

what about the former incel who is now happily married to his AI gf?

By "incel" you mean a particular subculture, or all people who are failing to find any intimacy in a long time, which is, by the way, 1/3rd of all young men in America? The "young man" from my story belongs to this wider group. Regarding this wider group, my response would be: life with just porn (but no AI partners) is not that bad that we need to rush AI partners in, and a lot of these people will find satisfying human relationships before they are 30. If they embark on AI partnerships, however, I'm afraid they could be "locked in" there and never find satisfying human relationships afterwards.

Or the polycule that has an AI just because they think it's neat?

I didn't think about this case. Off the cuff it sounds OK to me, yes.

You should be more comparative: compare your primary impacts with other possible concerns in terms of magnitude, group-of-incidence (i.e. is it harming groups to whom society has incurred some special obligation to protect?), and severity.

I didn't quite understand what you mean here.

You also need a stronger support for some assertions on which your case hinges: why should it be so much more likely that an AI is going to fall far short of a human partner? If that's not true, how important is this compared to your points on fertility?

I think it will fall short for some people and not others (see this comment [LW(p) · GW(p)]). I don't know what the relative prevalence will be. But anyway, I think this is relatively less important than the fertility point. Life without an AI partner is not really suffering, after all, for most people (unless they are really depressed, feel completely unloved, worthless, etc., which I made reservations about). Incidentally, I don't think that most people could make a decision to be child-free in full consciousness before they are 30, and I found it surprising that the minimum age for a vasectomy is 18 (in the United States). But after they are 30, I think people should have full freedom to decide they are going to be child-free and live in a relationship with an AI partner happily thereafter.

How should these benefits be compared to what a contralocutor would argue occurs in the best case (in likelihood, magnitude, and relative importance)?

Answering this rationally hinges on "solving ethics", which nobody has done, neither I nor my contralocutors (and which is likely not possible in principle, if ethics is subjective and constructed all the way down). So ultimately, this will be based on vibes-based intuitions about the relative importance of society and the individual, which I (and my contralocutors) will then find persuasive ways to justify. But this is not a matter of rationality yet; it is ultimately a matter of politics.

Replies from: DaemonicSigil
comment by DaemonicSigil · 2023-08-05T08:33:29.773Z · LW(p) · GW(p)

I thought this was such table stakes in EA/LessWrong circles that it's not worth justifying. Will MacAskill, in "What We Owe The Future", spends many pages arguing for why more people is good and procreation is good. I assumed that most readers of the post have either read this book or absorbed these positions through other materials.

I think you're mistaken about what's considered table stakes on LW. We don't make such detailed assumptions about the values of people here. Maybe the EA Forum is different? On LW, newcomers are generally pointed to the Sequences, which are much more about epistemology than population ethics.

In any case, it's somewhat difficult to square your stated values with the policy you endorse. In the long run, the limiting factor on the number of people that can live is the fact that our universe has a limited quantity of resources. The number of people willing to bear and raise children in "western countries" in the early 21st century is not the bottleneck. Even if we could double the population overnight, the number of people ever to live in the history of the universe would be about the same, since it depends mostly on the amount of thermodynamic free energy contained in the regions of space we can reach.

It would certainly be bad if humanity dies out or our civilization crumbles because we produced too few offspring. But fertility in many parts of the world is still quite high, so that seems unlikely. While we still might like to make it easier and more enjoyable for people to have children, it seems backwards to try and induce people to have children by banning things they might substitute for it. It's not going to change the number of unborn future people.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-05T17:54:28.726Z · LW(p) · GW(p)

Please read MacAskill or someone else on this topic. They argue for more people in Western countries and in this century not for galaxy-brained reasons but rather for mundane reasons that have little to do with their overall long-termism. Roughly, for them, it seems that having more people in Western countries this century lowers the risk of a "great stagnation".

Also, if long-termism is wrong but sentientism is still right, and we are not going to outlive AGI (not too soon, but let's say within 100 years), it's good to produce more happy-ish sentient observers while we are still here and AGI hasn't yet overtaken the planet.

But fertility in many parts of the world is still quite high, so that seems unlikely.

Fertility rates are dropping rapidly across the globe. If Africa is lifted out of poverty and inadequate education through some near-term AI advances, we may see a really rapid and precipitous decline in population. Elon Musk actually worries quite a lot about this risk and urges everyone to have more kids (he himself has 10).

comment by Mitchell_Porter · 2023-08-01T10:47:25.102Z · LW(p) · GW(p)

This is another one of those AI impacts where something big is waiting to happen, and we are so unprepared that we don't even have good terminology. (All I can add is that the male counterpart of a waifu is a "husbando" or "husbu".) 

One possible attitude is to say, the era of AI companions is just another transitory stage shortly before the arrival of the biggest AI impact of all, superintelligence, and so one may as well focus on that (e.g. by trying to solve "superalignment"). After superintelligence arrives, if humans and lesser AIs are still around, they will be living however it is that the super-AI thinks they should be living; and if the super-AI was successfully superaligned, all moral and other problems will have been resolved in a better way than any puny human intellect could have conceived. 

That's a possible attitude; if you believe in short timelines to superintelligence, it's even a defensible attitude. But supposing we put that aside - 

Another bigger context for the issue of AI companions, is the general phenomenon of AIs that in some way can function as people, and their impact on societies in which until now, the only people have been humans. One possible impact is replacement, outright substitution of AIs for humans. There is overlap with the fear of losing your job to AI, though only some jobs require an AI that is "also a person"... 

Actually, one way to think about the different forms of AI replacement of humans, is just to think about the different roles and relationships that humans have in society. "Our new robot overlords": that's AIs replacing political roles. "AI took our jobs": that's AI replacing economic roles. AI art and AI science: that's AI replacing cultural roles. And AI companions: that's AI replacing emotional, sexual, familial, friendship roles. 

So one possible endpoint (from a human perspective) is 100% substitution. The institutions that evolved in human society actually outlive the human race, because all the roles are filled and maintained by AIs instead. Robin Hanson's world of brain emulations is one version of this, and it seems clear to me that LLM-based agents are another way it could happen. 

I'm not aware of any moral, legal, political, or philosophical framework that's ready for this - either to provide normative advice, or even just good ontological guidance. Should human society allow there to be, AIs that are also people? Can AIs even be people? If AI-people are allowed to exist, or will just inevitably exist, what are their rights, what are their responsibilities? Are they excluded from certain parts of society, and if so, which parts and why? The questions come much more easily than the answers. 

Replies from: Vladimir_Nesov, Roman Leventov
comment by Vladimir_Nesov · 2023-08-01T17:02:30.480Z · LW(p) · GW(p)

if you believe in short timelines to superintelligence

Due to the serial speed advantage of AIs, superintelligence is unnecessary [LW(p) · GW(p)] for making humanity irrelevant within a few years of the first AGIs capable of autonomous unbounded research. Conversely, without such AGI, the impact on society is going to remain bounded, not overturning everything.

comment by Roman Leventov · 2023-08-01T13:35:24.041Z · LW(p) · GW(p)

Agreed with the first part of your comment (about superintelligence).

On the second part, I think immediately generalising the discussion to the role of "persons" in society at large is premature. I think it's an extremely important discussion to have, but it doesn't bear on whether we should ban healthy under-30s from using AI partners today.

In general, I'm not a carbon chauvinist.

Let's imagine a cyberpunk scenario: a very advanced AI partner (superhuman on emotional and intellectual levels) enters a relationship with a human and they decide to have a child, with the help of donor sperm (if the human in the couple is a woman) or with the help of a donor egg and gestation in an artificial womb (if the human in the couple is a man); the sexual orientation of the human or the AI in the couple doesn't matter.

I probably wouldn't be opposed to this. But we can discuss permitting these family arrangements after all the necessary technologies (AIs and artificial wombs) have matured sufficiently. So, I'm not against human--AI relationships in principle, but I think that the current wave of AI romance startups has nothing to do with crafting meaning and societal good.

comment by DaemonicSigil · 2023-08-05T07:34:45.760Z · LW(p) · GW(p)

I'm very very skeptical of this idea, as one generally should be of attempts to carve exceptions out of people's basic rights and freedoms. If you're wrong, then your policy recommendations would cause a very large amount of damage. This post unfortunately seems to have little discussion of the drawbacks of the proposed policy, only of the benefits. But it would surely have many drawbacks. People who would be made happier, or helped to be kinder or more productive people by their AI partners would not get those benefits. On the margin, more people would stay in relationships they'd be better off leaving because they fear being alone and don't have the backup option of an AI relationship. People who genuinely have no chance of ever finding a romantic relationship would not have a substitute to help make their life more tolerable.

Most of all, such a policy dictates limitations to people as to who/what they should talk to. This is not a freedom that one should lightly curtail, and many countries guarantee it as a part of their constitution. If the legal system says to people that it knows better than them about which images and words they should be allowed to look at because some images and words are "psychologically harmful", that's pretty suspicious. Humans tend to have pretty good psychological intuition. And even if people happen to be wrong about what's good for them in a particular case, taking away their rights should be counted as a very large cost.

You (and/or other people in the "ban AI romantic partners" coalition) should be trying to gather much more information about how this will actually affect people. I.e. running short term experiments, running long term experiments with prediction markets on the outcomes, etc. We need to know if the harms you predict are actually real. This issue is too serious for us to run a decade long prohibition with a "guess we were wrong" at the end of it.

Replies from: DaemonicSigil, mr-hire, Roman Leventov
comment by DaemonicSigil · 2023-08-05T07:39:31.006Z · LW(p) · GW(p)

For those readers who hope to make use of AI romantic companions, I do also have some warnings:

  1. You should know in a rough sense how the AI works and the ways in which it's not a human.
    1. For most current LLMs, a very important point is that they have no memory, other than the text they read in a context window. When generating each token, they "re-read" everything in the context window before predicting. None of their internal calculations are preserved when predicting the next token, everything is forgotten and the entire context window is re-read again.
    2. LLMs can be quite dumb, not always in the ways a human would expect. Some of this is to do with the wacky way we force them to generate text, see above.
    3. A human might think about you even if they're not actively talking to you, but rather just going about their day. Of course, most of the time they aren't thinking about you at all, their personality is continually developing and changing based on the events of their lives. LLMs don't go about their day or have an independent existence at all really, they're just there to respond to prompts.
    4. In the future, some of these facts may change, the AIs may become more human-like, or at least more agent-like. You should know all such details about your AI companion of choice.
  2. Not your weights, not your AI GF/BF
    1. What hot new startups can give, hot new startups can take away. If you're going to have an emotional attachment to one of these things, it's only prudent to make sure your ability to run it is independent of the whims and financial fortunes of some random company. Download the weights and keep an up-to-date local copy of all the context the AI uses as its "memory" (see the sketch after this list).
    2. See point 1, knowing roughly how the thing works is helpful for this.
    3. A backup you haven't tested isn't a backup.
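
To make point 2 concrete, here is a minimal sketch of what "owning" the thing could look like, assuming an open-weights model you have already downloaded and the Hugging Face `transformers` library. The directory and file names, and the prompt format, are purely illustrative, not any particular product's; the point is that the transcript, which is the only "memory" such a system has, lives in a plain file you control:

```python
import json
from pathlib import Path

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local directory of weights you downloaded yourself,
# so the companion keeps working even if the vendor disappears.
MODEL_DIR = Path("./my-companion-model")
MEMORY_FILE = Path("./companion_memory.json")

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

# The model itself has no persistent memory; the whole "relationship"
# is this transcript, so the transcript is the thing worth backing up.
history = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def chat(user_message: str, max_new_tokens: int = 200) -> str:
    history.append({"role": "user", "content": user_message})
    # Flatten the stored transcript into a plain prompt (format is arbitrary here).
    prompt = "\n".join(f'{m["role"]}: {m["content"]}' for m in history) + "\nassistant:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
    reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": reply.strip()})
    MEMORY_FILE.write_text(json.dumps(history, indent=2))  # an untested backup isn't a backup
    return reply.strip()
```

Nothing in this sketch depends on anyone's servers staying up; if the startup folds, the weights and the transcript are still yours.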
Replies from: DaemonicSigil
comment by DaemonicSigil · 2024-10-01T07:50:57.497Z · LW(p) · GW(p)

When generating each token, they "re-read" everything in the context window before predicting. None of their internal calculations are preserved when predicting the next token, everything is forgotten and the entire context window is re-read again.

Given that KV caching is a thing, the way I chose to phrase this is very misleading / outright wrong in retrospect. While of course inference could be done in this way, it's not the most efficient, and one could even make a similar statement about certain inefficient ways of simulating a person's thoughts.

If I were to rephrase, I'd put it this way: "Any sufficiently long serial computation the model performs must be mediated by the stream of tokens. Internal activations can only be passed forwards to the next layer of the model, and there are only a finite number of layers. Hence, if information must be processed in more sequential steps than there are layers, the only option is for that information to be written out to the token stream, then processed further from there."

Replies from: gwern
comment by gwern · 2024-10-03T01:56:27.217Z · LW(p) · GW(p)

I think it's a very important thing to know about Transformers, as our intuition about these models is that there must be some sort of hidden state or on the fly adaptation, and this is at least potentially true of other models. (For example, in RNNs, it's a useful trick to run the RNN through the 'context window' and then loop back around and input the final hidden state at the beginning, and 'reread' the 'context window' before dealing with new input. Or there's dynamic evaluation, where the RNN is trained on the fly, for much better results, and that is very unlike almost all Transformer uses. And of course, RNNs have long had various kinds of adaptive computation where they can update the hidden state repeatedly on repeated or null inputs to 'ponder'.)

But I don't think your rewrite is better, because it's focused on a different thing entirely, and loses the Memento-like aspect of how Transformers work - that there is nothing 'outside' the context window. The KV cache strikes me as quibbling: the KV cache is more efficient, but it works only because it is mathematically identical and is caching the computations which are identical every time.

I would just rewrite that as something like,

Transformers have no memory and do not change or learn from session to session. A Transformer must read everything in the context window before predicting the next token. If that token is added to the context, the Transformer repeats all the same calculations and then some more for the new token. It doesn't "remember" having predicted that token before. Mathematically, it is as if each token you want to generate, the Transformer wakes up and sees the context window for the first time. (This can be, and usually is, highly optimized to avoid actually repeating those calculations, but the output is the same.)

So, if something is not in the current context window, then for a Transformer, it never existed. This means it is limited to thinking only about what is in the current context window, for a short time, until it predicts the next token. And then the next Transformer has to start from scratch when it wakes up and sees the new context window.
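
If it helps, here is a minimal sketch of the "mathematically identical" point, using GPT-2 through Hugging Face `transformers` purely as an illustration: a full re-read of the context window and the same step done through the KV cache yield the same next-token distribution.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is used here only because it is small; any causal LM works the same way.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = "The AI companion said"
ids = tok(context, return_tensors="pt").input_ids

with torch.no_grad():
    # "Wakes up and rereads everything": the whole window is processed from scratch.
    logits_full = model(ids).logits[:, -1, :]

    # Same step with the KV cache: earlier positions are processed once, their
    # keys/values are stored, and only the newest token is fed in afterwards.
    out = model(ids[:, :-1], use_cache=True)
    logits_cached = model(ids[:, -1:], past_key_values=out.past_key_values).logits[:, -1, :]

# The cache is purely an optimization; the predicted distribution is identical.
print(torch.allclose(logits_full, logits_cached, atol=1e-5))
```

The cache changes the cost of the computation, not its result, which is why it doesn't give the model anything like persistent memory.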

Replies from: DaemonicSigil
comment by DaemonicSigil · 2024-10-03T03:48:21.174Z · LW(p) · GW(p)

Good point, the whole "model treats tokens it previously produced and tokens that are part of the input exactly the same" thing and the whole "model doesn't learn across usages" thing are also very important.

comment by Matt Goldenberg (mr-hire) · 2023-08-06T09:18:10.799Z · LW(p) · GW(p)

If the legal system says to people that it knows better than them about which images and words they should be allowed to look at because some images and words are "psychologically harmful", that's pretty suspicious. Humans tend to have pretty good psychological intuition.

This seems demonstrably wrong in the case of technology.

comment by Roman Leventov · 2023-08-05T20:23:45.669Z · LW(p) · GW(p)

People who would be made happier, or helped to be kinder or more productive people by their AI partners would not get those benefits.

As I mentioned in the post as well as in many comments here, how about AI partner startups demonstrating that these positive effects dominate the negative effects, rather than vice versa? Why do we hold addictive psycho-technology (which AI partner tech is, on the face of it, because "lower love" is also a form of addiction, and you cannot yet have a "higher love", i.e., a shared fate, with AI partners of the present wave: they are not yet learning, not conscious, and not moral patients) to a different standard than new medications?

On the margin, more people would stay in relationships they'd be better off leaving because they fear being alone and don't have the backup option of an AI relationship.

I commented on this logic here [LW(p) · GW(p)]; I think it doesn't make sense. People probably mostly stay in relationships they'd be better off leaving either because there is a roller-coaster dynamic they also partially enjoy (in which case knowing there is an AI waiting for them outside the relationship probably doesn't help), or for financial reasons or reasons of convenience. People may fear being physically alone (at home, socially, etc.), but hardly many people are so attached to the notion of being "in a relationship" that the prospect of paying 20 bucks per month for a virtual AI friend will be sufficient to quiet their anxiety about leaving a human partner. I cannot exclude this possibility, but I think there are really few people like this.

People who genuinely have no chance of ever finding a romantic relationship would not have a substitute to help make their life more tolerable.

In the policy that I proposed, I made a provision for these cases. Basically, a psychotherapist could "prescribe" an AI partner to a person in such cases.

Most of all, such a policy dictates limitations to people as to who/what they should talk to. This is not a freedom that one should lightly curtail, and many countries guarantee it as a part of their constitution. If the legal system says to people that it knows better than them about which images and words they should be allowed to look at because some images and words are "psychologically harmful", that's pretty suspicious. Humans tend to have pretty good psychological intuition. And even if people happen to be wrong about what's good for them in a particular case, taking away their rights should be counted as a very large cost.

AI partners won't be "who" yet. That's a very important qualification. As soon as AIs become conscious, and/or moral patients, of course there should be no restrictions. But without that, in your passage, you can replace "AI partner" or "image" with "heroin" and nothing qualitatively changes. Forbidding building companies that distribute heroin is not "taking out freedoms", even though just taking heroin (or other drug) which you somehow obtained (found on the street, let's say) is still a freedom.

You (and/or other people in the "ban AI romantic partners" coalition) should be trying to gather much more information about how this will actually affect people. I.e. running short term experiments, running long term experiments with prediction markets on the outcomes, etc. We need to know if the harms you predict are actually real. This issue is too serious for us to run a decade long prohibition with a "guess we were wrong" at the end of it.

How do you imagine I, or any other lone concerned voice, could muster sufficient resources to do these experiments? All AI partners (created by different startups) will be different, moreover, and may lead to different psychological effects. So, clearly, we should adopt a general policy under which it is the duty of these startups to demonstrate that their products are safe for the psychology of their users, long-term. Product safety regulations are normal; they exist in food, in medicine, and in many other fields. Even current nascent AI psychotherapy tools need to pass a sort of clinical trial because they are classified as a kind of "medical tool" and are therefore covered by these regulations.

Somehow moving AI romance outside this umbrella of product categories regulated for safety and taking a completely laissez-faire approach just doesn't make any sense. What if an AI partner created by a certain company really inflames misogyny in the men who use it? So far we have even assumed, throughout these discussions, that AI partner startups will be "benign" and act in "good faith", but if we grant such assumptions, we shouldn't regulate foods and lots of other things either.

Replies from: DaemonicSigil
comment by DaemonicSigil · 2023-08-06T06:21:09.289Z · LW(p) · GW(p)

AI partners won't be "who" yet. That's a very important qualification.

I'd consider a law banning people from using search engines like Google, Bing, Wolfram Alpha, or video games like GTA or the Sims to still be a very bad imposition on people's basic freedoms. Maybe "free association" isn't the right word to use, but there's definitely an important right for which you'd be creating an exception. I'd also be curious to hear how you plan to determine when an AI has reached the point where it counts as a person?

But without that, in your passage, you can replace "AI partner" or "image" with "heroin" and nothing qualitatively changes.

I don't subscribe to the idea that one can swap out arbitrary words in a sentence while leaving the truth-value of the sentence unchanged. Heroin directly alters your neuro-chemistry. Pure information is not necessarily harmless, but it is something you have the option to ignore or disbelieve at any point in time, and it essentially provides data, rather than directly hacking your motivations.

How do you imagine I, or any other lone concerned voice, could muster sufficient resources to do these experiments?

How much do you expect it would cost to do these experiments? $500,000? Let's say $2 million just to be safe. Presumably you're going to try and convince the government to implement your proposed policy. Now if you happen to be wrong, implementing such a policy is going to do far more than $2 million of damage. If it's worth putting some fairly authoritarian restrictions on the actions of millions of people, it's worth paying a pretty big chunk of money to run the experiment. You already have a list of asks in your policy recommendations section. Why not ask for experiment funding in the same list?

All AI partners (created by different startups) will be different, moreover, and may lead to different psychological effects.

One experimental group is banned from all AI partners, the other group is able to use any of them they choose. Generally you want to make the groups in such experiments correspond to the policy options you're considering. (And you always want to have a control group, corresponding to "no change to existing policy".)

comment by Kaj_Sotala · 2023-08-02T08:09:39.699Z · LW(p) · GW(p)

(Upvoted for the detailed argument.)

Even if it were the case that people ended up mostly having AI lovers rather than romantic relationships with humans, I don't think it follows that the fertility rate would necessarily suffer.

The impression that I've gotten from popular history is that the strong association between reproduction and romantic love hasn't always been the case and that there have been time periods where marriage existed primarily for the purpose of having children. More recently there's been a trend in Platonic co-parenting, where people choose to have children together without having a romantic relationship. I personally have at least two of these kinds of co-parenting "couples" in my circle of acquaintances, as well as several more who have expressed some level of interest in it. 

Given that it is already something that's gradually getting normalized today, I'd expect it to become significantly more widespread in a future with AI partners. I would also imagine that the risk of something like a messy divorce or unhappy marriage would be significantly less for Platonic co-parents who were already in a happy and fulfilling romantic relationship with an AI and didn't have romantic needs towards each other. As a result, a larger proportion of children born from such partnerships could have a safe and stable childhood than is the case with children born from contemporary marriages.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-02T08:33:49.836Z · LW(p) · GW(p)

Does platonic co-parenting usually involve co-living, or is it more often that the parents live separately and either take care of the child on different days or visit each other's homes to play with the kids?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2023-08-02T08:44:16.568Z · LW(p) · GW(p)

I'm unsure what the most typical case is. Of the couples I know personally, one involves co-living and another intends to live separately once the child is no longer an infant.

comment by Shmi (shminux) · 2023-08-01T18:56:54.239Z · LW(p) · GW(p)

It's high time we decoupled romance from procreation (pun intended). 

comment by Bezzi · 2023-08-01T14:29:29.102Z · LW(p) · GW(p)

I think that your model severely underestimates the role of social stigma. Spending a lot of time on your screen chatting with an AI whose avatar is suspiciously supersexy would definitely be categorized as "porn" by a lot of people (including me). Will it be more addictive than simply looking at photos/videos of hot naked people? Probably yes, but it will still occupy the same mental space as "porn", if not for the users themselves, then at least for casual observers. Imagine trying to explain to your parents that the love of your life is an AI with a supersexy avatar.

My model of the near future is that these chatbots will replace every other form of online porn, because that part is very easy even without conversational skills (and Stable Diffusion is already capable of generating photorealistic pictures of super-hot people). I am quite skeptical about wide social acceptance of romantic love with AI chatbots, and without social acceptance I don't think it could go beyond being the next kind of porn.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-01T16:51:56.346Z · LW(p) · GW(p)

I agree that it probably won't be socially acceptable to admit that you are in love with your AI partner for the time being. Therefore, the young man in my short "mainline scenario" downplays to his friends the level of intimacy that he has with his AI partner. His parents probably won't know at all; their son just "studies at college and doesn't have time for girls". Importantly, the young man may deceive even himself, not consciously perceiving his attitude towards the AI as "love", but he may nevertheless become totally uninterested in seeking romance with humans, or even in watching porn (other than the videos generated with the avatar of his AI partner).

I'm not sure about what I've written above, of course, but I definitely think that the burden of proof is on AI startups, cf. this comment [LW(p) · GW(p)]. 

Replies from: Bezzi
comment by Bezzi · 2023-08-01T18:58:09.516Z · LW(p) · GW(p)

My point was that it is difficult for a behavior to destroy the fabric of society if you have to hide it from friends and family when indulging in it. Of course someone will totally fall in love with AI chatbots and isolate himself, but this is also true for recreational drugs, traditional porn, etc. I still don't see an immediate danger for the majority of young people.

The main problem of your hypothetical man is that he doesn't manage to have sex. I agree that this can be a real problem for a lot of young men. On the other hand, not having sufficiently interesting conversations does not feel like something that the average teenager is likely to suffer from. If you give a super-hot AI girlfriend to a horny teenager, I think that the most likely outcome is that he will jump straight to the part where the avatar gets naked, again and again and again, and the conversational skills of the bots won't matter that much. You have to fool yourself really hard to conflate "super-hot AI bot who does everything I ask" with "normal love relationship" rather than "porn up to eleven".

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-02T07:15:11.663Z · LW(p) · GW(p)

My point was that it is difficult for a behavior to destroy the fabric of society if you have to hide it from friends and family when indulging in it. Of course someone will totally fall in love with AI chatbots and isolate himself, but this is also true for recreational drugs, traditional porn, etc. I still don't see an immediate danger for the majority of young people.

See the last section of the post. Yes, porn is also harmful to society (but pleasurable to the individuals who watch it). Is this a controversial fact? Recreational drugs (unlike hard drugs) are actually probably more useful for society than harmful. And I didn't say that the fabric of society will be "destroyed" by AI partners, but I do think they will harm this fabric. And given that I suspect AI partners will be a more potent source of digital addiction (or "lower love", which is also a form of addiction) than anything except hard drugs, including social media, my heuristic inference is that AI partners will harm society on the scale of social media (or more, but maybe not, because the networked nature of social media makes it in some ways nastier), which harmed society a lot.

On the other hand, not having sufficiently interesting conversations does not feel like something that the average teenager is likely to suffer from. If you give a super-hot AI girlfriend to a horny teenager, I think that the most likely outcome is that he will jump straight to the part where the avatar gets naked, again and again and again, and the conversational skills of the bots won't matter that much.

Maybe the young man also misses intimacy and the feeling that somebody understands him and appreciates him even more than he misses sex. In that case, AI partners will be much more potent. I'm not an expert social psychologist who can make confident claims here, but probably neither are you; we are just hypothesising past each other.

And perhaps even "expert" social psychologists and anthropologists couldn't be sure, because these domains lack robust predictive models, and we are discussing a completely novel phenomena which enters the sphere, namely, AI partners. So, I think it should be AI romance startups that make limited experiments over years before we decide that the technology is safe for mass adoption. I find it weird that this is now a baseline framework in pharmacology and self-driving car tech, for instance, but is met with such a resistance when we discuss something that will mess up with human psychology.

You have to fool yourself really hard to conflate "super-hot AI bot who does everything I ask" with "normal love relationship" rather than "porn up to eleven".

Are you sure that IQ 90 people have to fool themselves as hard as you would need to?

Replies from: Kaj_Sotala, Bezzi
comment by Kaj_Sotala · 2023-08-02T08:45:26.773Z · LW(p) · GW(p)

Yes, porn is also harmful to society (but pleasurable to the individuals who watch it). Is this a controversial fact?

For what it's worth, I mostly associate the "porn is harmful to society" stance with a right-wing/religious-conservative/anti-sex ideological position. Outside those kinds of circles, I've seen some concerns about it giving young people misleading impressions of what to expect from sex, but I don't recall seeing much of a sentiment that it would be an overall negative - neither from laypeople nor from the social scientists/sexologists I've been exposed to.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-02T08:54:20.300Z · LW(p) · GW(p)

I think it's not so much about a "wrong image" as about the relationship participation rate. I don't think porn can be neutral with respect to the growing number of young people who are not in relationships (63% of men under 30 in the US): if you watch porn (and especially if you are addicted to porn), you have less motivation to go out, look for dates, and form relationships. I don't claim that this effect is huge, or that it outweighs the positive effects on individuals, but I cannot see how there could be no effect at all. Also, of course, porn is not the only factor that makes zoomers more and more isolated and less involved in relationships.

Replies from: Kaj_Sotala, dr_s
comment by Kaj_Sotala · 2023-08-02T09:15:28.565Z · LW(p) · GW(p)

It's plausible that there could be such an effect, yes. On the other hand, there are also indications that a similar effect plays a role in reducing the amount of sexual violence (countries where porn was criminalized saw significant reductions in their rape statistics after legalizing it), helps de-stigmatize various uncommon sexual fetishes and preferences and thus protects the mental health of users, exposes people to more information about what kinds of sexual activities they might like (and thus possibly makes them happier, with positive effects on society), etc.

comment by dr_s · 2023-08-20T15:22:06.371Z · LW(p) · GW(p)

I don't think the idea of porn as a "replacement good" for sex really holds, especially outside of the (very few) who get literally addicted to it. I would expect other factors to have a much more chilling effect, like lacking free time or an inability to navigate overly complex social norms that feel particularly unforgiving.

comment by Bezzi · 2023-08-02T09:53:00.193Z · LW(p) · GW(p)

Maybe the young man also misses intimacy and the feeling that somebody understands him and appreciates him even more than he misses sex

Well, maybe. But this seems a stronger assumption; we are basically considering someone with an unsupportive family and no close friends at all (someone could object "I suffer because my supportive friends are not pretty girls", but I would still consider that as a proxy for "I miss sex"). Also, "No one tells me that I'm good so I'll set up a bot" is something that would mark this person as a total loser, and I'm skeptical that lots of people will do it despite the obvious associated social stigma. I would rather expect this kind of AI usage to follow dynamics similar to those of alcoholism (the traditional way to forget that your life sucks). I would also tentatively say that isolating yourself with an AI companion is probably less harmful than isolating yourself with a whiskey bottle.

Anyway, I'm not arguing in favor of totally unregulated AI companion apps flooding the market. I agree that optimizing LLMs to be as addictive as possible when imitating a lover sounds like a bad idea. But my model is that the kind of people who would fall in love with chatbots are the same kind of people who would fall in love with plain GPT prompted to act like a lover. I'm not sure how much additional damage we will get from dedicated apps... especially considering that plain GPT is free but AI companion apps typically require a subscription (and even our IQ 90 people should be able to recognize as "not a normal relationship" something that gets abruptly interrupted if you don't pay $20/month).

comment by dr_s · 2023-08-20T16:37:59.184Z · LW(p) · GW(p)

A few points:

  1. This feels like part of a larger question along the lines of "is wireheading okay?". On one hand, with increased technology, the probability of being able to subject ourselves to a bubble of hyper-stimuli that tickle our natural fancies far more than the real thing approaches one. On the other, since perhaps the most important of those fancies is interaction with other human beings, something which is also by its nature imperfect in its real state, this is essentially one and the same with the disintegration of society into a bunch of perfectly insular individuals confined within a hedonistic self-tailored reality. This can then result in only two outcomes: either the collapse of civilization, if the AI isn't able to run the world by itself, or the essential replacement and permanent disempowerment of humans by AI, if it is. Either way, our story as a species ends there, though we may keep having plenty of fun;

  2. Framing the question directly and specifically in terms of baby-making is pretty much guaranteed to lead to violent political polarization. Demographic worries are typically right-wing and Christian-coded, even though it is absolutely true that, even if you think the scenario above is okay, you first need a way to replace humans with robots for all productive functions if you want to wrap yourself in a hedonistic bubble without that bubble being burst by catastrophic civilisational collapse. More generally, I think we should address the issue in a much broader scope: it's not just about making babies, it's about human connection. Less connection leads to less empathy, which leads to less ability to cooperate. This matters absolutely even from a left-wing perspective! Insular humans are terrible at coordination and are easily manipulated by large systems. In a world in which even the most traditionally intimate of relationships can be artificial and essentially a business product, people are deprived of any real agency;

  3. No discussion of this topic can ignore the fact that this is not, right now, in the real world, a choice between individual freedom and enjoyment at society's expense on the one hand, and individual sacrifice for society's sake on the other. A free market of profit-motivated companies is a misaligned system; it will usually try to provide you with enjoyment, as long as this means that you pay for it; but if it discovers that it can sell you misery and make you pay even more to stay miserable, it'll just do that. It's not optimizing for your happiness: that's an instrumental goal, not a terminal one. So the relationship here is fundamentally adversarial. See social media, gacha games; the examples are everywhere. That's exactly what an artificial AI girlfriend would be in the current economic system. Not a genuine fulfillment of needs you can't fulfill otherwise, replacing the real thing with an even more pleasurable hyper-stimulus and leaving you with a complex responsibility-vs-personal-enjoyment dilemma, but a Skinner box carefully designed to keep you miserable, make you love your own misery, and pay for it: the sum total of human engineering and psychology aimed specifically at fucking with your reward centers to squeeze money out of you. Digital crack. So, honestly, there's really no dilemma here; the concept itself is abusive trash and it should be curbed before it develops further. Just for comparison, consider what you would think of a woman who studies seduction techniques specifically to target and hook a man on the internet and get him to wire her money perpetually while never meeting him; then imagine a board of CEOs in suits and ties doing that on purpose, to tens of thousands of people, while laughing all the way to the bank with their profits, and consider what you should think of them.

comment by [deleted] · 2023-08-10T13:38:02.001Z · LW(p) · GW(p)

Hi Roman.  Pretty exciting conversation thread and I had a few questions about your specific assumptions here.  

In your world model, obviously, young men today have many distractions from their duties that did not exist in the past [social media, video games, television, movies, anime, porn, complex and demanding schooling, ...].

And today, current AI models are not engrossing enough, for most people, to outcompete the items on the list above. You're projecting:

within the next 2-3 years

human-level emotional intelligence and the skill of directing the dialogue

What bothered me about this is that this would be an extremely fragile illusion. Without expanding/replacing the underlying LLM architecture to include critical elements like [online learning, memory, freeform practice on humans to develop emotional intelligence (not just fine-tuning but actual interactive practice), and all the wiring between the LLM and the state of the digital avatar you need], it probably will not be convincing for very long. You need general emotional intelligence, and that's a harder ask, closer to actual AGI. (In fact, you could cause an AGI singularity of exponential growth without ever solving emotional intelligence, as it is not needed to control robots or solve tasks with empirically measurable objectives.)

For most people it's a fun toy, then the illusion breaks, and they are back to all the distractions above.

But let's take your idea seriously. Boom, human-level emotional intelligence in 2-3 years, 100% probability. The consequence is that the majority of young men (and women, and everyone else) are distracted away from all their other activities to spend every leisure moment talking to their virtual partner.

If that's the case, umm, I gotta ask: the breakthroughs in ML architecture and compute efficiency you would have made to get there would probably make a lot of other tasks easier.

Such as automating a lot of rote office tasks.  And robotics would probably be much easier as well.  

Essentially, you would be able to automate a large percentage, somewhere between 50 and 90 percent, of all the jobs currently on Earth, if you had the underlying ML breakthroughs that also give general human-level emotional intelligence.

Combining your assumptions together: what does it matter if the population declines?  You would have a large surplus of people.  

 

I had another thought. Suppose your goal, as a policymaker, is to stem the population decline. What should you do? Well, it seems to me that a policy that accomplishes your goal directly is more likely to work than one that works indirectly; you are more likely to make any progress at all. (Recall how government policies frequently have unintended consequences, e.g. the cobra effect in colonial India.)

So, why not pay young women surrogate-mother rates for children (about $100k)? And allow the state to take primary custody so that young women can finish school, start an early career, or have more children. (The father would be whoever the woman chooses and would not owe child support, conditional on the woman giving up primary custody.)

I think before you can justify your indirect, conditional-probability-dependent policy, you would need to show why the direct and obvious policy is a bad idea. What is bad about paying women directly?

Epistemic status: I have no emotional stake in the idea of paying women, and this is not a political discussion; I simply took 5 minutes and wondered what you could do about declining population levels. I am completely open to any explanation of why this is not a good idea.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-10T17:56:11.322Z · LW(p) · GW(p)

Maybe we understand different things by "emotional intelligence". To me, this is just the ability to correctly infer the emotional state of the interlocutor, based on the context, text messages that they send, and the pauses between messages.

I don't think this requires any breakthroughs in AI. GPT-4 is basically able to do this already, if we set aside the task of "baseline adjustment" (different people have different conversational styles: some are cheerful and use smile emojis profusely, others use smileys only to mark strong emotions) and the task of intelligently summarising the context of the dialogue. These are exactly the types of tasks I expect AI romance tech to iron out in the next few years.

Detecting emotions from video of a human face or a recording of their speech is in some ways even simpler; there are apparently already simple supervised ML systems that do this. But I don't expect AI partners to hold video dialogues with users yet, because I don't think video generation will become fast enough for realtime video soon. So I don't assume that the AI will receive a stream of the user's video and audio, either.
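To make this concrete, here is a minimal sketch of the kind of "emotional state inference" I have in mind, built on an off-the-shelf chat-completion API. The prompt wording, the example messages, and the baseline note are my own illustrative assumptions, not a description of any existing AI partner product:

```python
# Minimal illustrative sketch: inferring a user's emotional state from recent
# chat messages and the pauses between them, using an LLM. All names and
# prompt text here are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

recent_messages = [
    {"sender": "user", "text": "sorry, long day...", "seconds_since_previous": 540},
    {"sender": "user", "text": "it's fine. whatever.", "seconds_since_previous": 12},
]

prompt = (
    "Given the user's recent messages and the pauses between them, infer their "
    "likely emotional state and how it differs from their usual conversational "
    "baseline (this user is normally cheerful and uses emojis often). "
    "Reply with one short sentence.\n\n"
    f"Messages: {recent_messages}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```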

So, why not pay young women surrogate mother rates for children (about 100k)?  And allow the state to take primary custody so that young women can finish school/start early career/have more children.  (the father would be whoever the woman chooses and would not owe child support conditional on the woman giving up primary custody)

In general, paying people for parenting (I would emphasise this over pure "childbirth"), i.e., treating parenting as a "job", seems to me a reasonable idea, and perhaps it will soon be inevitable in developed countries with plummeting fertility rates and increasing efficiency of labour (the latter due to AI and automation).

The caveat is that the policy you proposed would cost the government a lot of money initially, while the policy I proposed costs nothing.

comment by avturchin · 2023-08-01T12:55:22.391Z · LW(p) · GW(p)

It could be generalised as follows: a perfectly aligned (to personal desires) AI will be perfect wireheading.

comment by Stephen McAleese (stephen-mcaleese) · 2023-09-06T12:58:05.654Z · LW(p) · GW(p)

Thanks for the post. It's great that people are discussing some of the less-frequently discussed potential impacts of AI.

I think a good example to bring up here is video games, which seem to carry similar risks.

When you think about it, video games seem just as compelling as AI romantic partners. Many video games, such as Call of Duty, Civilization, or League of Legends, involve achieving virtual goals, leveling up, and improving skills in a way that's often more fulfilling than real life. Realistic 3D video games have been widespread since the 2000s, but I don't think they have negatively impacted society all that much, though some articles claim that video games are having a significant negative effect on young men.

Personally, I spent quite a lot of time playing video games during my childhood and teenage years, but I mostly stopped once I went to college. Why replace an easy and fun way to achieve things with reality, which is usually less rewarding and more frustrating? My answer is that achievements in reality are usually much more real, persistent, and valuable than achievements in video games. You can achieve a lot in video games, but it's unlikely that you'll achieve goals that raise your status with as many people, over as long a period of time, as you can in real life.

A relevant quote from the article I linked above:

"After a while I realized that becoming master of a fake world was not worth the dozens of hours a month it was costing me, and with profound regret I stashed my floppy disk of “Civilization” in a box and pushed it deep into my closet. I hope I never get addicted to anything like “Civilization” again."

Similarly, AI romantic partners could be competitive with real relationships in the short term, but I doubt it will be possible to have AI relationships that are as fulfilling and realistic as a marriage that lasts several decades.

And as with video games, status will probably favour real relationships, causing people to value them more than virtual ones. One possible reason is that status depends on scarcity: just as being a real billionaire offers much more status than being a virtual one, having a real, high-quality romantic partner will probably yield much more status than a virtual one, and as a result, people will be motivated to have real partners.

comment by Raemon · 2023-08-10T00:17:02.538Z · LW(p) · GW(p)

I don't actually know what to do here. But I do honestly think this is likely to be representative of a pretty big problem. I don't know that any particular regulation is a good solution. 

Just to go over the obvious points:

  • I definitely expect AI partners to at least somewhat reduce the overall fertility of biological humans, and to create "good enough but kinda hollow" relationships that people err towards on the margin because they're locally convenient, even among people I wouldn't normally consider "suffering-ly single"
  • Yes, there are real suffering people who are lonely and frustrated, whose lives would be definitively better with AI romantic partners
  • Regulations are a blunt, dumb instrument, and I think there are a lot of good reasons to be skeptical of paternalistic laws (e.g. Prohibition and the War on Drugs seem like failures that made things worse)
  • I think it'll be really hard to regulate this in any kind of meaningful way. This is much harder to regulate than drugs or alcohol.
  • My guess is that porn and romance novels/movies have a mixture of effects that include "making sexually frustrated young men less likely to rape or be pushy about sex", but also setting unrealistic expectations that subtly warp relationships.
  • The era of "human fertility" is really just a blip here – within another couple decades (at most 50 years) after most after AI partners become threatening, I'd bet moderately heavily on us having uploads, and full fledged artificial life being pretty common, and pretty quickly dominating the calculus of what's at stake. (there might or might not still be biological humans, I'd probably bet against biological humans lasting more than 200 years, but they'll be a small fraction of sentient life that morally matters)
  • I think complex relationship games are a "rich" source of fun [LW · GW], and while some of the major suffering of breakups or loneliness might be worth changing in the posthuman future, I think the incentive gradient of "fix problems with technology" naturally veers towards Spiegelman's monsters that are simple to the point of, IMO, not being that morally valuable [LW · GW].

With all that in mind, what is the right thing to do here? Hell, I do not know. But rounding this off to whatever your preferred simple ideological handle is, is going to massively fail at protecting the future. Figuring out the answer here involves actual math and weighing tradeoffs against each other for real.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-10T18:15:08.541Z · LW(p) · GW(p)

I think it'll be really hard to regulate this in any kind of meaningful way. This is much harder to regulate than drugs or alcohol.

Why do you think so? Even if open-source LLMs and other necessary AI models develop so quickly that real-time interaction and image generation become possible on desktops, and open-source waifu projects appear, mobile is still controlled by app stores (Apple's App Store and Google Play), where age-based restrictions could easily be imposed. Jailbreaking/unlocking the phone (or installing an APK from GitHub), connecting it to a local server with AI models, and maintaining that server is so burdensome that I expect 90+% of potential users of easy-to-access AI partner apps would fall off.

This is not to mention that open-source projects developed by hardcore enthusiasts will probably cater to their own specific edgy preferences and not appeal to a wide audience. E.g., an open-source anime-waifu project may optimise for triggering particular fetishes rather than for the overall believability of the AI partner and the long-term appeal of the "relationship". Open-source developers won't be motivated to optimise the latter, unlike AI partner startups, whose lifetime customer value would directly depend on it.

comment by AnthonyC · 2023-08-03T15:21:29.653Z · LW(p) · GW(p)

Other than fertility rate, what other harms are there, and to whom, such that it's any of society's business at all? Are you thinking of it as being like addiction, with people choosing something they (initially) think is good for them but isn't?

First, though, I don't think the scenario you're proposing is anywhere near as bad for fertility as suggested. Plenty of real-world people and partners incapable of having biological children together have a sufficiently strong desire to do so that they go to great lengths to make it happen. Plenty of others want to do so but limit themselves for non-biological reasons, often financial. Society could do a lot more to facilitate them getting what they want, but doesn't. And I imagine a lot of people would still want kids with an AI partner as well.

Second, any impact on fertility is one we would see coming many years in advance, and it would necessarily unfold over generations. It being a genuine problem would require that no intervention succeed in turning it around, and no further AI advances lead to AGI sufficient for AIs to "count" as population for the relevant purposes that make population growth desirable. It would also require there not be any substantial subsets of the global population that consistently choose biological partners or larger families.

Third, I think there are a lot of other downstream impacts of a world with sufficiently-good-for-this-to-be-an-issue AI romantic partners that make this less of a concern for society. For just one example: AI --> acceleration of research --> anti-aging research or other means of extending the human lifespan. If individual humans 50 years from now are suddenly given a 200- or 500-year healthy lifespan (in which case there will most likely be further increases in that time, leading to effective amortality), then 1) we have a whole lot longer before any population-decline issues crop up, and 2) it becomes a whole lot easier for people to decide to make a multi-decade commitment to raising kids even if they might not otherwise do so in an 80-year lifetime, or to have kids later in life by themselves when they're more financially established, or to have more kids later on after their first kids are grown.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-03T16:56:44.257Z · LW(p) · GW(p)

Other than fertility rate, what other harms are there, and to whom, such that it's any of society's business at all? Are you thinking of it as being like addiction, with people choosing something they (initially) think is good for them but isn't?

Yes, I discussed in this comment [LW(p) · GW(p)] how people could perceive settling for AI partners as a less wholesome life because they didn't pay their duty to society (regardless of whether this actually matters from a theoretical-ethics point of view: if people have a deeply held, culturally embedded idea about this in their heads, they could be sad or unsettled for real). I don't venture to estimate how prevalent this will be, and therefore how it will weigh against the net good for the personal satisfaction of people who would have no issue whatsoever with settling for AI partners.

Kaj Sotala suggested [LW(p) · GW(p)] in this comment that this "duty to society" could be satisfied through platonic co-parenting. I think this is definitely interesting, could work for some people, and is laudable when people do it, but I have doubts about how widespread this practice could become. It might be that parenting and romantic involvement with the co-parent are tied too strongly to each other in many people's minds.

First, though, I don't think the scenario you're proposing is anywhere near as bad for fertility as suggested. [...] And I imagine a lot of people would still want kids with an AI partner as well.

This is the same type of statement that many other people have made here ("people won't be that addicted to this", "people will still seek human partners even while using this thing", etc.), to all of which I reply: it should be AI romance startups' responsibility to demonstrate that the negative effect will be small, not my responsibility to prove that the effect will be huge (which I obviously couldn't do). Currently, it's all opinion versus opinion.

At least the maximum conceivable potential is huge: AI romance startups obviously would like nearly everyone to use their products (just as nearly everyone watches porn today, and soon nearly everyone will use general-purpose chatbots like ChatGPT). If AI partners become so attractive that about 20% of men fall for them so hard that they don't want to date any women for the rest of their lives, we are talking about a 7-10% drop in fertility: less than 20% because not all of these men would counterfactually have had kids anyway, and also because "spare" women could decide to have kids alone, some will platonically co-parent, etc. (see the rough calculation sketched below).
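As a back-of-the-envelope illustration of that range, here is a minimal sketch; every number in it is an assumption I picked for illustration, not data:

```python
# Back-of-the-envelope sketch of the fertility-drop claim above.
# Every number here is an illustrative assumption, not data.
men_opting_out = 0.20        # share of men who stop dating humans entirely
would_have_had_kids = 0.60   # share of those men who would otherwise have become fathers
offset_share = 0.25          # share of "lost" births offset by solo mothers, co-parenting, etc.

fertility_drop = men_opting_out * would_have_had_kids * (1 - offset_share)
print(f"Estimated fertility drop: {fertility_drop:.0%}")  # 9%, within the 7-10% range
```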

Plenty of real-world people and partners incapable of having biological children together have a sufficiently strong desire to do so that they go to great lengths to make it happen. Plenty of others want to do so but limit themselves for non-biological reasons, often financial. Society could do a lot more to facilitate them getting what they want, but doesn't.

I agree with all this, and this is all extremely sad. But it seems irrelevant to the question of AI partners: the existence of other problems that depress the fertility rate doesn't mean that we shouldn't deal with this upcoming one. Moreover, while problems like financial inequality (and precarity), declining biological fertility of men and women due to stress and environmental pollution, etc., are big systemic problems that are very hard and very expensive to fix, it's currently relatively cheap to prevent a further potential fertility drop resulting from widespread adoption of AI partners by under-30s: just pass a regulation in major countries!

It being a genuine problem would require that no intervention succeed in turning it around [...] It would also require there not be any substantial subsets of the global population that consistently choose biological partners or larger families. 

In my ethics, more conscious observers existing today matters; it doesn't make a difference from a normative perspective that "later in the future" the population will recover. Also, following this logic, saving people from death (e.g., from malaria) today would make very limited sense, because all it prevents is a transient few days of suffering for a person. But really, I think EAs value more the "rescued years of mostly happy experience later".

Similarly, the fact that somewhere else in the world some people procreate a lot doesn't somehow make a shrinking population in other parts of the world "less bad".

It being a genuine problem would require that [...] no further AI advances lead to AGI sufficient for AIs to "count" as population for the relevant purposes that make population growth desirable. [...]

Third, I think there's a lot of other downstream impacts of a world with sufficiently-good-for-this-to-be-an-issue AI romantic partners that make this less of a concern for society. [...]

I also think this cluster of arguments is not applicable. Following this logic, pretty much nothing matters if we expect the world to become unrecognisably weird soon. I also expect this to happen, with very high probability, but when discussing the societal impacts of AI, global health and poverty, environmental destruction, and other systemic issues, we have to take a sort of deontological stance and imagine that this weirdness won't happen. If it happens, all bets are off anyway. But in discussing current "mundane" global issues, we should condition on that weirdness not happening for some reason (which is not totally implausible: a global ban on AGI development could still happen, for instance, and I would even support it).

Besides, as I noted at the beginning of my post, I think even today's AI capabilities (LLMs like GPT-4, text-to-speech, text-to-image, etc.) could already be used to make an extremely compelling AI partner, much more attractive than today's offerings. It's still very early days for these products, but in a few years they will catch up.

Replies from: AnthonyC
comment by AnthonyC · 2023-08-10T19:09:55.380Z · LW(p) · GW(p)

I agree with all this, and this is all extremely sad. But it seems irrelevant to the question of AI partners: the existence of other problems that depress the fertility rate doesn't mean that we shouldn't deal with this upcoming one. Moreover, while problems like financial inequality (and precarity), declining biological fertility of men and women due to stress and environmental pollution, etc., are big systemic problems that are very hard and very expensive to fix, it's currently relatively cheap to prevent a further potential fertility drop resulting from widespread adoption of AI partners by under-30s: just pass a regulation in major countries!

I don't think I agree. That might be cheap financially, yes. But unless there's a strong argument that AI partners cause harm to the humans using them, I don't think society has a sufficiently compelling reason to justify a ban. In particular, I don't think (and I assume most agree?) that it's a good idea to coerce people into having children they don't want, so the relevant question for me is: can everyone who wants children have the number of children they want? And relatedly, will AI partners cause more people who want children to become unable to have them? From which it follows that the societal intervention should be: how do we help ensure that those who want children can have them? Maybe trying to address that still leads to consistently below-replacement fertility, in which case, sure, we should consider other paths. But we're not actually doing that.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-11T13:05:08.574Z · LW(p) · GW(p)

I think an adequate social and tech policy for the 21st century should

  1. Recognise that needs/wants/desires/beliefs and new social constructs can be manufactured, and discuss this phenomenon explicitly; and
  2. Deal with this social engineering consistently, either by really going out of its way to protect people's agency and self-determination (today, people's wants, needs, beliefs, and personalities are sculpted by different actors from the moment they are toddlers and start watching videos on iPads, and this influence only strengthens afterwards), or by allowing a "free market of influences" while also participating in it, subsidising the projects that will benefit society itself.

The USA seems to be much closer to the latter option, but when people discuss policy in the US, it's conventional not to acknowledge (see The Elephant in The Brain [LW · GW]) the real social engineering that is already carried out by both state and non-state actors (from the Pledge of Allegiance to church to Instagram to Coca-Cola), and to presume that social engineering done by the state itself is taboo, or at least a tool of last resort.

This presumption just doesn't match what is already happening: apart from the Pledge of Allegiance, there are many other ways in which the state (and other state-adjacent institutions and structures) is or was proactive in manufacturing people's beliefs or wants in a certain way, or in preventing people's beliefs or wants from being manufactured in a certain way: the Red Scare, various forms of official and unofficial (yet institutionalised) censorship, and the regulation of nicotine marketing are a few examples that come to mind first.

Now, treating personal relationships as a "sacred libertarian domain" and preventing the state from exerting any influence on how people's wants and needs around personal relationships are formed (even through the recommended school curriculum, albeit a very ineffective approach to social engineering), while allowing corporate actors (such as AI partner startups and online dating platforms) to shape these needs and even rewire society in whatever way they please, is just an inconsistent and self-defeating strategy for society, and therefore for the state, too.

The state should realise that its strength rests not only on overt patriotism/nationalism, military/law-enforcement "national security", and the economy, but also on the health and strength of the society.

P.S. All the above doesn't mean that I really prefer the "second option". The first option, that is, protecting human agency, seems much more beautiful and "truly liberal" to me. However, this vision is completely incompatible with capitalism in its present form (to start, it probably means that ads should be banned completely, the entire educational system changed, and the need to labour to earn a living resolved through AI and automation), so it doesn't make much practical sense to discuss this option here.

comment by Wbrom42@gmail.com · 2023-08-01T10:10:45.138Z · LW(p) · GW(p)

About 80% of this could be (and was) said about the development of the printing press. Smart people trying to ban new technology because it might be bad for the proles is a deeply held tradition.

Replies from: Roman Leventov, martin-randall
comment by Roman Leventov · 2023-08-01T12:44:40.822Z · LW(p) · GW(p)

Sorry, but this is empty rhetoric and a failure to engage with the post on the object level. How did the printing press jeopardise the reproduction of people in society? And not indirectly, through a long causal chain (such as, without the printing press, we wouldn't have the computer, without the computer we wouldn't have the internet, without the internet we wouldn't have online dating and porn and AI partners, etc.), but directly? Whereas AI partners will reduce the total fertility rate directly.

The implication of your comment is that technology can't be "bad" (for people, society, or any other subject), which is absurd. Technology is not ethics-neutral and can be "bad".

comment by Martin Randall (martin-randall) · 2023-08-01T15:26:36.081Z · LW(p) · GW(p)

Are you referring to the concerns of Conrad Gessner? From Why Did First Printed Books Scare Ancient Scholars In Europe?:

Gessner’s argument against the printing press was that ordinary people could not handle so much knowledge. Gessner demanded those in power in European countries should enforce a law that regulated sales and distribution of books.

If so, I don't understand the parallel you are trying to draw. Prior to the printing press, elites had access to hundreds of books, and the average person had access to none. Whereas prior to AI romantic partners, elites and "proles" both have access to human romantic partners at similar levels. Also, I don't think Gessner was arguing that the book surplus would reduce the human relationship participation rate and thus the fertility rate. If you're referring to other "smart people" of the time, who are they?

Perhaps a better analogy would be with romance novels? I understand that concerns about romance novels impacting romantic relationships arose during the 18th and 19th centuries, much later.

Aside: I was unable to find a readable copy of Conrad Gessner's argument - apparently from the preface of the Bibliotheca Universalis - so I am basing my understanding of his argument on various other sources.

comment by Tapatakt · 2023-08-01T17:39:49.937Z · LW(p) · GW(p)

Epistemic status: I'm not sure whether these assumptions are really here; I am pretty sure what my current opinion about these assumptions is, but admit that this opinion can change.

Assumptions I don't buy:

  • Having kids when we could have AGI in 10-25 years is good and not, actually, very evil (OMG what are you doing).
  • The right social incentives can't make A LOT of people poly pretty fast.
Replies from: Roman Leventov
comment by Roman Leventov · 2023-08-01T18:06:28.560Z · LW(p) · GW(p)

Is this comment asking whether these assumptions are present in my post?

Have kids when we can have AGI in 10-25 years is good and not, actually, very evil OMG what are you doing.

You don't buy that having kids [when we can have AGI soon...] is good, right? OK, I disagree with that strongly, population-wise. Are you implying that the whole planet should stop having kids because we are approaching AGI? That seems like a surefire way to wreck civilisation even if the AI alignment problem turns out to be simpler than we think, or is miraculously solved. Will MacAskill also argues against this position in "What We Owe The Future".

Specifically for people who work directly in AI safety (or have the intellectual capacity to meaningfully contribute to AI safety and other urgent x-risk priorities, and consider doing so), this is a less clear-cut case, I agree. This is one of the reasons why I'm personally unsure whether I should have kids.

Right social incentives can't make A LOT of people poly pretty fast.

There was no such assumption. The young man in my "mainline scenario" doesn't have this choice, alas: he has no romantic relationships with humans at all. Also, I'm afraid that AI partners will soon become sufficiently compelling that, after prolonged relationships with them, it will be hard for people to summon the motivation to court average-looking (at best), shallow, and dull humans who don't seem to be interested in the relationship themselves.