A Tale of Two Intelligences: xRisk, AI, and My Relationship
post by xRiskAnon923 · 2023-04-10T23:35:45.375Z · LW · GW · 3 comments
This is a question post.
[Throwaway account for obvious reasons]
As a longtime veteran of LW, I'm no stranger to grappling with complex and uncomfortable ideas. I've always taken pride in my ability to engage in rational discourse and seek truth, even when it's inconvenient or challenging. That's why I find myself in a particularly distressing situation involving my partner, who is herself a highly intelligent individual, holding an advanced technical degree from a top university.
Recently, I've become increasingly concerned about xRisk from AI, a topic I'm sure many of you can relate to. The more I engage with Eliezer's writings and other LW content, the more alarmed I become about the potential dangers of AGI. However, my partner seems completely unbothered by these concerns.
In an attempt to bridge the gap, I sent her some of Eliezer's posts on AI risk, hoping they would make her take the issue more seriously. But her response was dismissive at best. She said that she didn't think the risks were a big deal and that we shouldn't worry about them.
Her arguments, which I found shallow and unpersuasive, revolved around the notions that "the good AIs will just fight the bad AIs" and "anyone intelligent enough to cause trouble would inherently understand the positive sum nature of the world." As many of us here know, these arguments don't account for the myriad complexities and potential catastrophic consequences of AGI development.
The growing chasm between our views on xRisk has left me wondering whether our relationship can survive this divergence. I'm genuinely considering breaking up with her because of our seemingly irreconcilable differences on this crucial issue. It's not that I don't love her; I just don't know if I can be with someone who doesn't share my concern for humanity's long-term future.
So, fellow LWers, I seek your guidance: How can I help her see the importance of AI risk and its potential impact on the world? Is there a better way to approach this conversation? Or should I accept that our views may never align, and move on?
I welcome your thoughts, experiences, and any advice you might have. The stakes feel incredibly high, both for my relationship and for the world as a whole.
Answers
answer by Nisan
Figure out why it's important to you that your romantic partner agree with you on this. Does your relationship require agreement on all factual questions? Are you contemplating any big life changes because of x-risk that she won't be on board with?
answer by Seth Herd
Would you be happy if your partner fully understood your worries but didn't share them? If so, maybe focus on sharing your thoughts, feelings, and uncertainties around x-risk in addition to your reasoning.
I suggest turning your good mind to romantic relationship theory and communication theory long before breaking up.
Note that nobody, not even you, is infinitely rational or intelligent.
Note also that it's rational to make judgment calls about what to think about and what not to. And it's rational to feel different ways about the same conclusions. One person might believe in AI x-risk and decide not to work on it or worry about it, because they think they'll be happier and the rest of the world will be almost exactly the same. It sounds like your partner hasn't thoroughly grappled with the issue, so I'm betting her real epistemic state is uncertainty. So her real judgment is to not worry about it too much.
I'm betting that you've argued about it, and that has created important emotional responses in both of you that have distorted the logic.
When you do that research on communication and relationship theory, I think you'll wind up wanting to have another discussion with her on the topic, but being really, really careful not to let it turn into an argument that distorts both of your rationality. Which is hard when things are important.
answer by Ann
However important and valid your actual concerns,
DO NOT let a state of fear and anxiety that a community/group of people/general zeitgeist has cultivated in you lead you to cut off and discard relationships with people who are outside of said community, who are not hurting you, and who share your values.
Also, do not try to enforce your beliefs by holding someone's important relationships hostage to buy-in to the group.
Read up on the BITE model of authoritarian control, determine the extent to which your considered actions are cultlike, and turn down the dial. Shared fears are not shared values, and prioritizing shared fears over shared values is concerning.
The world will not end because there are smart people you're close to who don't happen to be afraid of the same things you are. Separate out fears and values. You are conflating the fear of a particular xRisk with the value of "concern for humanity's long-term future". Nothing you've said suggests to me that she lacks concern for humanity's long-term future; just that, after perusing a few articles, she doesn't buy into the xRisk that you've had a longer time frame to arrive at. I know shared actual values are important. Fear of AI is not in fact a value. I say this sharing a certain amount of fear myself.
On a larger scale than one relationship:
Don't isolate yourself.
3 comments
Comments sorted by top scores.
comment by Mitchell_Porter · 2023-04-11T04:50:20.426Z · LW(p) · GW(p)
GPTroll-4 says:
As an AI language model, I don't have any experience with loving relationships. However, I can describe several possible strategies for minimizing relationship breakup risk that the author of the post might understand.
- Create a Coherent Extrapolated Relationship (CER): a relationship that both partners would agree that they want, if given much longer to think about it, in more ideal circumstances.
- Perform a pivotal act: do something that saves the relationship now, giving you both time enough to create a CER later.
- Dating with dignity: accept that the relationship is doomed - it will break up or the world will end, whichever comes first - but make the most of it while it lasts.
It is important to note that intellectual incompatibility is not a new problem in relationships. The author of the post could look for advice concerning similar situations, such as interfaith marriages.
It is also important to note that the original post might just be a parody of a rationalist seeking rational relationship advice.
comment by the gears to ascension (lahwran) · 2023-04-11T07:03:29.398Z · LW(p) · GW(p)
oh, another thought - perhaps it's going to be fine like she thinks because people like you are going to react strongly, freak out, and do something about it soberly. she'll end up correct, but only because people like you put yourselves in the role of making a difference. many of my friends also don't want to deal with it directly; my boyfriend is continuing to go to college, and wants to get involved in making what you describe happen, but doesn't want to try to be the whole solution.
comment by the gears to ascension (lahwran) · 2023-04-11T06:06:49.615Z · LW(p) · GW(p)
I pasted this into claude+ on poe: https://poe.com/s/17WJGrjGWob3oIFKFMnw