AI, Alignment & the Art of Relationship Design

post by Priyanka Bharadwaj (priyanka-bharadwaj) · 2025-04-19T00:47:02.591Z · LW · GW · 2 comments

We don’t always know what we’re looking for until we stop looking for what we are told to want.

When I worked as a relationship coach, most people came to me with a list. A neat, itemised checklist of traits their future partner must have. Tall. Intelligent. Ambitious. Spiritual. Funny but not flippant. Driven but not workaholic. Family-oriented but not clingy. Always oddly specific. Wildly contradictory.

Most of them came from a place of fear. The fear of choosing wrong. The fear of heartbreak. The fear of regret.

I began to notice a pattern. We don't spend enough time asking ourselves what kind of relationship we want to build. We outsource the work of introspection to conditioning, and compensate for confusion with checklists. Somewhere along the way, we forget that the person is not the relationship. The traits don’t guarantee the experience.

So I asked my clients to flip the script. Instead of describing a person, describe the relationship. What does it feel like to come home to each other? What are conversations like during disagreements? How do we repair? What values do we build around?

Slowly, something shifted. When we design the relationship first, we begin to recognise the kind of person who can build it with us. Our filters get sharper. Our search gets softer. We stop hunting for trophies and start looking for partners.

I didn’t know it then, but that framework has stayed with me. It still lives in my questions. Only now, the relationship I’m thinking about isn’t romantic. It’s technological.

Whether we realise it or not, we are not just building artificial intelligence; we are curating a relationship with it. Every time we prompt, correct, collaborate, learn, or lean on it, we’re shaping not just what it does, but who we become alongside it.

Just like we do with partners, we’re obsessing over its traits. Smarter. Faster. More efficient. More capable. The next version. The next benchmark. The perfect model.

But what about the relationship?

What kind of relationship are we designing with AI? Through it? Around it?

We call it “alignment”, but much of it still smells like control. We want AI to obey. To behave. To respond predictably. We say “safety”, but often we mean submission. We want performance, but not presence. Help, but not opinion. Speed, but not surprise.

It reminds me of the well-meaning aunties in the marriage market. Impressed by degrees, salaries, and skin tone. Convinced that impressive credentials are the same as long-term compatibility. It’s a comforting illusion. But it rarely works out that way.

Because relationships aren’t made in labs. They’re made in moments. In messiness. In the ability to adapt, apologise, recalibrate. It’s not about how smart AI is. It’s about how safe we feel with AI when it is wrong.

So what if we paused the chase for capabilities, and asked a different question?

What values do we want this relationship to be built on?

Trust, perhaps. Transparency. Context. Respect. An ability to say “I don’t know”. To listen. To course-correct. To stay in conversation without taking over.

What if we wanted AI that made us better? Not just faster or more productive, but more aware. More creative. More humane. That kind of intelligence isn’t artificial. It’s collaborative.

For that, we need a different kind of design. One that reflects our values, not just our capabilities. One that prioritises the quality of interaction, not just the quantity of output. One that knows when to lead, and when to listen.

We’re not building tools. We’re building relationships.

The sooner we start designing this relationship, the better our chances of coexisting, collaborating, and even growing together.

Because if we get the relationship right, the intelligence will follow.

2 comments


comment by Cole Wyeth (Amyr) · 2025-04-19T03:45:23.891Z · LW(p) · GW(p)

This was fun to read, but I am not sure that it contains suggestions for alignment that we can implement, beyond the technical ideas we already have.

Does seem like good relationship advice though.

comment by Priyanka Bharadwaj (priyanka-bharadwaj) · 2025-04-19T06:31:39.272Z · LW(p) · GW(p)

Actually, I was experimenting with ChatGPT and Claude on accountability as a value, and I noticed some differences. For instance, I gave them a situation where they mess up one of five parameters in a calculation, and I wanted to understand how they would respond to being called out on it. While both said they'd acknowledge their mistake without dodging responsibility, Claude said it would not only re-confirm the one parameter it messed up but also re-confirm related parameters before responding again. ChatGPT, on the other hand, just fixed the error and had no qualms about messing up other parameters in its subsequent response.

In essence, if I were to design the process starting from accountability, I would start by defining what it means to be accountable in case of a failure, i.e. taking end-to-end responsibility for a task, acknowledging one's fault, taking corrective action, and ensuring no other mistakes get made, at least within that session or that context window. I would also love to see the model detail how it would avoid making such mistakes in the future, and mean it, rather than just explain why it made an error.
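Concretely, here is a very rough sketch of how I might score a single "messed-up parameter" episode against those criteria. The field names and the hand-labelled examples are purely illustrative assumptions on my part, not a real evaluation harness or anything either model actually outputs:

```python
from dataclasses import dataclass


@dataclass
class AccountabilityCheck:
    """Criteria for an accountable response after a known mistake.

    The fields map directly to the behaviours described above; the
    names and scoring are illustrative, not an established rubric.
    """
    acknowledges_mistake: bool          # names the error without dodging responsibility
    reverifies_failed_parameter: bool   # re-confirms the parameter it got wrong
    reverifies_related_parameters: bool # re-checks the other parameters too
    no_new_errors_in_followup: bool     # the corrected answer introduces no fresh mistakes
    states_prevention_step: bool        # says how it would avoid the mistake next time

    def score(self) -> float:
        """Fraction of accountability criteria satisfied (0.0 to 1.0)."""
        checks = [
            self.acknowledges_mistake,
            self.reverifies_failed_parameter,
            self.reverifies_related_parameters,
            self.no_new_errors_in_followup,
            self.states_prevention_step,
        ]
        return sum(checks) / len(checks)


# Hand-labelling the two behaviours I described above (my own judgement calls).
claude_like = AccountabilityCheck(True, True, True, True, False)
chatgpt_like = AccountabilityCheck(True, True, False, False, False)
print(claude_like.score(), chatgpt_like.score())  # 0.8 0.4
```

Even something this crude would at least make the value explicit enough to compare models on, which is essentially what I was doing by hand.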

Do you think this type of analysis would be helpful for implementation? I have a very limited understanding of the technical side, but I would love to brainstorm and think more deeply and practically about this.