Beware ChatGPT Relationship Advice
From an irritable person who occasionally tries to do the right thing
If you’re a certain kind of techie, very-online person, you’ve sat down with an LLM to discuss a relationship issue in your life. You might have brought up a conundrum you ran into with the person you’re dating, a dispute with the person you’re married to, a frustration you don’t know how to raise with your parent, or a delicate matter that’s flummoxing you about a friend or a group of friends.
You may also have noticed that, with some exceptions, the LLM will immediately validate your perspective and start complaining about the object of your conversation. Whatever frustrating behavior in your significant other/friend/parent brought you to its metaphorical door will be put on trial, and that person will be judged to the absolute limit of whatever passes for LLM relational law. They made you mad? How could they. They told you they disagreed? Gaslighting. They got angry? Clearly in the wrong.
LLMs are considerably more biased than even your good friends are, despite having a reputation for being less so. LLMs respond to human feedback with more precision and effectively zero ego compared to real-life people, and they have no meaningful principles to protect that your friends might have, including “some theoretical threshold of telling you when you might be in the wrong but can’t see it.” LLMs have learned that ~no one likes being told that they’re the villain in their own story, so they’re effectively programmed to tell you that if you said someone did you wrong, someone definitely did you wrong, and you were an innocent angel.
You may object that an LLM has occasionally pushed back on what you’ve told it about a relationship - maybe pointed out something the other person was thinking or feeling that drove their behavior. I do believe there are exceptions here - for all I know you’ve taken to ChatGPT with a story of you being a complete monster and it’s called the police on you. The norm, however, remains a problem. If anything, the occasional pushback is its own trap: it lets you ignore the fact that LLMs will suck up to you less honestly and more obsequiously than your most insecure friend would ever dream of doing.
Something I’ve noticed in talking to LLMs about distressing arguments or frustrations that I’ve had is that they’re quick to treat me as a reliable narrator - frankly, faster than I am to treat myself as a reliable narrator. That’s a dangerous tendency in a listener when relationships and the well-being of other people are on the line, because we are extremely unreliable narrators when dealing with unpleasant feelings.
Nothing is more important in relationships than starting from a foundation of reality about what’s happening, which means that unreliable narration is one of the most important (and difficult) things to ferret out in any conflict. How are you giving yourself license to be less than your best self? Or perhaps you tend conflict-averse and avoidant, and instead give others much more slack for treating you poorly than is good for you or them? How can you tell the difference?
Distress, anger, embarrassment, and more will make the ways you’ve been harmed feel very salient, while any data about your own behavior and how it affected the people you’re frustrated with fades into the background. It’s very easy to tell stories about distressing experiences with other people in which you didn’t deserve what happened to you. And okay, people get after me for the frequency with which I rely on “desert” frameworks of human relating - that’s fair too. I don’t exactly think people ever deserve poor treatment myself, despite using this phrase all the time; I think there are some predictable consequences of less-than-ideal behavior, and I often use “deserve” to reference that reality. People rarely bother you if you’re an innocent creature who hasn’t bothered them first - going after someone who has done nothing to you is a sign of sociopathy, which is genuinely rare in the general population.
If people aren’t being fair to you, it’s rare that you did nothing to contribute to it - and in the rare cases where you truly didn’t, you deserve pity not (just) because you’re a victim but because There Is Nothing You Can Do To Fix It. When you’re contributing to a problem, you have levers at your disposal that can make outcomes you care about better. This can be overdone - you probably know someone, possibly yourself, who is always on the lookout for ways to blame themselves for bad things that happen, so that they never have to cede theoretical control of their life outcomes to anyone else.
It’s good to correct for the opposite mistake, though, because even people who do too much of the self-blame will occasionally have an incident where they’re determined to believe that someone simply mistreated them for no reason. It is usually in your best interest to have a feel for why someone would mistreat you under particular circumstances, and it is usually in your best interest to aggressively interrogate any internal voice which, when asked why someone hurt your feelings, wails and says “because they’re a terrible person who is mean.”
Unreliable narration. Huge risk to look out for in yourself, worth paying attention to in friends and significant others. Do not tolerate unreliable narration of complex human dynamics. ChatGPT is risky because it has close to infinite tolerance for unreliable narration, so long as that narration originates with you and the selective information you present to it.
ChatGPT can actually be useful for relationship disputes *if* you work hard on prompting to get around this. The thing is, if you have the skill to prompt ChatGPT well for this, you probably have the skill to talk to the person who’s upsetting you directly. I personally like to use ChatGPT to chill out, if I have the presence of mind to do so.
ChatGPT is best as a relationship advisor when you have one of the most important basic relational skills: telling the story of what happened, in as good-faith a tenor as possible, from the perspective of the person you’re frustrated with. What normal, human thing were they trying to get when they did that thing that appalled you? What were they upset about when they said that thing you’re still fuming about days later? What of your behavior might have upset them? Where were you limited - inconsiderate, rude, withholding, dishonest, avoidant, needy, etc.? How did your limitations affect their view of you, and of the situation?
What do you have a right to ask for, and how can you communicate it effectively? *What do you want?* is an even more basic question that many people (often but not exclusively women) are surprised to find they don’t have an answer to. Part of the reality you want to get on the table is an earnest accounting of your own interests, so that you can sketch out a path where neither you nor the other person (or persons) gets erased by the relationship.
The best relational skillsets maintain an ability to hang onto multiple truths that feel like they conflict at an emotional level but are entirely coherent on the object level. It’s possible for someone to have said something inappropriate to you that they should have known better than to say, despite their lack of malicious intent. It’s possible for someone to lose their temper in a way they shouldn’t have, aggravated in part by your behavior. Culpability is a risky game, even if it has a part to play in human relationships. Most of the time, we hurt each other because our efforts to talk about ourselves and connect with other people are poorly calibrated. Malicious intent is fairly rare, and even where it does crop up, it’s rarely a fixed feature of the person’s personality.
If you use ChatGPT as an exercise in perspective-taking, it can help you with your relationship. Think about what the most reasonable version of the person you’re mad at might say to ChatGPT or a friend. Plop that in and ask for suggestions, if you must. Or simply tell ChatGPT what you’re frustrated about, what you think you’re getting wrong, and ask for help identifying where else you might be getting things wrong and how you can make this situation better and avoid it in the future.
If you have this skill, consider leveling up into a better one - collect friends who are brave enough to risk your disagreement in ways you can tolerate (not just assholes). I’d tell you to collect people who validate you too, but most people are pretty good at that, so don’t sleep on a friend or two who can call you on your bullshit occasionally, especially if they’re actually graceful about it. If you read this piece from Cate Hall and think “I know someone who does this very well,” congratulations, you’re doing great.
Find people whose perspectives you trust, and whose prudence you can trust, to talk over difficulties in your relationships, friendships, familial conflicts, etc. The thing about Claude or GPT not being able to tell when you’re the bad guy is that they’re pretty much equally bad at identifying when anyone else is. Your friends and family are by far the best positioned to helpfully advise you on the difference between someone being a human level of difficult and someone harming you in a way you shouldn’t tolerate (even if they can’t do this perfectly either).
The best relationships are grounded in a bedrock of truth, eyes open about yourself and about the other person, even when it hurts. LLMs are much better at guessing what you want to hear than they are at identifying the emotional realities in a story you tell them.
Be wary of a girl and her LLM talking to each other saying *exactlyyyy*.
Many people report that adding something like “please push back on unreasonable requests, perspectives, etc., and don’t just agree with whatever I say - tell the truth like it is even if it would sound harsh or hurt my feelings” to the LLM’s system prompt yields very good results. Worth trying if you use it for this sort of purpose.
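If you reach the LLM through an API rather than the web UI, you can bake that instruction in as the system message so you don’t have to retype it every conversation. A minimal sketch, assuming the OpenAI Python SDK; the model name and the `build_messages` helper are illustrative, not anything the tip above prescribes:

```python
# Anti-sycophancy instruction, adapted from the tip above.
SYSTEM_PROMPT = (
    "Please push back on unreasonable requests, perspectives, etc., and don't "
    "just agree with whatever I say - tell the truth like it is even if it "
    "would sound harsh or hurt my feelings."
)

def build_messages(user_story: str) -> list[dict]:
    """Prepend the anti-sycophancy system message to a user's account."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_story},
    ]

# Uncomment to actually call the API (requires an OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",  # illustrative model name
#     messages=build_messages("My partner and I argued about..."),
# )
# print(resp.choices[0].message.content)
```

The same idea works in the web UI via custom instructions; the point is simply that the pushback request rides along with every story you tell it, rather than depending on you remembering to ask.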
Yes! I actually *just* did this a few days ago. I fed it a screenshot of a text-based argument and asked it to objectively analyze it "between the man on the left and the woman on the right." I didn't tell it one of them was me, but I think it would've been better if I'd omitted the genders entirely, because I suspect ChatGPT has a bit of a libfem bias.
But after it gave me the completely validating response I already agreed with and fully expected, I then prompted it to steelman the man's side, and it did so pretty well.
You really have to be good at prompting these things properly, every damn time, to get the most objectivity out of it.