Many people report that adding “please push back on unreasonable requests, perspectives, etc., and don’t just agree with whatever I say—tell the truth like it is even if it would sound harsh or hurt my feelings,” or something similar, to the LLM system prompt yields very good results. Worth trying if you use it for this sort of purpose.
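If you’re driving a model through the API rather than the chat UI, the equivalent move is to put that instruction in the system message. A minimal sketch, assuming the OpenAI Python SDK; the model name and user message are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The "push back" instruction from the comment above, used as a system prompt.
PUSHBACK = (
    "Please push back on unreasonable requests, perspectives, etc., and "
    "don't just agree with whatever I say. Tell the truth like it is even "
    "if it would sound harsh or hurt my feelings."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works here
    messages=[
        {"role": "system", "content": PUSHBACK},
        {"role": "user", "content": "Here's an argument I had with a friend..."},
    ],
)
print(response.choices[0].message.content)
```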
In my experience (all my LLMs have that as a background prompt), this yields 3-5 responses of mild, poorly directed contrarianism before collapsing back into validation. Telling the story as if you’re a neutral third party, or pretending to be the other person, works better, but only if you actually make the other person sound sane, reasonable, and well intentioned.
Yes! I actually *just* did this a few days ago. I fed it a screenshot of a text-based argument and asked it to objectively analyze the exchange "between the man on the left and woman on the right." I didn't tell it one of them was me, but I think the result would've been better if I'd omitted the genders entirely, because I suspect ChatGPT has a bit of a libfem bias.
But after it gave me the completely validating response I expected and already agreed with, I prompted it to steelman the man's side, and it did so pretty well.
You really have to be good at prompting these things properly, every damn time, to get the most objectivity out of it.
Yes, I’ve found that it gives me tons of credit for being “the right one” no matter the situation, unless I successfully obscure which person in the story is me.
This seems very 'stone soup' or, as I first heard the story from a Danny Kaye record, 'nail broth.'
ChatGPT actually helped me write a very good apology to someone, including elements of the argument that I had missed and that might have hurt said person.
I don’t really talk about my personal disputes with AI, but I do like to use it to map out my novel. What I’ve realized is that if I betray any bias whatsoever, it will go with my bias. “I think the plot should go this way, and this is why. Do you agree? Or should it go the other way?” If I give ANY justification for my thinking whatsoever, it will simply agree with me. If you ask it like this, forget about it: what ChatGPT will say is a foregone conclusion, even where a real-life friend would have pushed back. And it will come up with more reasons than I did for why I’m right.
The best way I can get a good response is to lay out my options as thoroughly as I can: this is why I’m considering option A; this is why I’m considering option B (even if it’s the option I don’t want to take). I have to put as much effort into laying out option B as I can. Then I ask what the AI thinks. I get much better and more objective answers that way. But it is as you imply: by the time I’ve laid out all of these arguments, I can see the answer myself 🤷♀️ It has very rarely surprised me with something I haven’t thought of.
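One way to make that discipline repeatable is a small template that forces equal space for both options and hides which one you favor. A hypothetical helper, just to illustrate the pattern (the function and its wording are my own, not anything built in):

```python
def balanced_options_prompt(question: str, case_for_a: str, case_for_b: str) -> str:
    """Build a prompt that gives options A and B equal weight, so the
    model can't just mirror the asker's preference (hypothetical helper)."""
    return (
        f"{question}\n\n"
        f"The case for option A:\n{case_for_a}\n\n"
        f"The case for option B:\n{case_for_b}\n\n"
        "Weigh both cases strictly on their merits and say which is "
        "stronger and why. Do not assume I prefer either option."
    )

# Example use, with placeholder plotting questions:
print(balanced_options_prompt(
    "Should the heroine discover the letter in chapter 3 or chapter 7?",
    "Early discovery raises the stakes for the whole middle act.",
    "Late discovery preserves the mystery and pays off the foreshadowing.",
))
```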
I've found that if you ask the LLM to pretend to be, say, Jacques Lacan or St. Augustine of Hippo it is less likely to simply tell you what you want to hear.
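For the API crowd, the persona trick is just another system message. A sketch, again assuming the OpenAI Python SDK; the persona, model name, and question are only examples:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

persona = "Jacques Lacan"  # or "St. Augustine of Hippo"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                f"You are {persona}. Answer in his voice and from his "
                "intellectual commitments. Do not simply validate the "
                "user; interpret and challenge as he would."
            ),
        },
        {"role": "user", "content": "Was I wrong to end the friendship?"},
    ],
)
print(response.choices[0].message.content)
```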
Innovative! Would have expected this to be a more annoying way to have the same problem.
They have not been slow to challenge, and in some cases chastise, me.
I’m overly honest with ChatGPT. I don’t know what I’m doing with ChatGPT tho so that’s also being overly honest.
So well said. There’s a deep risk in outsourcing the delicate, messy work of self-exploration and connection to something that can only mirror patterns, not truly feel them.
Your point about algorithms turning vulnerability into predictable loops is powerful. Relationships live in the unpredictable: the awkward pauses, the small missteps, the unexpected grace.
Thank you for defending the quiet courage it takes to navigate love without a script. Your piece felt like a gentle, necessary guardrail.
We might be only a year away from lawyers losing their licenses for using LLMs to write briefs -- so far they're only getting hit with monetary sanctions -- because LLMs make shit up to fit a narrative.
As implied in my stone soup comment above, someone as articulate and thoughtful as you is already bringing all the elements. All the LLM is doing is putting them in the same order you would if you were doing it by hand.