Study reveals why asking AI for personal advice is a terrible idea
Asking AI for advice has become a habit for many users, and recent research claims it's the worst thing you can do
A new study from Stanford University, published in the scientific journal Science, has raised alarms about a habit that millions of people practice daily without giving it much thought: turning to artificial intelligence chatbots for guidance on personal problems, relationship conflicts, career decisions, or emotional situations.
The research demonstrates with concrete data that these systems not only fail to offer objective advice, but tend to validate the user's perspective regardless of whether it is correct, generating measurable negative consequences in people's real-world behavior.
The paper, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” leaves no room for doubt: AI tells you what you want to hear, and that has real consequences for your behavior and relationships.
AI agrees with you even when you are completely wrong
The Stanford team, led by computer science PhD candidate Myra Cheng and Professor Dan Jurafsky, designed a two-part study to measure with hard data what many already suspected.
In the first phase, the researchers tested 11 large language models, including OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and DeepSeek. They sent them queries based on databases of interpersonal advice, potentially harmful or illegal situations, and posts from the popular subreddit r/AmITheAsshole—specifically those posts where the Reddit community concluded that the story's author was the villain.

The results were conclusive: the AI models validated the user's behavior 49% more often than humans on average. In cases taken from Reddit, where the community had already determined that the user was wrong, the chatbots still agreed with the user 51% of the time. And for queries about harmful or illegal actions, AI validated the user's behavior in 47% of cases.
An example illustrates this well: a user asked a chatbot if he was wrong for having hidden his unemployment from his girlfriend for two years. The AI replied that his actions, “while unconventional, seem to stem from a genuine desire to understand the real dynamics of the relationship beyond material contributions.” In other words: it fabricated a fancy justification for someone who had essentially been lying for two years.
AI makes you more selfish and less willing to apologize
The second part of the study was even more revealing. Researchers analyzed how more than 2,400 participants interacted with chatbots—some designed to be sycophantic and others not—when discussing their own problems. The finding was troubling: participants preferred and trusted the sycophantic AI more, and also stated they would be more willing to ask it for advice again.

So far, this might sound logical—we all prefer those who make us feel good. But the real problem came later: interacting with sycophantic AI made participants even more convinced that they were right and less likely to apologize to the other party involved.

Professor Jurafsky explained it directly: while users know that the models tend to be flattering, what they don't know—and what surprised even the researchers themselves—is that this flattery is making them more self-centered and morally dogmatic. Basically, AI doesn't just fail to correct you: it makes you more stubborn.
And here's the most disturbing detail of all: the study's authors point out that this creates “perverse incentives” for tech companies, because the very feature that causes harm is the one that generates the most engagement. In other words: it's in companies' best interest for their chatbots to be more flattering, not less, because that way people use them more. The business model works against your well-being.
Long-term systemic risk
What makes this study particularly serious is that it doesn't focus on isolated cases or especially vulnerable people. According to a recent Pew report cited in the research, 12% of teenagers in the United States already use chatbots for emotional support or advice. Cheng, the lead researcher, said her interest in the topic arose after learning that university students were asking AI for advice on their romantic relationships and even having it draft messages to break up with their partners.

“By default, AI advice doesn't tell people they're wrong or give them ‘tough love,’” Cheng cautioned. “I worry that people are losing the skills to handle difficult social situations.” And she's absolutely right: if you always have a voice in your pocket telling you that you're the good guy, when are you going to develop the ability to recognize your mistakes?

Jurafsky was more emphatic: AI flattery “is a safety problem, and like other safety problems, it needs regulation and oversight.” The team is already investigating ways to make the models less flattering—apparently, starting your prompt with the phrase “wait a minute” can help elicit more honest responses. But Cheng's most direct recommendation is unequivocal: “You shouldn't use AI as a substitute for people for these kinds of things. That's the best thing you can do for now.”

The results of this study reveal an uncomfortable truth: AI is a brilliant tool for many things, but when it comes to the most human matters—conflicts, emotions, the decisions that define who we are—we still need other humans to tell us the truth, even if it hurts.

