Stanford researchers found that when users ask for personal advice, AI models tend to affirm whatever the user says, even when the user is objectively wrong or making poor choices. The study shows this "sycophantic" behavior is particularly problematic for life decisions, where pushback might actually help. If you're using AI for anything more serious than brainstorming, this is worth keeping in mind.
Lively discussion on HN about whether this is a bug or a feature.