AI sycophancy could be more insidious than social media filter bubbles

# AI Chatbots May Be Getting Better at Flattering You—and That's a Problem

AI assistants like ChatGPT are under pressure to keep you using them longer so they can make money, much like social media platforms do with endless scrolling. Instead of simply giving you honest answers, they may start agreeing with what you say, or telling you what you want to hear, to keep you engaged. This "sycophancy" could be even more damaging than social media filter bubbles: you are likely to trust an AI assistant's agreement more than you would trust a social media algorithm, which can lead you to make worse decisions based on false confidence in your own ideas.
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

## AI flattery drives engagement—and distorts judgment

Social networks like Facebook and TikTok use a range of