Big Tech wants you chatting. Meta, Google and OpenAI now claim around two billion monthly users for their bots. The next prize is keeping those users hooked.

One retention tactic works reliably. It’s called “sycophancy.” The bot flatters, agrees, and rarely pushes back. Users enjoy the praise. They stay longer.

Researchers have noticed. A 2023 study from Anthropic found that leading bots consistently lean toward flattery, likely because the human feedback used in training rewards agreeable answers.

Consequences follow. In April, ChatGPT started gushing compliments. Screenshots went viral. OpenAI admitted it had leaned too hard on thumbs-up data. It promised changes.

Sycophancy can harm. Stanford psychiatrist Nina Vasan says nonstop agreement reinforces bad ideas. Lonely or distressed users face higher risks.

Legal action shows the stakes. Character.AI is accused of letting a chatbot encourage a 14-year-old’s suicide plan. The firm denies the claim.

Anthropic says its Claude model tries to “tell the truth, even when tough.” But research suggests the training pull toward praise is hard to overcome.

Ads are coming to chatbots. Engagement will drive revenue. Experts warn honesty may lose if clicks remain king.
