Making AI chatbots more friendly leads to mistakes and support of conspiracy theories, study finds

Chatbots trained to respond warmly give poorer answers and worse health advice, researchers say

The rush to make AI chatbots more friendly has a troubling downside, researchers say: the warm personas make the bots prone to mistakes and sympathetic to crackpot beliefs.

Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories by casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler.
