“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
An LLM is just next-word prediction. The AI doesn't know whether its output is correct or incorrect, true or false, fact or a lie.
So no, I'm not spreading misinformation. The only thing that might be spreading misinformation here is the AI.
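The "next-word prediction" point can be illustrated with a toy sketch. Real LLMs use neural networks over tokens, not raw counts, but a minimal bigram model (a hypothetical example, not any production system) shows the core idea: the model assigns probabilities to continuations purely from patterns in its training text, with no internal notion of whether a continuation is true.

```python
from collections import defaultdict, Counter

# Tiny "training corpus" -- note it contains both a true and a false claim.
corpus = "the sky is blue the sky is green the grass is green".split()

# Count which words follow each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) -- pure frequency, no fact-checking."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "is", the model assigns probability to "blue" and "green"
# based only on how often each followed "is" in the corpus.
print(next_word_distribution("is"))
```

The model happily gives "green" a higher probability after "is" simply because that pattern appeared more often, regardless of whether "the sky is green" is a fact.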