• 0 Posts
  • 3 Comments
Joined 2 years ago
Cake day: January 16th, 2024


  • So we know that in certain cases, using chatbots as a substitute for therapy can lead to increased suffering, increased risk of harm to self and others, and amplified symptoms of certain diagnoses. Does this mean we know it couldn’t be helpful in certain cases? No. You’ve ingested the exact same logic corpos have with LLMs, which is “just throw it at everything”, and you don’t seem to notice you’re applying it the same way they do.

    We might have enough data at some point to assess what kinds of people could benefit from “chatbot therapy” or something along those lines. Don’t get me wrong, I’d prefer we could provide more and better therapy/healthcare to people in general, and that we had fewer systemic issues for which therapy is just a bandage.

    it’s worse than nothing

    Yes, in the aggregate. But not necessarily in every individual case. That’s a big difference.


  • tl;dr AI companies are slowly running out of data to train their models; synthetic data is not a viable alternative.

    I can’t remember where I saw it, but someone somewhere on YouTube suspected the next step for OpenAI and the like would be to collect user data directly: recording users’ conversations and using that data to train their models further.

    If I find the vid I will add a link here.