• proceduralnightshade@lemmy.ml
    4 days ago

    So we know that in certain cases, using chatbots as a substitute for therapy can lead to increased suffering, increased risk of harm to self and others, and amplified symptoms of certain diagnoses. Does this mean we know it couldn’t be helpful in certain cases? No. You ingested the exact same logic corpos have with LLMs, which is “just throw it at everything”, and you don’t seem to notice you apply it the same way they do.

    We might have enough data at some point to assess what kinds of people could benefit from “chatbot therapy” or something along those lines. Don’t get me wrong, I’d prefer we could provide more and better therapy/healthcare in general to people, and that we had fewer systemic issues for which therapy is just a bandage.

    it’s worse than nothing

    Yes, in total. But not necessarily in particular. That’s a big difference.

    • truthfultemporarily@feddit.org
      4 days ago

      If you have a drink that creates a nice tingling sensation in some people and makes other people go crazy, the only sane thing to do is to take that drink off the market.

      • proceduralnightshade@lemmy.ml
        4 days ago

        Yeah, but that applies to social media as well. Or, idk, amphetamines. Or fucking weed. Even meditation. Which are all still around, some more regulated than others. But that’s not what you’re getting at; your point is AI chatbots = bad, and I just don’t agree with that.