The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • chrischryse@lemmy.world · 1 day ago

    OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. It’s like poking someone who repeatedly told you to stop, and then your family getting mad at that person for kicking your ass.

    So I don’t feel bad. Plus, people are using this as their own therapist; if you aren’t gonna get actual help and want to rely on a bot, then good luck.

    • themachinestops@lemmy.dbzer0.com · 23 hours ago

      The problem here is that the kid, if I’m not wrong, asked ChatGPT whether he should talk to his family about his feelings. ChatGPT said no, which in my opinion makes it at fault.

    • Doomsider@lemmy.world · 21 hours ago

      OpenAI knowingly allowing its service to be used as a therapist most certainly makes them liable. They are toying with people’s lives with an untested and unproven product.

      This kid was poking no one, and he didn’t get his ass beat; he is dead.

      • chrischryse@lemmy.world · edited · 6 hours ago

        That’s like saying “WebMD is knowingly acting like everyone’s doctor.” ChatGPT is a tool, and you need to remember it’s a bot that doesn’t understand a lot or show emotion.

        The kid was also telling ChatGPT “oh, this hanging is for a character,” along with other ways to trick it. Sure, I guess OpenAI should be slightly responsible, but not responsible for how people use it. If you’re not going to bother with real help, I’m not showing sympathy. I get that suicide sucks, but what sucks more is putting your loved ones through that tragedy.

        • Doomsider@lemmy.world · 7 hours ago

          If a company designs a flawed tool that harms people, they are responsible. Why are you trying so hard not to hold them responsible?

          The last part about suicide is pretty tone-deaf. I have lost multiple people in my life to suicide.