Right, and because it’s a technical limitation, the service should be taken down. There are already laws that prohibit encouraging others to harm themselves.
I’m not arguing that regulation or lawsuits aren’t the way to handle this - I was worried the case would get thrown out based on the wording of the part I commented on.
As someone else pointed out, the software did what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be wrong entirely.
I agree. But that’s not how these LLMs work.