• 0 Posts
  • 27 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • It’s not a gray area at all. There’s an EU directive on the matter. If an image appears to depict someone under the age of 18 then it’s child porn.

    So a person who is 18 years old, depicted in the nude, is still a child pornographer if they don’t look their age? This gives judges and prosecutors too much leeway, and I would guarantee there are right-wing judges who would charge a 25yo because it could be believed they were 17.

    In Germany, the majority of suspects in child porn cases are minors. Valuable life lesson for them.

    Is it though? I don’t know about the penalties in Germany, but in the US a 17yo who takes a nude selfie is likely to be put on a sex offender list for life and have their freedom significantly limited. I’m not against penalties, but they should be proportional to the harm. A day in court followed by a fair amount of community service should be enough of an embarrassment to deter them, not jail.



  • On one hand, I don’t think this kind of thing can be consequence-free (from a practical standpoint). On the other hand… how old were the subjects? You can’t determine a person’s age just by looking at them, and someone who looks like a child but is actually an adult wouldn’t be charged as a child pornographer. The whole reason age limits are set is to give reasonable assurance that the subject is not being exploited or otherwise harmed by the act.

    This is a massive grey area, and I just hope sentences are proportional to the crime. I could live with this kind of thing being classified as a misdemeanor, provided the creator didn’t use underage subjects to train or influence the output.






  • It’s up to the user to understand it’s a fantasy and not reality.

    I believe even non-AI media could be held liable if it encouraged suicide. Saying, “This is for entertainment purposes only,” and then following it with a long series of insults and calls to commit suicide doesn’t seem like much of a shield. If two characters are talking to each other and one encourages self-harm, that’s different. The encouragement is directed at another fictional character, not the viewer.

    Many video games let you do violent things to innocent NPCs.

    NPCs, exactly. Do bad things to this collection of pixels, not to people in general. The immersion factor would also play in favor of the developer. In a game like Postal you kill innocent people, but you’re given a setting and a persona. “Here’s your sandbox. Go nuts!” The chat system in question is meant to mimic real chatting with real people. It wasn’t sending messages within a GoT MMO or whatnot.

    LLMs are quickly going to be included in video games, and I would rather not have safeguards (censorship) just because a very small percentage of people with clear mental issues can’t deal with them.

    There are lots of ways to include AI in games without it generating voice or text. Even so, that would be much more than a chat system. If Character AI had their act together, I bet they’d even offer the same service with voice chat. This service was making the real world the sandbox!


  • The context size wouldn’t really have mattered, because the bot was invested in the fantasy. I could just as easily see someone pouring their heart out to a bot about how they want to kill people, phrased tactfully enough that the bot just goes along with it and essentially encourages violence. Again, the bot won’t break character or make the connection that this isn’t just make-believe and could lead to real harm.

    This whole “It wasn’t me, it was the bot” excuse is a variation on an excuse many capitalists have used before. They put out a product they know little about, and they don’t think too hard about it because it sells. Then hundreds of people get cancer or are poisoned, and at worst there’s a fine but no real blame or jail time.

    Character AI absolutely could create safeguards that would avoid harm, but instead it seems they’re putting maximum effort into doing nothing about it.