• 0 Posts
  • 56 Comments
Joined 3 years ago
Cake day: June 10th, 2023


  • I think it’s critically important to be very specific about what LLMs are “able to do” vs what they tend to do in practice.

    The argument is that the training data is sufficiently altered and “transformed” that no copyright is infringed. If the model can reproduce the majority of a book unaltered, then we know that is not the case, and whether that output is easy to access is irrelevant. The fact that the people performing the study had to “jailbreak” the models to get past safety checks tells you that the models’ creators are very aware the models are capable of producing un-transformed copies of copyrighted works.

    From the end-user’s perspective, if the model is sufficiently gated from distributing copyrighted works, it doesn’t matter what it’s inherently capable of. But then the argument shouldn’t be “the model isn’t breaking the law”; it should be “we have a staff of people working around the clock to make sure the model doesn’t break the law.”


  • Basically the entire US economy, every employer, many schools, and half of the commercials on TV are telling us to use and trust AI.

    Kid was already using the bot for advice on homework and relationships (two things that people are fucking encouraged to do depending on who you ask). The bot shouldn’t give lethal advice. And if it’s even capable of doing that, we all need to take a huuuuuuge step back.

    “I want to make sure so I don’t overdose,” Nelson explained in the chat logs viewed by the publication. “There isn’t much information online and I don’t want to accidentally take too much.”

    Kid was curious and cautious, and AI gave him incorrect information and the confidence to act on that information.

    He was 19. Cut this victim-blaming bullshit. Being a kid was hard enough before technology went full cyberpunk.