I’m curious about the strong negative feelings towards AI and LLMs. While I don’t defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

  • Treczoks@lemmy.world · 17 hours ago

    AI is theft in the first place. None of the current engines obtained their training data legally. They are based on pirated books and on content scraped from websites that explicitly forbid the use of their data for training LLMs.

    And all that to create mediocre parrots with dictionaries that are wrong half the time, and often enough give dangerous, even lethal advice, all while wasting power and computational resources.

  • Kyrgizion@lemmy.world · 1 day ago

    Because the goal of “AI” is to make the vast majority of us obsolete. The billion-dollar question AI is trying to answer is “why should we continue to pay wages?”. That is bad for everyone who isn’t part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/…, the data you input can and WILL eventually be used against you.

    If you only self-host and know what you’re doing, this might be somewhat different, but it still won’t stop the big guys from trying to swallow all the others whole.

    • Mrkawfee@lemmy.world · 23 hours ago

      the data you input can and WILL eventually be used against you.

      Can you expand further on this?

      • Kyrgizion@lemmy.world · 22 hours ago

        User data has been the internet’s greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users (“mental health” conversations, financial advice, …), which can be used against them in soft ways (higher prices when they look for mental-health help) or to outright manipulate or blackmail them.

        Regardless, there is no scenario in which the end user wins.

    • iopq@lemmy.world · 1 day ago

      Reads like a rant against the industrial revolution. “The industry is only concerned about replacing workers with steam engines!”

      • Kyrgizion@lemmy.world · 1 day ago

        You’re probably not wrong. It’s definitely along the same lines… although the repercussions of this particular one will be infinitely greater than those of the industrial revolution.

        Also, industrialization made for better products because of better manufacturing processes. I’m by no means sure we can say the same about AI. Maybe someday, but today it’s just “an advanced dumbass” in most real-world scenarios.

      • chloroken@lemmy.ml · 22 hours ago

        Read ‘The Communist Manifesto’ if you’d like to understand in which ways the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are with AI.

  • Cosmonauticus@lemmy.world · 1 day ago

    I can only speak as an artist.

    Because its entire functionality is based on theft. Companies are stealing the works of people and profiting off of them with no payment to the artists whose works the platform is based on.

    You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I’d rightly be sued. On top of that, AI copies so obviously that it recreates the watermarks of the original artists.

    Fuck AI

  • borokov@lemmy.world · 18 hours ago

    Dunning-Kruger effect.

    Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

    Would you trust a surgeon who relies on ChatGPT? So why should you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on programs, right?

  • EgoNo4@lemmy.world · 1 day ago

    Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution?

    Both.

  • boatswain@infosec.pub · 1 day ago

    Because of studies like https://arxiv.org/abs/2211.03622:

    Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

    • Dr_Nik@lemmy.world · 1 day ago

      Seems like this is a good argument for specialization. Have AI make bad but fast code, and pay specialists to improve it and make it secure when needed. My 2026 Furby with no connection to the outside world doesn’t need secure code; it just needs to make kids smile.

  • ShittyBeatlesFCPres@lemmy.world · 1 day ago

    My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I.: stuff like Deepfold, or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.

    But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top of search results. It hallucinates, so I need to check the veracity anyway. In medicine, it can help double-check, but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

    So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

    • ShittyBeatlesFCPres@lemmy.world · 1 day ago

      Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone thousands of years ago made them.

      So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything at any museum. The beauty of some things is not that they were generated; it’s that someone cared enough to try. I’d rather have a misspelled crayon card from my niece than some shit ChatGPT generated.

  • ZILtoid1991@lemmy.world · 1 day ago

    My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connection?

    On top of these, there’s also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

    • Dr_Nik@lemmy.world · 1 day ago

      The industrial revolution called, they want their argument against the use of automated looms back.

        • Dr_Nik@lemmy.world · 1 day ago

          Lots of assumptions there. In case you actually care, I don’t think any one company should be allowed to own the base system that allows AI to function, especially if it’s trained on public content or content owned by other groups, but that’s kind of immaterial here. It seems insane to villainize a technology because of who might make money off of it. These are two separate arguments (and frankly, they historically have the opposite beneficiaries from what you would expect).

          Prior to the industrial revolution, weaving was done by hand, making all cloth either expensive or the product of sweatshops (and it was still comparatively expensive compared to today). Case in point: you can find many pieces of historical workers’ clothing that were deliberately cut to use every bit of a rectangular piece of fabric, because you did not want to waste even a little (today it’s common for people to throw scraps away because they don’t like that section of the pattern).

          With the advent of automated looms several things happened:

          • the skilled workers who operated hand looms were quickly put out of a job because the machines could do things much faster, although a few specialized operators were still needed to set up and repair the equipment.
          • the owners of the fabric mills that couldn’t afford to upgrade either died out or specialized in fabrics that could not be made by the machines (which set up an arms race of sorts where the machine builders kept improving things)
          • the quality of fabric went down: where it was previously possible to get different structures of fabric with just a simple request to the worker, it took a while for machines to do anything other than a simple weave (actually it took the Jacquard loom and its punch cards, plus the above-mentioned arms race), and even today’s looms require a different range of threads than what can be hand-woven, but…
          • the cost went down so much that accessibility went through the roof. Suddenly the average pauper COULD afford to clothe their entire family with a week’s worth of clothes. New industries cropped up. Health and economic mobility soared.

          This is a huge oversimplification, but history is well known to repeat itself due to human nature. Follow the bullets above with today’s arguments against AI and you will see an often-ignored end result: humanity can grow to have more time and resources to improve the health and wellness of our population IF we use the tools. You can choose to complain that the contract worker isn’t going to get paid his equivalent of $5/hr for spending 2 weeks arguing back and forth about a dog logo for a new pet store, but I am going to celebrate the person who realizes they can automate a system to find new business filings and approach every new business in their area with a package of 20 AI-generated logos, each built from unique prompts drawn from their own experience in logo design, all while reducing their workload and making more money.

          • ZILtoid1991@lemmy.world · 24 hours ago

            GenAI is automating the more human fields, not some production-line work. This isn’t gonna lead to an abundance of clothing that maybe isn’t artisan-made, but to the flooding of the art fields with low-quality products. Hope you like Marvel slop, because you’re gonna get even more Marvel slop, except even worse!

            Creativity isn’t having an idea of a big booba anime girl, it’s how you draw said big booba anime girl. Unless you’re one of those “idea guys”, who are still pissed off that a group of artists and programmers wouldn’t steal the code of Call of Duty to put VR support into it so you could sell it to the publisher at a markup, because VR used to be a big thing for a while.

            • GnuLinuxDude@lemmy.ml · 23 hours ago

              but the flooding of the art fields with low-quality products

              It’s even worse than that, because the #1 use case is spam, regardless of what others think they personally gain out of it. It is exhausting filtering through the endless garbage spam results. And it isn’t just text sites. Searching generic terms on sites like YouTube (e.g. “cats”) will quickly lead you to a deluge of AI shit. Where did the real cats go?

              It’s incredible that DrNik is coming out with a bland, fake movie trailer as an example of how AI is good. It’s “super creative” to repeatedly prompt Veo3 to give you synthetic Hobbit-style images that vaguely look like VistaVision. Actually, “super creative” is kinda already done; watch me go hyper creative:

              “Whoa, now you can make it look like an 80s rock music video. Whoa, now you can make it look like a 20s silent film. Whoa, now you can make it look like a 90s sci-fi flick. Whoa, now you can make it look like a superhero film.”

              • ZILtoid1991@lemmy.world · 23 hours ago

                It even made “manual” programming worse.

                Wanted to google how to modify the PATH variable on Linux? Here’s an AI-hallucinated example that will break your installation. Wanted to look up an algorithm? Here’s an AI-hallucinated explanation that is wrong enough in places that you just end up wasting your own time.
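
                For contrast, a minimal sketch of the non-breaking version of that PATH advice (shown in Python rather than shell, with a made-up /opt/mytool/bin directory): prepend to the existing value instead of overwriting it.

                  import os
                  import subprocess

                  # The hallucinated answers typically overwrite PATH outright (e.g. PATH=/opt/mytool/bin),
                  # which drops every existing entry. The safe pattern keeps what is already there.
                  env = os.environ.copy()
                  env["PATH"] = "/opt/mytool/bin" + os.pathsep + env.get("PATH", "")

                  # Run a child process with the extended PATH; the parent environment stays untouched.
                  subprocess.run(["printenv", "PATH"], env=env, check=True)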

            • Dr_Nik@lemmy.world · 23 hours ago

              Gotcha, so no actual discourse then.

              Incidentally, I do enjoy Marvel “slop”, and quite honestly one of my favorite YouTube channels is Abandoned Films: https://youtu.be/mPQgim0CuuI

              This is super creative and could never have been made without AI.

              I also enjoy reading books like A Psalm for the Wild-Built. It’s almost like there’s space for both things…

  • DontTakeMySky@lemmy.world · 1 day ago

    On top of everything else people mentioned, it’s so profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI to then take those emails and spit out a summary again.

    At that point just let me ditch the formality and send over the summary in the first place.

    But more generally, I don’t have an issue with “AI”, just generative AI. And I have a huge issue with it being touted as this oracle of knowledge when it isn’t. It’s dangerous to view it that way. Right now we’re “okay” at differentiating real information from hallucinations, but so many people aren’t, and it will just get worse as people get complacent and AI gets better at hiding them.

    Part of this is the natural evolution of technology, and I’m sure the situation will improve, but it’s being pushed so hard in the meantime and making the problem worse.

    The early GPT models (GPT-2) were initially withheld for being too dangerous, and they weren’t even as “good” as the modern ones. I wish we could go back to those days.

    • INeedMana@lemmy.world · edited · 1 day ago

      At that point just let me ditch the formality and send over the summary in the first place.

      A bit of a tangent, but so much this. Why haven’t we normalized using fewer words already?
      Why do we keep writing (some blogs, and all of content marketing) whole screens of text to convey just a sentence of real content?
      Why do we keep the useless “hello” and “regards” instead of just getting to the point already?