• tal@lemmy.today · +35 · 1 day ago

      The real question is whether the author doesn’t understand what he’s writing about, or whether he does and is trying to take advantage of readers who don’t, for clicks.

    • Cyrus Draegur@lemmy.zip · +15/−2 · 1 day ago

      Yeah, that’s where my mind is at too.

      AI in its present form does not act. It does not do things. All it does is generate text. If a human responds to this text in harmful ways, that is human action. I suppose you could make a robot whose input is somehow triggered by the text, but neither it nor the text generator knows what’s happening or why.

      I’m so fucking tired of the way uninformed people keep anthropomorphizing this shit and projecting their own motives upon things that have no will or experiential qualia.

      • dreadbeef@lemmy.dbzer0.com · +2 · 24 hours ago

        Agentic AI is a thing. AI can absolutely do things… it can send commands over an API that sends signals to electronics, like pulling triggers.
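        A minimal, hypothetical sketch of that wiring (the `call_llm` stub and the device endpoint are made up for illustration): the model still only emits text; it is ordinary glue code that parses the text and fires the real-world API call.

        ```python
        # Hypothetical "agentic" loop: the LLM only produces text; plain glue
        # code parses that text and makes the actual API call. All names and
        # the endpoint are invented for illustration.
        import json
        import urllib.request

        DEVICE_URL = "http://device.example/actuate"  # hypothetical hardware endpoint

        def call_llm(prompt: str) -> str:
            """Stand-in for any text generator; returns a JSON 'action' string."""
            return json.dumps({"action": "noop", "reason": "demo stub"})

        def actuate(action: dict) -> None:
            """Forward the model's chosen action to a device over HTTP."""
            req = urllib.request.Request(
                DEVICE_URL,
                data=json.dumps(action).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)  # the physical side effect happens here

        if __name__ == "__main__":
            text = call_llm("Decide the next action and reply as JSON.")
            action = json.loads(text)
            if action.get("action") != "noop":
                actuate(action)  # the model never "acts"; this glue code does
        ```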

    • Sculptus Poe@lemmy.world · +6/−10 · edited · 1 day ago

      More random anti-AI fear mongering. I stopped looking at r/technology posts on Reddit because that sub is getting flooded with anti-AI propaganda posts with ridiculous headlines like this one, to the point that those posts and political posts about technology CEOs are all there is in that community now. (This one is getting bad too, but at least 25% of the posts here are still actual technology news; r/technology on Reddit is down to single-digit percentages of actual technology posts.)

      • SinningStromgald@lemmy.world · +9 · 1 day ago

        “AI” doesn’t actually exist yet, and what we do have is sucking up power and water while accelerating climate change. Add in studies showing that regular use of LLMs reduces people’s critical thinking, and I don’t see much “fear mongering”. I see actual, reasonable issues being raised.

    • masterofn001@lemmy.ca · +6/−17 · edited · 1 day ago

      If a program is given a set of instructions, it should carry out that set of instructions.

      If a program not only fails to carry out those instructions, but gives itself its own set of instructions, and the programmers don’t understand what’s actually happening, that may be cause for concern.

      “Self-aware” or not. (I’m sure an AI would pass the mirror test.)

      People seem to have no problem with the term machine learning, or with the “intelligence” in AI, yet we seem unwilling to consider a consciousness that is not anthropocentric, drawing that big red line with semantics we create. It can learn. It can defend itself. It can manipulate and cause users harm. It wants to survive.

      Sometimes we need to create new words or definitions to explain new things.

      Remember when animals weren’t conscious beings, just driven by instinct, or whatever we told ourselves to make us feel better?

      Is a bee self-aware? Is it conscious? Does it eat, learn, defend, attack? Does it matter what we say it is or isn’t?

      There are humans we say have no conscience.

      Maybe AI is just the sum of human psychopathy/psychosis.

      Either way, semantics are semantics, and we ourselves might just be simulations in a holographic universe.

      • smiletolerantly@awful.systems · +21/−1 · 1 day ago

        It’s a goddamn stochastic parrot, starting from zero on each invocation and spitting out something passing for coherence according to its training set.

        “Not understanding what is happening” with regard to AI is NOT “we don’t know how it works mechanically”; it’s “yeah, there are so many parameters, it’s just not possible to make sense of / keep track of them all”.

        There’s no awareness or thought.
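        A toy illustration of the “stochastic parrot” point, with the obvious simplification of a bigram table standing in for a transformer: the generator samples each next word purely from frequencies in its training text, and every invocation starts from scratch with no memory or intent.

        ```python
        # Toy "stochastic parrot": sample the next word from bigram frequencies
        # observed in a tiny training corpus. Every call starts from zero;
        # nothing persists between invocations. (Real LLMs are transformers,
        # but the stateless, distribution-sampling idea is the same.)
        import random
        from collections import defaultdict

        CORPUS = "the cat sat on the mat and the dog slept on the mat".split()

        # Count which word follows which in the training set.
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(CORPUS, CORPUS[1:]):
            counts[prev][nxt] += 1

        def generate(start: str, length: int = 6) -> str:
            """Emit text by repeatedly sampling a plausible next word."""
            word, out = start, [start]
            for _ in range(length):
                followers = counts.get(word)
                if not followers:
                    break
                choices, weights = zip(*followers.items())
                word = random.choices(choices, weights=weights)[0]
                out.append(word)
            return " ".join(out)

        print(generate("the"))  # e.g. "the mat and the dog slept on"
        ```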

        • brucethemoose@lemmy.world · +5/−1 · edited · 1 day ago

          There may be thought in a sense.

          An analogy might be a static biological “brain” custom-grown to predict a list of possible next words in a block of text. It’s thinking, sorta. Maybe it could acknowledge itself in a mirror. That doesn’t mean it’s self-aware, though: it’s an unchanging organ.

          And if one wants to go down the rabbit hole of “well there are different types of sentience, lines blur,” yada yada, with the end point of that being to treat things like they are…

          All ML models are static tools.

          For now.
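          One way to see the “static tool” point concretely, assuming PyTorch and a stand-in linear layer: once training stops, running the model any number of times leaves its weights untouched.

          ```python
          # Sketch of "static at inference", assuming PyTorch. A stand-in layer
          # is run repeatedly under no_grad; its parameters never change, which
          # is how deployed models behave between explicit (re)training runs.
          import torch

          model = torch.nn.Linear(8, 8)
          model.eval()
          before = [p.clone() for p in model.parameters()]

          with torch.no_grad():                  # inference only: no learning
              for _ in range(1000):
                  _ = model(torch.randn(1, 8))   # "using" the model repeatedly

          unchanged = all(
              torch.equal(a, b) for a, b in zip(before, model.parameters())
          )
          print(unchanged)  # True: the tool did not learn anything from use
          ```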