• snooggums@lemmy.world
    1 month ago

    LLMs are like a multitool, they can do lots of easy things mostly fine as long as it is not complicated and doesn’t need to be exactly right. But they are being promoted as a whole toolkit as if they are able to be used to do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.

    • TeddE@lemmy.world
      1 month ago

      Because the tech industry hasn’t had a real hit of its favorite poison, “private equity,” in too long.

      The industry has run the same playbook since at least 2006. Likely before, but that’s when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3–5 years or so.

      This time it’s AI, last it was crypto, and we’ve had web 2.0, 3.0, and a few others I’m likely missing.

      But yeah, it’s sold like a panacea every time, when really it’s revolutionary for like a handful of tasks.

    • rottingleaf@lemmy.world
      1 month ago

      That’s because they look like “talking machines” from various sci-fi. Normies feel as if they are touching the very edge of progress. The rest of our lives and the Internet kinda don’t give that feeling anymore.

    • sugar_in_your_tea@sh.itjust.works
      1 month ago

      Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they’re great at:

      • writer’s block - get something relevant on the page to get ideas flowing
      • narrowing down keywords for an unfamiliar topic
      • getting a quick intro to an unfamiliar topic
      • looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)

      Some things they’re terrible at:

      • deep research - verify everything an LLM generates if accuracy is at all important
      • creating important documents/code
      • anything else where correctness is paramount

      I use LLMs a handful of times a week, and pretty much only when I’m stuck and need a kick in a new (hopefully right) direction.

      • snooggums@lemmy.world
        1 month ago
        > • narrowing down keywords for an unfamiliar topic
        > • getting a quick intro to an unfamiliar topic
        > • looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)

        I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.