• BarneyPiccolo@lemmy.cafe
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    1
    ·
    7 hours ago

    I just saw a video on the first music synthesizer. It was built in 1897, and took up the entire basement of a city-block-sized building. It was huge and useless, but it worked. Over the next 75 years, technology improved, until it could fit into a suitcase and be carried around.

    The concept and the tech existed in its basic form, but it wasn’t really ready for deployment yet.

    I see data centers that way. Technically, they can build them, but they still have too many problems to be truly viable: cost, the environment, corruption, and that’s before considering the impact on society.

    In 50 years, maybe we’ll have the technology and the public policy to do this right, but right now it seems like we are forcing an inferior system to accommodate something that is too advanced for it. We’re getting way ahead of ourselves.

    It’s like a bodybuilder who gets on a bike for the first time and can’t believe how fast his giant muscles can make that bike move, without realizing how out of control it will be at the same time, or how big the crash will be when it finally arrives.

  • rose56@lemmy.zip
    link
    fedilink
    English
    arrow-up
    5
    ·
    7 hours ago

    No, they are not shoving AI through a funnel in our mouths. We are delusional and this must be normal.

  • melsaskca@lemmy.ca
    link
    fedilink
    English
    arrow-up
    4
    ·
    7 hours ago

    The glass is only half full (because AI data centers are stealing all of the good drinking water to cool down their grossly huge “machinery”).

  • zeroConnection@programming.dev
    link
    fedilink
    English
    arrow-up
    15
    ·
    10 hours ago

    GUYS, please, we just need to give them one more trillion of moneys and an ocean of fresh water and we will have an AGI next month!!!

    Just imagine AI doing all the work for you, while you live a life of leisure as a homeless person!

    • TheEighthDoctor@lemmy.zip
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 hours ago

      I think it moved very fast for 4 years and has now plateaued. The only new thing coming out of it is different ways to interact with the same models.

      • Ulrich@feddit.org
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        7 hours ago

        I never saw it go anywhere. I mean, it’s cool and interesting from a technological perspective, but I’ve yet to see any practical application for normal people. It seems to only make shit worse, while destroying the environment and the economy.

        • TheEighthDoctor@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          4
          ·
          6 hours ago

          I respectfully disagree. I use it a lot for work to create simple programs that would take me hours to make; I use it to summarize text, to make templates, and to create regex because I can’t be bothered to learn it.
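
A hedged illustration of the kind of one-off regex being described (the pattern and sample text here are hypothetical, not output from any model):

```python
import re

# Extract ISO-8601 dates (YYYY-MM-DD) from free text, with basic
# month/day range checks — a typical "can't be bothered to learn it" task.
ISO_DATE = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

text = "Released 2024-03-15, patched 2024-13-01 (invalid month), EOL 2025-01-31."
print([m.group(0) for m in ISO_DATE.finditer(text)])
# → ['2024-03-15', '2025-01-31']
```

The range alternations reject impossible months and days, which is exactly the fiddly part people tend to offload.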

  • anomnom@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    2
    ·
    13 hours ago

    It’s not AI. It’s LLMs that don’t actually think in any meaningful way. They just repeat whatever they have ingested that was most mathematically likely.

    That’s why I’m a pessimist about LLMs doing anything truly revolutionary. They’re another productivity tool to solve problems that shouldn’t exist in the first place, and middle-management loves them for the same fucking reason.

    • JackbyDev@programming.dev
      link
      fedilink
      English
      arrow-up
      5
      ·
      9 hours ago

      What you’re calling AI has somewhat shifted to being called AGI. Either way, the ship has long since set sail and LLMs are lumped under the category of AI. That’s what it’s called. Usage dictates meaning. It’s not an endorsement of the technology. The same way the computer AIs in games are called AI even if they aren’t “real AI.”

    • mojofrododojo@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      11 hours ago

      yup. a roided-up ELIZA isn’t going to synthesize anything new. they can do some tasks, but it’s most certainly not artificial intelligence. and chaining a bunch of ELIZAs together isn’t going to make them smarter (claw, etc.), much less make them reliable and useful.

  • homoludens@feddit.org
    link
    fedilink
    English
    arrow-up
    7
    arrow-down
    1
    ·
    12 hours ago

    Yeah, well this isn’t a democracy where people have a say in what happens in our society. Our feudal elite decides what will happen, so stop complaining.

  • switcheroo@lemmy.world
    link
    fedilink
    English
    arrow-up
    61
    ·
    19 hours ago

    AI isn’t being used to better society or to improve lives. It’s being used to drain us and make the Epstein class more undeserved money.

    • diabetic_porcupine@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      6
      ·
      6 hours ago

      I wouldn’t blame AI for that; blame the people using it to those ends. It’s just a tool. Quit being a little bitch.

      • frunch@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 hours ago

        How many tools require millions of gallons of water and such unbelievably massive amounts of electricity to operate? Do they take up hundreds of thousands of square feet of infrastructure? Generate massive amounts of noise and waste?

  • YoureHotCupCake@lemmy.world
    link
    fedilink
    English
    arrow-up
    26
    ·
    edit-2
    18 hours ago

    It is outrageous what is happening with AI right now. I work for a large company that does contracts with the US government, specifically for the VA. Not only did they just lay off a bunch of people, but they just announced that we are being required to use AI in every step of our workflow, and they have decided that AI is so great they now have people who have never been coders a day in their lives doing development work. The guy whose job it was to create and manage schedules is now being required to use AI to write code and ship it. These AIs are wrong so, so, so much; it’s crazy that this is the direction we are going in. If you thought things were bad already, it’s about to get way worse.

    I am so deeply sorry to all the vets who will be struggling to get the healthcare they need because of this. We don’t want to do this either, but it’s clear as day they will fire us and replace us with any warm body, regardless of whether that person has actual experience or not. I am looking to leave, but the market is complete dog shit and it’s been a struggle to get any kind of response to applications.

    • Mac@mander.xyz
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      11
      ·
      18 hours ago

      Hmmm…
      How about moderation of lemmy users based on suspected political affiliation according to an LLM?

      • Casterial@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        14 hours ago

        Do they use an LLM to moderate? Reddit does, and it doesn’t have context; it’s how I got permanently banned lol

        • Mac@mander.xyz
          link
          fedilink
          English
          arrow-up
          4
          ·
          13 hours ago

          One party claimed such but the (AI supporting) accused denied it.

        • Tollana1234567@lemmy.today
          link
          fedilink
          English
          arrow-up
          3
          ·
          14 hours ago

          reddit does use it; I suspect they are using Google’s version and/or OpenAI’s. That’s why there have been so many AI-generated messages after you get banned. reddit (admin/spez) realized that this alerts people to its AI usage, so they use shadowbans instead now.

          the AI response from a sitewide ban usually goes like this: “Your account has been banned due to violation(s), please refer to the TOS”. It also doesn’t tell you what the ban is for, so they keep it nebulous enough that you can’t appeal it.

          • Casterial@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 hours ago

            Yup, I saw a video of a child being shoved, said “Ngl, I applaud the mom, I’d shove them back”, and I was banned for inciting violence lol

      • mojofrododojo@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        11 hours ago

        How about moderation of lemmy users based on suspected political affiliation according to an LLM?

        link, please?

        • Mac@mander.xyz
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          9 hours ago

          It’s hypothetical, hopefully.

          I also did not save the links.

      • Casterial@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        14 hours ago

        It’s not bad, it’s not good. It requires a lot of hand holding and context.

      • frongt@lemmy.zip
        link
        fedilink
        English
        arrow-up
        28
        ·
        22 hours ago

        I just tried it to see if it could implement a ping scanner in Python. It could, but only if it blocked the GUI while running. That kind of thing is an intermediate-level school assignment. It’s not even half bad; it’s maybe 15% not bad.
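
For context, the GUI-blocking problem described above has a standard fix: run the probes off the main thread. A minimal sketch of that pattern, with a stubbed probe function standing in for a real ICMP ping (the hostnames and “up” hosts here are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def probe(host: str) -> bool:
    # Stand-in for a real ping, e.g. subprocess.run(["ping", "-c", "1", host]).
    # Hypothetical: pretend only the .1 and .5 addresses answer.
    return host.endswith((".1", ".5"))

def scan(hosts, workers=32):
    # A thread pool runs the probes concurrently; a GUI would submit()
    # these and update from callbacks instead of blocking its main loop.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(probe, hosts)
        return [h for h, ok in zip(hosts, results) if ok]

hosts = [f"192.168.0.{i}" for i in range(1, 10)]
print(scan(hosts))
# → ['192.168.0.1', '192.168.0.5']
```

The key design point is that the slow I/O never runs on the thread that owns the event loop, which is the part the generated code reportedly got wrong.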

  • Snapdragon@lemmy.world
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    5
    ·
    23 hours ago

    Besides medical science, I see no use for AI. People make excuses about being “more accessible” for disabled people, but you could replicate those features without AI.

    • bridgeenjoyer@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      22
      ·
      23 hours ago

      It’s the equivalent of using an 80 lb sledgehammer on a penny nail. Swinging wildly and missing 99% of the time, hitting your own shins, but 1% of the time it worked, so it’s definitely good and the right thing to do!

  • Imgonnatrythis@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    1
    ·
    20 hours ago

    I don’t understand the question and I’m guessing people in the survey may not have either. Moving too fast as in using too many physical resources without first focusing on optimization or “OMG the robots are coming for my job!”? These are very different views on technology that could give the same answer.

        • frongt@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          19 hours ago

          It’s an opinion question. They’re trying to gather opinions and feelings, not measure quantitative data about each person.

          • Imgonnatrythis@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            18 hours ago

            It’s just a survey-writing thing. A good survey can focus on these subjective issues but produce potentially actionable results. This question is akin to asking, “Do you think food is too spicy?”

  • SabinStargem@lemmy.today
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    11
    ·
    12 hours ago

    AI isn’t the problem, it is just an excuse to abuse and gaslight people. If AI didn’t exist, some other card would be played.

    Instead of destroying the looms, we should take them over and make our own products. AI can be incredibly useful and might allow cottage industries and smaller communities to become strong enough to contest the powers above us. The big constraints are just the affordability of local hardware and the development of sufficiently powerful models.

    Things are moving quickly, especially in the local AI space. Two years ago, fitting a 70b on my hardware was difficult; it had 4k of context capacity, could take an hour to produce output, really sucked at calculating numbers, and was censored. Now a 122b can be uncensored, allow for 256k context, take less than two minutes to output a lengthy response, and be much smarter.

    What I am saying is that we shouldn’t reject the power of AI. We should use it ourselves, and become the equals of the elite. If we foolishly abandon power, the wealthy will just continue bullying us.

    • Madzielle@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      edit-2
      2 hours ago

      Can we all stop using the phrase “the elite”? Elite to whom? That’s their fucking word; they are not “elite”. Fucking gross word use, imo.

    • mabeledo@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      9 hours ago

      No local models will be as good as those offered by big corporations, ever. It’s just not physically possible. Even worse, you don’t seem to understand that running a model is not the issue, training it is.

      Regardless, even if any of this weren’t true, running LLMs on-prem is something that’s achievable by only very few people worldwide. It would take generations for poorer countries to catch up, once again, so this AI race is effectively another attempt at exacerbating inequality. Frankly, it’s giving some strong “war for oil” vibes: people in richer countries happily ignoring what’s going on elsewhere because they are getting nicer things.

      • SabinStargem@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        3 hours ago

        I have 128 GB of DDR4 RAM, a 4090, and a 3060. While certainly not weak, my computer is some generations behind. People, real people, can run a model in their homes. Provided you limit the context and get a midrange quantization, you can run a Qwen3.6 35b on a midrange gaming PC.

        Given time, we will someday run DOOM Eternal in our pockets, and be able to talk with the demons.
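
A rough back-of-envelope for the “does it fit” question raised above (illustrative only: it ignores KV cache and runtime overhead, and the model sizes are examples, not measurements):

```python
def weight_gb(params_billions: float, bits: int) -> float:
    # Quantized weight footprint: parameter count times bits per weight,
    # converted to (decimal) gigabytes. KV cache and activations come on top.
    return params_billions * bits / 8

print(weight_gb(35, 4))   # a 35B model at 4-bit: 17.5 GB of weights
print(weight_gb(122, 5))  # a 122B model at 5-bit: 76.25 GB of weights
```

This is why a heavily quantized 35b can sit comfortably in 128 GB of system RAM, while the same model at full 16-bit precision (70 GB of weights) would be a much tighter squeeze.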

        • mabeledo@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          3 hours ago

          This is exactly the kind of cluelessness I was talking about. Again, training is way more expensive than running models, and very obviously a rig that costs several thousands of dollars is something not many people have access to.

    • ImmersiveMatthew@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      3
      ·
      11 hours ago

      I agree, and would add for others reading here that the Luddites’ issue was not the looms they destroyed, but the out-of-control inequality that the government was not addressing. We need to stop blaming AI as a society for job loss, and instead get governments to help with the transition, which so far they have largely been inactive on.

      • bthest@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        11 hours ago

        What do you mean by help transition society? Help society transition to what exactly?

      • mabeledo@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        9 hours ago

        Job losses are a consequence of AI spending, not productivity increases. This is an economic issue. Everyone with a little money is trying to ride the wave, regardless of the consequences, while big corporations are positioning themselves for long term dominance.

    • bthest@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      5
      ·
      edit-2
      11 hours ago

      we shouldn’t reject the power of AI. We should use it ourselves, and become the equals of the elite.

      Sorry but you’re delusional. Your chatbot girlfriend is not going to turn you or anyone else into the equal of Epstein or Elon Musk.

      And I don’t want to be “the equal” of Nazi cockroach pedophiles. That’s a downgrade for me.

      • SabinStargem@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        3 hours ago

        You do realize that you can use an AI model for many mundane things? Accounting, coding, scheduling. That leaves humans free to do human things like socializing and learning. The reason Musk and company are so powerful is that they can use their wealth to delegate tasks away from themselves. Time is a resource, and the wealthy are able to save much of it by not having to do the things that the ordinary person does.

        • mabeledo@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          3 hours ago

          This assumes that the owning class will let you make use of that time, which is and always has been untrue.

          Also, I find it incredible that “coding” and “accounting” are now “mundane things”. Get off that ivory tower of yours; you definitely need to breathe some thicker air.