• sin_free_for_00_days@sopuli.xyz · ↑2 · 5 hours ago

    I see the shit that people send out, obvious LLM crap, and wonder how poorly they must write to consider the LLM output worth something. And then I wonder whether the people consuming this LLM crap are OK with baseline mediocrity at best. And that’s not even getting into the ethical issues of using it.

  • ReptileVessel@lemmy.zip · ↑11 ↓3 · 10 hours ago

    As a DuckDuckGo user who uses Claude and ChatGPT every day, I don’t want AI features in DuckDuckGo because I would probably never use them. So many companies are adding chatbot features, and most of them can’t compete with the big names. Why would I use a bunch of worse LLMs and learn a bunch of new interfaces when I can just use the ones I’m already comfortable with?

  • BC_viper@lemmy.world · ↑1 ↓2 · edited · 5 hours ago

    Every poll is instantly skewed by its user base. I think AI is amazing, but it’s not worth the hype. I’m cautious about its actual uses and its spectacular failures. I’m not a “fuck AI” person, but I’m also not an “AI is going to be our god in 2 years” person. And I feel like I’m closer to the average.

  • thegoodyinthehoody@sh.itjust.works · ↑37 ↓5 · 16 hours ago

    As much as I agree with this poll, DuckDuckGo has a very self-selecting audience. The number doesn’t actually mean much statistically.

    If the general public knew that “AI” is much closer to predictive text than to intelligence, they might be more wary of it.

    • slappyfuck@lemmy.ca · ↑10 ↓1 · 12 hours ago

      There was no implication that this was a general poll designed to demonstrate the general public’s attitudes. I’m not sure why you mentioned this.

      • Lightfire228@pawb.social · ↑2 ↓6 · 11 hours ago

        Because that’s how most people implicitly frame headlines like this one: as a generalization of the public.

    • howrar@lemmy.ca · ↑1 · 9 hours ago

      The poll didn’t even ask a real question. “Yes AI or no AI?” No context.

    • ikirin@feddit.org · ↑3 · 16 hours ago

      I mean, you gotta hand it to “AI”: it is very sophisticated, and resource-intensive, predictive text.
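In the sense meant here, “predictive text” just means picking the likeliest next word given what came before. A minimal sketch of the idea (a bigram counter over a tiny made-up corpus; real LLMs are neural networks over tokens at vastly larger scale, but the training objective is the same next-token prediction):

```python
# Toy illustration of "predictive text": predict the next word as the
# one most often seen after the current word in a training corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- seen twice after "the" in the corpus
```

The scale and the model differ enormously, but “choose a plausible continuation” is the core mechanic in both cases.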

  • Gorilladrums@lemmy.world · ↑10 ↓6 · 16 hours ago

    I think most people find something like ChatGPT or Copilot useful in their day-to-day lives. LLMs are a very helpful and powerful technology. However, most people are against these models collecting every piece of data imaginable about them. People aren’t against the tech; they’re against the people running the tech.

    I don’t think most people would mind if a FOSS LLM designed with privacy and complete user control over their data were integrated, with an option to completely opt out. I think that’s the only way to get people to trust this tech again and get them on board.

    • Jason2357@lemmy.ca · ↑10 · 11 hours ago

      In the non-tech crowds I have talked to about these tools, people have mostly been concerned with them just being wrong, and, when they are integrated with other software, annoyingly wrong.

      • Gorilladrums@lemmy.world · ↑1 ↓1 · 7 hours ago

        Idk, most people I know don’t see it as a magic crystal ball that’s expected to answer all questions perfectly. I’m sure people like that exist, but for the most part I think people understand that these LLMs are flawed. However, I know a lot of people who use them for everyday tasks like grammar checks, drafting emails/documents, brainstorming, basic analysis, and so on. They’re pretty good at these sorts of things because that’s what they’re built for. The issues of privacy and greed remain, and I think some of them would at least be partially solved if these models were designed with privacy in mind.

    • Katana314@lemmy.world · ↑5 · 11 hours ago

      If I understand right, the usefulness of basic questions like “Hey ChatGPT, how long do I boil pasta?” is offset by the vast resources needed to answer them. It only seems simple and convenient because the product is in its “build up interest” phase and running at a loss. If the effort to sell the product that way fails, it’s going to fund itself by harvesting data.

      • Gorilladrums@lemmy.world · ↑1 ↓1 · 7 hours ago

        I don’t disagree per se, but I think there’s a pretty big difference between people using ChatGPT to correct grammar or draft an email and people using it to generate a bunch of slop images/videos. The former is a more streamlined way to use the internet, which has value, while the latter is just there for the sake of it. I think it’s feasible for newer LLM designs to focus on what’s actually popular and useful, and cut out the fat that’s draining large amounts of resources for no good reason.

    • Reygle@lemmy.world · ↑5 ↓1 · 12 hours ago

      I’m enjoying how ludicrous the idea of a “privacy-friendly AI” is: trained on data stolen by inhaling everyone else’s content from the internet, but suddenly it cares about “your” data.

      • Gorilladrums@lemmy.world · ↑1 · 7 hours ago

        It’s not impossible. You could build a model based on consent, where the training data is obtained ethically, data collected from users is anonymized, and users can opt out if they want to. The current model of shameless theft isn’t the only path.

    • GarboDog@lemmy.world · ↑5 · 13 hours ago

      Maybe a personal LLM trained on data you actually already own, with self-sufficient infrastructure, sure. But visual-generation LLMs and data theft aren’t cool.

    • MBech@feddit.dk · ↑10 ↓1 · 15 hours ago

      I think you’re wildly overestimating how much people care about their personal data.

  • Electricd@lemmybefree.net · ↑7 · 20 hours ago

    Most objective article (sarcasm)

    In fact it has a whole-ass “AI” chatbot product, Duck.ai, which is bundled in with DuckDuckGo’s privacy VPN for $10 a month

  • Tyrq@lemmy.dbzer0.com · ↑34 ↓6 · 1 day ago

    I would like to petition to rename AI to

    Simulated
    Human
    Intelligence
    Technology

  • mechoman444@lemmy.world · ↑27 ↓5 · 1 day ago

    Okay, so that’s not what the article says. It says that 90% of respondents don’t want AI search.

    Moreover, the article goes into detail about how DuckDuckGo is still going to implement AI anyway.

    Seriously, titles in subs like this need better moderation.

    The title was clearly engineered to generate clicks and drive engagement. That is not how journalism should function.

    • zarkanian@sh.itjust.works · ↑1 · 7 hours ago

      Well, that’s how journalism has always functioned. People call it “clickbait” as if it’s something new, but headlines have always been designed to grab your attention and get you to read.

    • squaresinger@lemmy.world · ↑9 · 22 hours ago

      That is the title from the news article. It might not be how good journalism would work, but copying the title of the source is pretty standard in most news aggregator communities.

    • LobsterJim@slrpnk.net · ↑12 · 1 day ago

      Unless I’m mistaken this title is generated to match the title at the link. Are you saying the mods should update titles to accurately reflect the content of the articles posted?

        • Jason2357@lemmy.ca · ↑1 · 11 hours ago

          It has a separate LLM chat interface, and you can disable the AI summary that comes up on web search results.

  • dantheclamman@lemmy.world · ↑46 ↓4 · 2 days ago

    I think LLMs are fine for specific uses: a useful technology for brainstorming, debugging code, generic code examples, etc. People are just wary of oligarchs mandating how we use technology. We want to be customers, but they want to shape how we work instead, as if we were livestock.

    • Jason2357@lemmy.ca · ↑1 · 11 hours ago

      I am explicitly against the use case probably on many respondents’ minds: the “AI summary” that pops up above the links of a search result. It is a waste if I didn’t ask for it, it steals the information from those pages, damaging the whole web, and ultimately it gets the answer horribly wrong often enough to be dangerous.

    • NotMyOldRedditName@lemmy.world · ↑15 ↓1 · 2 days ago

      Right? Let me choose if and when I want to use it. Don’t shove it down our throats and then complain when we get upset or don’t use it how you want us to. We’ll use it however we want, not however you want.

      • NotMyOldRedditName@lemmy.world · ↑17 · 2 days ago

        I should further add: don’t fucking use it in places where it’s not capable of properly functioning and then try to deflect the blame from yourself onto the AI, like Air Canada did.

        https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

        When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is “responsible for its own actions”.

        Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.

        According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount. Instead, the airline said the chatbot was a “separate legal entity that is responsible for its own actions”. Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

        The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees

        • Regrettable_incident@lemmy.world · ↑6 · 1 day ago

          They were trying to argue that it was legally responsible for its own actions? Like, that it’s a person? And not even an employee at that? FFS

          • NotMyOldRedditName@lemmy.world · ↑8 · 1 day ago

            You just know they’re going to make a separate corporation, put the AI in it, and then contract it to themselves and try again.

        • NotAnonymousAtAll@feddit.org · ↑6 · 1 day ago

          ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees

          That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near just compensation for the stress and loss of time it likely caused.

          There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.

          • merc@sh.itjust.works · ↑6 · 1 day ago

            It’s a tiny amount, but it sets an important precedent. Not only Air Canada, but every company in Canada is now going to have to follow that precedent. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.

            It would have been a disaster to have any other ruling. It would have meant that the chatbot was now an accountability sink. No matter what the chatbot said, it would have been the chatbot’s fault. With this ruling, it’s the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they’re misled by the chatbot. That’s excellent for users, and also excellent to slow down chatbot adoption, because the company is now on the hook for its hallucinations, not the end-user.

        • lime!@feddit.nu · ↑3 · 1 day ago

          …what kind of brain damage did the rep have to think that was a viable defense? Surely their human customer-service personnel are also responsible for their own actions?

          • NotMyOldRedditName@lemmy.world · ↑2 · 1 day ago

            It makes sense for them to try; it’s just evil-company logic.

            If they lose, it’s some bad press, and people will forget.

            If they win, they’ve begun setting a precedent that lets them screw over their customers and earn more money. Even if it only had a 5% chance of success, it was probably worth it.

  • 58008@lemmy.world · ↑218 ↓1 · 2 days ago

    At least they have an AI-free option, as annoying as it is to have to opt into it.

    On a related note, it’s hilarious to me that the Ecosia search engine has AI built in. Like, I don’t think planting any number of trees is going to offset the damage AI has done and will do to the planet.

        • UnspecificGravity@piefed.social · ↑6 · 2 days ago

          I want to know what economic forces are making it such a foregone conclusion to have AI, which costs money and which very few users actually want. Who is paying them?

            • Jason2357@lemmy.ca · ↑1 · 11 hours ago

              All these MBAs who learned about first-mover advantage in school and have so little domain knowledge that they operate 100% on “we just can’t be late to the table”.

        • Leon@pawb.social · ↑2 ↓1 · 1 day ago

          Climate intelligence. Gods, excuse me while I go fetch my skeleton that was ejected from my body due to the cringe.

      • NewDay@piefed.social · ↑6 · 2 days ago

        Ecosia produces its own green solar energy. According to them, they produce twice as much as they consume. The AI is still shit, because it is just ChatGPT.

        • morto@piefed.social · ↑4 · 1 day ago

          Reducing the albedo of an area just to dissipate the captured energy on something of no utility (AI) is still harmful to the environment and contributes to Earth’s energy imbalance. Solar energy is great when it replaces fossil-fuel emissions, not when it’s just wasted.

        • Mwa@thelemmy.club · ↑2 · 2 days ago

          Hot take: this comment gives me an idea: an opt-in AI powered entirely by solar energy, if we solve the ethics problem first, of course.

      • Bio bronk@lemmy.world · ↑1 ↓1 · 1 day ago

        I don’t get this argument when literally everything else, like livestock and cars, is hundreds of times worse. Removing either one today would dramatically change the environment.

        Do you drive a car or take any kind of transportation?

    • Electricd@lemmybefree.net · ↑1 ↓2 · 20 hours ago

      Like, I don’t think planting any number of trees is going to offset the damage AI has done and will do to the planet.

      That’s true for pretty much everything, so it’s not a real argument.

    • Sockenklaus@sh.itjust.works · ↑11 ↓4 · 2 days ago

      Well, I don’t know about that.

      My Swiss hoster just started offering AI and says that their AI infrastructure is 100% powered by renewables and that the waste heat is used for district heating.

      You could argue that LLM training in itself used so much energy that you’ll never be able to compensate for the damage, but I don’t know. 🤷

      • PixxlMan@lemmy.world · ↑38 ↓2 · 2 days ago

        While that’s good, keep in mind that using renewables for this means that power can’t be used for other purposes, so the difference has to be covered by other energy sources. These things don’t exist in a vacuum: the resources they use always mean resources aren’t used elsewhere. At worst, new clean power is built to power a waste, and old dirty power has to be used for everything else instead of being replaced by clean energy.

        • MBM@lemmings.world · ↑4 · 2 days ago

          Yeah, that reminds me of the data centres hogging green energy that was meant for households.

        • Demdaru@lemmy.world · ↑4 ↓6 · 2 days ago

          On the other hand… the same private entity wouldn’t buy the means to produce renewable power if they didn’t want to power their AI center. So in the end, nothing changes, and the power couldn’t be used for other purposes because it simply wouldn’t be generated.

          However, since they did, and are using it to promote themselves, they are influencing others to adopt renewable-energy policies too, in however small a way.

          No, normally I am not that optimistic, but I am trying ^^"

  • Young_Gilgamesh@lemmy.world · ↑43 ↓1 · 2 days ago

    Google became crap ever since they added AI. Microsoft became crap ever since they added AI. OpenAI started losing money the moment they started working on AI. Coincidence? I think not!

    Rational people don’t want Abominable Intelligence anywhere near them.

    Personally, I don’t mind the AI overviews, but they shouldn’t show up every time you do a search. That’s just a waste of energy.

    • MrKoyun@lemmy.world · ↑1 · 16 hours ago

      You can choose how often you want the AI Overview to appear! It asks you in a small pop-up the first time you get one. I still think they should instead work on highlighting relevant text from a website, like Google used to do. It was so much better.

      • Young_Gilgamesh@lemmy.world · ↑1 · 14 hours ago

        I did not know that. I never noticed a pop-up. And does this work with both search engines? You can turn off the AI features on DuckDuckGo with like two clicks, but I can’t seem to find the option on Google.

        • MrKoyun@lemmy.world · ↑1 · 8 hours ago

          I was talking about DDG, because I thought you were talking about DDG in the last part. I don’t think you can turn off AI completely on Google.

    • MBech@feddit.dk · ↑42 · 2 days ago

      Google became crap about 10 years ago, when they added the product banner at the top and had the first 5-10 search results be promoted ads. Long before they ever considered adding AI.

      • merc@sh.itjust.works · ↑3 · 1 day ago

        Google became crap shortly after their company name became a synonym for online searches. When you don’t have competitors, you don’t have to work as hard to provide search results – especially if you’re actively paying Apple not to come up with their own search engine, Firefox to maintain Google as their default search engine, etc. IMO AI has been the shiny new thing they’re interested in as they continue to neglect search quality, but it wasn’t responsible for the decline of search quality.

      • parricc@lemmy.world · ↑7 · 2 days ago

        Time is sneaking up on us. It’s not even 10 years anymore. It’s closer to 20. 💀

      • Young_Gilgamesh@lemmy.world · ↑7 · 2 days ago

        I guess. And then they removed the “Don’t be evil” motto just to drive the point home.

        But you have to agree, the company DID become even worse once they started using AI.

        • MBech@feddit.dk · ↑2 · 2 days ago

          Oh, absolutely. It’s just important to remember that they’ve been horrible for a long time, and have shown more ads in a single search than your average 30-minute YouTube video.

    • Spaniard@lemmy.world · ↑7 · 2 days ago

      Google and Microsoft were crap before AI. I don’t remember when Google removed “don’t be evil”, but at that point they had already been crap for a few years.

    • fleton@lemmy.world · ↑7 · 2 days ago

      Yeah, Google kinda started sucking a few years before AI went mainstream; the search results took a dive in quality, and garbage had already started floating to the top.

      • Reygle@lemmy.world · ↑12 · 2 days ago

        I mind them. Nobody at my workplace scrolls beyond the AI overview, and every single one of the overviews they quote to me about technical issues is wrong, 100%. Not even an occasional “lucky guess”.

      • Balinares@pawb.social · ↑50 ↓6 · edited · 2 days ago

        I mean, the poll was like as not a publicity stunt to draw attention to the fact that DDG is not doing AI. All the same, the fact that they are making “no AI” a selling point is noteworthy.

        EDIT: I stand corrected; apparently DDG does do AI presently. Hopefully they’re serious about reconsidering that, then.

      • Jännät@sopuli.xyz · ↑2 · 2 days ago

        It’ll be a cold day in hell when AI-horny suits learn the lesson that people don’t want AI

    • Logi@lemmy.world · ↑1 · 1 day ago

      I like kagi’s approach of generating an AI overview if you end your query with a question mark. Is this a search or a question?
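That dispatch rule is simple enough to sketch. The function and route labels below are made up for illustration (this is not Kagi's actual code); it just shows the "question mark routes to the AI path" idea:

```python
# Hypothetical query router: queries ending in "?" go to an AI-overview
# path, everything else to plain web search.
def route_query(query: str) -> str:
    """Return which backend a query should be sent to."""
    q = query.strip()
    return "ai_overview" if q.endswith("?") else "web_search"

print(route_query("rust borrow checker"))        # web_search
print(route_query("how long do I boil pasta?"))  # ai_overview
```

The nice property of a rule like this is that it is opt-in per query: the user signals intent with punctuation instead of toggling a global setting.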

      • Endymion_Mallorn@kbin.melroy.org · ↑2 ↓1 · 22 hours ago

        I’m not a fan of Kagi’s approach to anything, frankly. The fact that they’re charging money for a search engine, and using an LLM? Hard pass.

          • Endymion_Mallorn@kbin.melroy.org · ↑1 · 15 hours ago

            I don’t have a problem with them selling ads. I have a problem with the insidious tracking that has become a part of those ads. Mojeek sells ads but doesn’t track you everywhere. Also, there’s SearX instances, which typically don’t sell ads or data.

            They also don’t use LLMs. If you think Kagi isn’t giving away or selling your data, think again, because the LLM is doing it as a core function.

    • Egonallanon@feddit.uk · ↑11 ↓1 · 2 days ago

      You can turn all the AI features off in the regular DDG search settings. Best I can tell, that achieves the same as using the no-AI filter.

      • Seleni@lemmy.world · ↑2 · 2 days ago

        Except it’s not very good. I turn it off and still get AI pictures and videos, and it gets rid of some pictures I know aren’t AI.

        • teft@piefed.social · ↑4 · 2 days ago

          Just edit the DDG entry instead of adding a new one. It’s super easy to change the URL for the search in settings.
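For anyone unsure what "the URL for the search" means: browser search-engine entries are just URL templates in which a placeholder (usually `%s`) is replaced by your query. A DuckDuckGo entry looks something like the line below; any extra settings parameters DDG supports beyond `q=` should be checked against DDG's own settings documentation rather than taken from here.

```
https://duckduckgo.com/?q=%s
```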

          • timbuck2themoon@sh.itjust.works · ↑1 · 2 days ago

            I don’t know how to do that. I can’t edit the standard DDG entry on desktop at all, and I didn’t see anything at first glance in about:config either.

            It’s easier for me to just add a new engine.

    • NotSteve_@piefed.ca · ↑2 · 2 days ago

      I’m pretty sure it asks you how often you want to see the AI overview, doesn’t it? Can’t you just click “never”?