• In your Gmail app, go to Settings.
• Select your Gmail address.
• Clear the Smart features checkbox.
• Go to Google Workspace smart features.
• Clear the checkboxes for "Smart features in Google Workspace" and "Smart features in other Google products".
• If you have more Gmail accounts, repeat these steps for each one.
• Note: turning off Gemini in Gmail also disables basic, long-standing features like spellchecking, which predate AI assistants. This design choice discourages opting out and suggests how valuable your AI-processed data is to Google.

This has finally gotten me to take steps to de-Google my email; a Fastmail trial is underway.

    • timestatic@feddit.org · 15 hours ago

      No one is forcing you to use it. Having a European AI like Lumo that encrypts transcripts is great in comparison to shady big tech companies. Yes, I know the full context is sent each time the AI generates something. But still, I’m happy they offer it.

    • artyom@piefed.social · 2 days ago

      I don’t necessarily have a problem with offering AI, especially in actually useful contexts. I have a problem with it being forced on me in unwanted ones.

    • Holytimes@sh.itjust.works · 2 days ago

      Yeah, but Lumo is basically just a side gimmick that isn’t integrated with the rest of their suite.

      It’s basically the equivalent of a self-hosted small LLM that you don’t have to fuck around with setting up.

      There’s nothing inherently wrong with LLMs as a tool. The problem is the misuse, misapplication, and over-scaling of them.

      If they were all just one-off tools like Lumo, basically slightly more advanced digital assistants, they would be fine. LLMs are fantastic for quickly searching shit with crap discoverability, for example. They’re routinely more effective at finding random useful results in, say, Reddit, Stack Overflow, or even some weird forum on the 12th page of Google results.

      • hperrin@lemmy.ca · 2 days ago

        I mean, I get that, but why is Proton offering one? What value do I get from Proton’s LLM that I wouldn’t get from any other company’s LLM? It’s not privacy, because it’s not end-to-end encrypted. It’s not features, because it’s just a fine-tuned version of the free Mistral model (from what I can tell). It’s not integration (thank goodness), because they don’t have access to your data to integrate it with (according to their privacy policy).

        I kind of just hate the idea that every tech company is offering an LLM service now. Proton is an email and VPN company. Those things make sense. The calendar and drive stuff too. They have actual selling points that differentiate them from other offerings. But investing engineering time and talent into yet another LLM, especially one that’s worse than the competition, just seems like a waste to me. And especially since it’s not something that fits into their other product offerings.

        It truly seems like they just wanted to have something AI related so they wouldn’t be “left behind” in case the hype wasn’t a bubble. I don’t like it when companies do that. It makes me think they don’t really have a clear direction.

        Edit: it looks like they use several models, not just one:

        Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, GPT-OSS 120B, Qwen, Ernie 4.5 VL 28B, Apertus, and Kimi K2.

        - https://proton.me/support/lumo-privacy

        I have a laptop with 48GB of VRAM (a Framework with integrated Radeon graphics) that can run all of those models locally, so Proton offers even less value for someone in my position.

        • SuspciousCarrot78@lemmy.world · 19 hours ago

          Ah, as I recall, it’s because they polled users and there was an overwhelming “yes please”, based on Proton’s privacy stance.

          Given Proton is hosted in the EU, they’re likely quite serious about GDPR and zero data retention.

          Lumo is interesting, architecturally I mean, as an LLM enjoyer. I played around with it a bit and stole a few ideas from them when I jury-rigged my own system. Having said that, you could get a ton more with $10 on OpenRouter. Hell, the free models on there are better than Lumo, and you can choose to only use privacy-respecting providers.
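          For reference, OpenRouter exposes an OpenAI-compatible chat completions endpoint, and its provider routing options are meant to let you restrict requests to providers that don’t retain prompts. A minimal sketch; the model slug and the `provider.data_collection` field are assumptions that should be checked against OpenRouter’s current docs:

```python
import json

def build_chat_request(model: str, user_msg: str) -> dict:
    """Build an OpenAI-style chat payload for OpenRouter.

    The `provider.data_collection` field asks OpenRouter to route only to
    providers that do not store prompts (verify the exact field name
    against OpenRouter's provider-routing documentation).
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "provider": {"data_collection": "deny"},  # privacy-respecting providers only
    }

# Model slug is a placeholder example, not a recommendation.
payload = build_chat_request("mistralai/mistral-nemo", "Hello!")
print(json.dumps(payload, indent=2))

# To actually send it (requires an API key from openrouter.ai):
#   import urllib.request
#   req = urllib.request.Request(
#       "https://openrouter.ai/api/v1/chat/completions",
#       data=json.dumps(payload).encode(),
#       headers={"Authorization": "Bearer <YOUR_KEY>",
#                "Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

          The request body is plain OpenAI chat format, so any OpenAI-compatible client library should work by pointing its base URL at OpenRouter.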

          • hperrin@lemmy.ca · 18 hours ago

            I played around with it a lot yesterday, giving it documentation and asking it to write some code based on the API documentation. Just like every other LLM I’ve tried, it bungled the entire thing. It made up a bunch of functions and syntax that simply don’t exist. After I told it the code was wrong and gave it the right way to do it, it told me that I got it wrong and converted the code back to the incorrect syntax. LLMs are interesting toys, but they shouldn’t be used for real work.

            • SuspciousCarrot78@lemmy.world · 12 hours ago

              Yeah. I had ChatGPT (more than once) take the code it was given, cut it in half, scramble it, and then claim, “see? I did it! Code works now”.

              When you point out what it did by pasting its own code back in, it will say, “oh, why did you do that? There’s a mistake in your code at XYZ”. No… there’s a mistake in your code, buddy.

              When you paste in what you want it to add, it “fixes” XYZ and… surprise, surprise… it’s either your OG code or more breakage.

              The only one I’ve seen that doesn’t do this (or does it a lot less) is Claude.

              I think Lumo for the most part is really just Mistral, Nemotron, and OpenHands in a trench coat. ICBW.

              I think Lumo’s value proposition is around data retention and privacy, not SOTA LLM tech.

              • hperrin@lemmy.ca · 14 hours ago

                Feel free to try. Here’s the library I use: https://nymph.io/

                It’s open source, and all the docs and code are available at that link and on GitHub. I always ask it to make a note entity, which is just incredibly simple. Basically the same thing as the ToDo example.

                The reason I use this library (other than that I wrote it, so I know it really well) is that it isn’t widely known and there aren’t many example projects for it on GitHub, so the LLM has to actually read and understand the docs and code in order to use it properly. For something like React, there are a million examples online, so for basic things the LLM isn’t really understanding anything; it’s just producing something similar to its training data. That’s not how actual high-level programming works, so making an LLM follow an API it isn’t already trained on is a good way to test whether it is anywhere near the abilities of an actual entry-level SWE.

                I just tested it again and it made 9 mistakes. I had to explain each mistake and what it should be before it finally gave me code that would work. It’s not good code, but it would at least work. It would make a mistake, I would tell it how to fix it, then it would make a new mistake. And keep in mind, this was for a very simple entity definition.

          • hperrin@lemmy.ca · 24 hours ago

            It’s integrated graphics, so it uses up to half of the system RAM. I have 96GB of system RAM, so 48GB of VRAM. I bought it last year before the insane price hikes, when it was within reach for normal people like me.

            I’ve tried it and it works. I can load huge models, bigger than 48GB even. The ones bigger than 48GB run really slowly, though, like one token per second. But the ones that fit in the 48GB are pretty decent: around 6 tokens per second for the big models, if I’m remembering correctly. Obviously, something like an 8B-parameter model would be way faster, but I don’t really have a use for those kinds of models.
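            As a rough sanity check on those numbers: a model’s weight footprint is roughly parameter count × bytes per parameter, so at 4-bit quantization a 70B model needs around 42 GB once you pad for KV cache and runtime buffers, while a 120B model spills past 48 GB into slow system-RAM territory. A back-of-the-envelope sketch (the 20% overhead factor is a rough assumption, not a measured number):

```python
def weight_footprint_gb(params_billions: float, bits_per_param: float,
                        overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold model weights: params * bytes-per-param,
    padded by ~20% for KV cache and runtime buffers (rough assumption)."""
    weight_bytes = params_billions * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9  # decimal GB

VRAM_GB = 48  # the commenter's available VRAM

for size in (8, 32, 70, 120):
    gb = weight_footprint_gb(size, bits_per_param=4)
    verdict = "fits" if gb <= VRAM_GB else "spills to system RAM (slow)"
    print(f"{size:>4}B @ 4-bit ~ {gb:5.1f} GB -> {verdict}")
```

            This lines up with the comment: 4-bit 70B-class models fit in 48 GB and run at usable speed, while anything past ~100B parameters has to page out and crawls.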

    • Broken@lemmy.ml · 2 days ago

      It’s good to clarify that it’s not end-to-end encrypted like their email, because that isn’t clear from their marketing wording. It’s very easy to presume “encrypted” means the same encryption process they’re known for with their email.

      The flip side of that coin is that it’s a separate tool you don’t have to use. You can choose to use as many or as few of their products as you wish (it’s not forced on you).

      It’s also a plus that there is SOME encryption and an attempt at privacy versus every other alternative besides self-hosting.

      I’ve personally found Lumo to be very useful in troubleshooting computer issues I’m unfamiliar with. I’ve learned a lot from using it, and the research was faster than scouring forums myself, with everything presented in a single pane. It’s just a tool, similar to a web browser. I choose a browser that helps me be private and I choose an AI tool that does the same, but I don’t expect either to actually keep me private.