• pixxelkick@lemmy.world
    11 hours ago

    You’ll be the 4753rd guy with the “oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check” story.

    Read what I wrote.

    It’s not a matter of “rules” it “obeys”.

    It’s a matter of it literally not even having access to do such things.

    This is what I’m talking about. People are complaining about issues that were solved a long time ago.

    People are running into issues that were solved long ago because they are too lazy to use the solutions to those issues.

    We now live in a world with plenty of PPE in construction, and people are out here raw-dogging tools without any modern protection and being ShockedPikachuFace when it fails.

    The approach of “I’m gonna tell the LLM not to do stuff in a markdown file” is tech from like 2 years ago.

    People still do that. Stupid people who deserve to have it blow up in their face.

    Use proper tools. Use MCP. Use a sandbox environment. Use whitelisted, opt-in tooling.

    Agents shouldn’t even have the ability to do damaging actions in the first place.

    • [unknown user]@lemmy.world
      11 hours ago

      Ah yes, lovely MCP. Lovely Anthropic MCP. Make sure you give Anthropic lots of money and use their tools, and then you’ll be completely safe plugging the output of the LLM into the OS. Definitely fine, yes.

      I bet your contract with them says they’re not liable for shit their LLM does to your files, your environment, or your repositories, MCP or no MCP.

      Fool.

      • pixxelkick@lemmy.world
        9 minutes ago

        “Lovely Anthropic MCP. Make sure you give Anthropic lots of money and use their tools”

        It’s becoming clear you have no clue wtf you are talking about.

        Model Context Protocol is a protocol, like HTTP or JSON.

        It’s just an open format for data that anyone can use. Models are trained to be able to invoke MCP tools to perform actions, and anyone can make their own MCP tools; it’s incredibly simple and easy. I have a pretty powerful one I personally maintain myself.
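        Concretely, MCP messages are JSON-RPC 2.0: the model asks the host to run a named tool, and the host decides whether to actually execute it. A minimal sketch of what a tool call looks like on the wire (the tool name and argument are made up for illustration):

        ```shell
        # Build a hypothetical MCP "tools/call" request; "read_file" and its
        # "path" argument are illustrative, not a real registered tool.
        msg='{
          "jsonrpc": "2.0",
          "id": 1,
          "method": "tools/call",
          "params": {
            "name": "read_file",
            "arguments": { "path": "README.md" }
          }
        }'
        # Sanity-check that the message is well-formed JSON.
        echo "$msg" | python3 -m json.tool > /dev/null && echo "well-formed tools/call message"
        ```

        The point is that the host process, not the model, owns the tool list: if a tool isn’t registered, the model has nothing to call.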

        Anthropic doesn’t make any money off me. In fact, I don’t use any of their shit directly, except maybe through whatever licensing fees Microsoft pays them to use Claude Sonnet; Microsoft Copilot is my preferred service overall.

        “I bet your contract with them says they’re not liable for shit their LLM does to your files”

        Setting aside the fact that I don’t even use Anthropic’s tools, my Copilot LLMs don’t have access to my files either. Full stop.

        The only context in which they do have access to files is inside the aforementioned Docker-based sandbox I run them in, which is an ephemeral, immutable system. They can do whatever the fuck they want inside it, because even if they manage to delete /var/lib or whatever, I click one button to reboot it and reset it back to a working state.
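        That kind of throwaway container can be sketched in one command (the image name is hypothetical; the flags are standard Docker):

        ```shell
        # Hypothetical launcher for an ephemeral, immutable agent container:
        #   --rm           delete the container on exit (ephemeral)
        #   --read-only    mount the root filesystem read-only (immutable)
        #   --tmpfs        the only writable path, wiped when the container dies
        #   --network none no network access at all
        run_agent_sandbox() {
          docker run --rm --read-only \
            --tmpfs /workspace:rw,size=512m \
            --network none \
            my-agent-sandbox:latest "$@"
        }
        echo "sandbox launcher defined"
        ```

        Resetting “back to a working state” is then just letting the container exit and starting a new one.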

        The workspace directory they have access to has read-only git access, so they can pull and do work, but they literally don’t even have the ability to push. All they can do is pull in the stuff to work on and work on it.

        After they finish, I review the changes they made, and only I, the human, have the ability to accept or deny what they have done, and then actually push it myself.
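        One simple way to get that pull-only behavior (the remote URL here is a placeholder) is to point the remote’s push URL at a bogus destination, so any push attempt fails immediately:

        ```shell
        # Set up a checkout that can fetch but never push.
        git init -q agent-workspace && cd agent-workspace
        git remote add origin https://example.com/some/repo.git
        # Replace the push URL with a junk value; fetch still works,
        # but "git push" now has nowhere valid to go.
        git remote set-url --push origin DISABLED
        git remote get-url --push origin   # prints: DISABLED
        ```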

        This is all basic shit using tools that have existed for a long time, some of which are core principles of Linux and have existed for decades.

        Doing this isn’t that hard; it’s just that a lot of people are:

        1. Stupid
        2. Lazy
        3. Scared of Linux

        The concept of “make a Docker image that runs an ‘agent’ user in a very low-privilege env with write access only to its home directory” isn’t even that hard.
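        As a sketch (the base image and username are illustrative, not anyone’s actual setup), the whole idea fits in a four-line Dockerfile:

        ```shell
        # Write out a minimal Dockerfile: a non-root "agent" user whose
        # home directory is the only place it can write.
        printf '%s\n' \
          'FROM debian:stable-slim' \
          'RUN useradd --create-home --shell /bin/bash agent' \
          'USER agent' \
          'WORKDIR /home/agent' > Dockerfile.agent
        echo "Dockerfile.agent written"
        ```

        Combined with a read-only root filesystem at run time, the agent user can write to /home/agent and nothing else.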

        It took me all of 2 days to get it set up personally, from scratch.

        But now my sandbox literally doesn’t even expose the ability to do damage to the LLM; it doesn’t even have access to those commands.

        Let me make this abundantly clear if you can’t wrap your head around it:

        The LLM agents I run don’t even have the damaging executable commands exposed to them to invoke. They literally don’t even have the ability to do it, full stop.
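        The whitelist idea from earlier is the same thing in miniature (the allowed commands here are arbitrary examples): expose an explicit list of commands and refuse everything else by default.

        ```shell
        # Toy opt-in command whitelist: only names on the list run at all.
        allowed="ls cat grep"
        run_tool() {
          case " $allowed " in
            *" $1 "*) "$@" ;;                      # whitelisted: execute it
            *) echo "denied: $1" >&2; return 1 ;;  # everything else: refuse
          esac
        }
        run_tool rm -rf /tmp/x 2>/dev/null || true  # rm is not on the list
        run_tool ls / > /dev/null && echo "ls allowed"
        ```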

        And it wasn’t even that hard to do.