• boonhet@sopuli.xyz · 1 day ago

    Yup, you want memory that's accessible to the GPU for local AI. AMD Strix Point APUs and Apple Silicon Macs (both with unified memory) are popular options. A CPU can run LLMs, but very slowly. I've got 32 GB of RAM and 8 GB of VRAM, and it's borderline useless for models that don't fit entirely in VRAM.
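
    Since it comes up a lot: whether a model "fits" is mostly arithmetic. Here's a rough back-of-envelope sketch in Python, assuming ~4.5 bits per weight for a typical Q4-style quant and a flat overhead figure for KV cache and runtime; both numbers are illustrative assumptions, not exact figures for any specific model:

    ```python
    # Back-of-envelope check: does a quantized model fit in VRAM?
    # Quant size and overhead are assumptions for illustration only.

    def model_size_gb(params_billions: float, bits_per_weight: float,
                      overhead_gb: float = 1.5) -> float:
        """Approximate footprint: weights plus KV cache / runtime overhead."""
        weights_gb = params_billions * bits_per_weight / 8  # 1e9 params * bits -> GB
        return weights_gb + overhead_gb

    vram_gb = 8  # e.g. the 8 GB card mentioned above

    for name, params, bits in [
        ("7B @ Q4", 7, 4.5),   # ~4.5 bits/weight is typical for Q4-style quants
        ("13B @ Q4", 13, 4.5),
        ("70B @ Q4", 70, 4.5),
    ]:
        need = model_size_gb(params, bits)
        fits = "fits in VRAM" if need <= vram_gb else "spills to system RAM (slow)"
        print(f"{name}: ~{need:.1f} GB -> {fits}")
    ```

    On 8 GB of VRAM that works out to roughly: a 7B Q4 model (~5.4 GB) fits, while 13B (~8.8 GB) and anything larger spill into system RAM, which is exactly where the speed falls off a cliff.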