I dunno about that… Very small models (2-8B), sure, but if you want more than a handful of tokens per second on a large model (R1 is 671B) you’re looking at some very expensive hardware that also comes with a power bill.
Even a 20-70B model needs a big chunky new graphics card, or something fancy like those new AMD Ryzen AI Max chips, plus a crapload of RAM.
Granted you don’t need a whole datacenter, but the price is far from zero.
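To put rough numbers on it, here’s a back-of-the-envelope sketch of the memory needed just to hold the weights at different precisions. This is a simplification I’m adding for illustration: it ignores KV cache, activations, and runtime overhead, which all add more on top.

```python
# Rough estimate: memory to hold model weights alone, ignoring
# KV cache, activations, and runtime overhead.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# fp16 is 2 bytes per parameter; 4-bit quantization is ~0.5 bytes.
for size in (8, 70, 671):
    fp16 = weight_memory_gb(size, 2)
    q4 = weight_memory_gb(size, 0.5)
    print(f"{size}B model: ~{fp16:.0f} GB fp16, ~{q4:.1f} GB at 4-bit")
```

Even heavily quantized, a 671B model wants hundreds of gigabytes of fast memory, which is why the hardware bill climbs so quickly past the hobbyist range.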
Yeah, I think quite a lot of people on Lemmy have similar social media habits (or lack thereof) to some degree. We also tend to associate with other people like us. People in tech especially tend to talk to other tech people, or to friends and family of tech people, which is a limited demographic.
It’s a very different perspective from most people’s. The average person on the train has vastly different media consumption and likely very different opinions.
There are a lot of people who consult LLMs in most aspects of their lives.