It’s not AI winter just yet, though there is a distinct chill in the air. Meta is shaking up and downsizing its artificial intelligence division. A new report out of MIT finds that 95 percent of companies’ generative AI programs have failed to earn any profit whatsoever. Tech stocks tanked Tuesday, amid broader fears that […]
Does AI come up negative for most users? Certainly here on Lemmy, yes. But out there I see/hear people using it (for dumb shit, mind you) all the time and being happy about it.
This is only one study, but I saw an article a few months ago talking about a study by a major phone company that found that the vast majority of people (80% or more IIRC) either didn’t care about AI features on their phones or actively disliked them.
I think most people don’t really care one way or another but hate that it’s being shoved into everything, and those who know the stats on how often it’s wrong are a lot more likely to actively dislike it and be vocal about their dislike.
That sounds quite possible: AI features on phones/OSs go mostly unused (according to my own study, which has a sample size of who-the-hell-knows and a methodology of "I feel").
But LLMs, I think, although they're burning money, are pretty well accepted by the people who touch them, people who either don't understand what is actually going on or don't care that the thing is often wrong.
I sometimes use LLMs, but only to burn through monkey work that I can quickly and easily review, and redo myself if the result is too shitty. That is the full extent of my AI use.
A lot of people are fine with getting wrong answers about shit they don't already know. That's what gets spread on social media, that's what made up a large portion of the training data, and that's what's available when AI does a web search.
It presents something that looks right, and that is what most people care about.
If I remember correctly, the Intel floating-point bug didn't come up as a negative for most users the way AI does.