

The only person in Washington talking sense about this is… Wait, let me double-check. Uh, yeah, it’s Rand Paul.
I don’t read my replies
The contracts that steal music from artists haven’t changed one iota. Unless you’ve got juice like Paul McCartney, Beyoncé, or Taylor Swift, and even then it can be a fight that takes years.
A long time ago, you could go to a special store and trade government paper for music discs and tapes that you got to keep forever.
This is not AI failing to do an easy job. This is “unskilled” labor doing complex and demanding work that cannot be duplicated by trillion-dollar software.
I fucking love this because it leaves everybody with one of two conclusions. One, AI isn’t capable of doing the simplest of jobs. Or two, working a drive-thru is actually quite complex and difficult, and humans who master it are more capable than trillion-dollar software.
Wow, AI researchers are not only adopting philosophy jargon, but they’re starting to cover some familiar territory: the difference between signifier (language) and signified (reality).
The problem is that spoken language is vague, colloquial, and subjective. Therefore, spoken language can never produce something specific, universal, or objective.
“When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts,” the blog post notes. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
See? Even the people who make AI don’t trust it with important decisions. And the “trained” humans don’t even see it if the AI doesn’t flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.
This is exactly the use case for an LLM, and even OpenAI can’t make it work.
The American Psychological Association met with the FTC in February to urge regulators to address the use of AI chatbots as unlicensed therapists.
Protect our revenue, er, patients!
found no evidence to date that Microsoft’s Azure and AI technologies have been used to target or harm people in the conflict in Gaza.
-Microsoft, in May
Dear Microsoft, if you had to go looking for evidence, that implies your software could totally be used to harm people; it just isn’t in this case. As far as you know.
Wow, Republican delusion lines up perfectly with big-business interests? Weird.
Also weird that the “extreme” climate agenda has been calling for strengthening the power grid for decades.
I was able to roll back this update. But my computer is still running Windows. Help!
I think AI power usage has an upside. No amount of hype can pay the light bill.
AI is either going to be the most valuable tech in history, or it’s going to be a giant pile of ash that used to be venture capital.
“I think we will see computing become more ambient, more pervasive, continue to span form factors, and certainly become more multi-modal in the arc of time … I think experience diversity is the next space where we will continue to see voice becoming more important. Fundamentally, the concept that your computer can actually look at your screen and is context aware is going to become an important modality for us going forward.”
You could fertilize 200 acres with that much bullshit. Truly a crime against the English language.
That website looks like it was designed on GeoCities.
The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they’re discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects they’re not familiar with. Especially on subjects that are at the edges of, or even outside, human knowledge.
Sure, I don’t listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
Humanoid robots belong in the trash (1:04:18)
This is not about stopping bot-scrapers; it’s about charging them.
It’s a tool that has gone from interesting (GPT-3) to terrifying (Veo 3).
It’s ironic that you describe your impression of LLMs in emotional terms.
747 paying off.