https://www.wheresyoured.at/the-men-who-killed-google/
I’m not a huge fan of Ed Zitron generally; he leans towards the histrionic too much for my taste, but he makes a compelling case here.
And unless you are Stephen King or the like, exactly how are you going to get the publishing cartel (I think they’re consolidated down to 3-4 publishers now) to change their contract to not include this? Their response will almost certainly be either “that’s non-negotiable” or “OK, then you get half as much money”.
He could see AI being used more immediately to address certain “low-hanging fruit,” such as checking for application completeness. “Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he says. More sophisticated uses would need to be developed, tested, and proved out.
Oh no, the dystopian horror…
It’s a shit article, with TechCrunch changing the words to get people in a flap about AI (for or against); the actual quote is:
“I’d say maybe 20 percent, 30 percent of the code that is inside of our repos today and some of our projects are probably all written by software”
“Written by software” reasonably includes machine-refactored code, automatically generated boilerplate, and things generated by AI assistants. Through that lens, 20% doesn’t seem crazy.
Git is, but it has no process of discovery or hosting by itself. Those are needed to efficiently share open source software with large numbers of people.
I don’t think that’s really a fair comparison: babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves before having heard of them.
Probably, given that LLMs only exist in the domain of language. Still, it’s interesting that they seem to have a “conceptual” system that is commonly shared between languages.
Compared to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used, except to make sure I’m following the rules of grammar.
Interesting that…
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
So by going harder on blocking content than China? Because that’s what they do, but most of the big providers get through after a day or two of downtime each time the government makes a change to block them.
It would be interesting to give these scores a bit of context: what level would a random person off the street, a history undergrad, and a history professor score?
Civil cases of copyright infringement are not theft, no matter what the MPAA have trained you to believe.