

The current US administration stopped funding it as part of their slide towards corporate-driven dystopia, I believe. Tor itself is still out there, just a little more strapped for cash than it used to be.
Providing a package, if he did so, was his choice. No one at the distro asked him to (some users may have, but that has nothing to do with the distro or its other users). If you provide the package of your own volition, you should expect that there will be complaints if it doesn’t work as expected. You need a procedure (and a certain amount of saved-up mental fortitude) to deal with them.
If someone complains to you about someone else’s buggered-up packaging job, the correct thing to do is have a prewritten reply set up saying, “Nothing to do with me, complain to the other guy.” Then close the bugs as WONTFIX and get on with your life. And see if the package host has a removal policy for broken packages, if it is genuinely broken and not just clueless users messing up.
To me, this specific case seems like the dev wasn’t prepared for what the open Internet is like, couldn’t handle it, and imploded messily. Are the users that got on his nerves at fault? Yes, on one level, but their existence was also entirely predictable. If you know what you’re doing, you factor the existence of these people in when you decide whether you’re willing to release your software to the public or not and what communication channels you should leave open.
I don’t think you quite understand how this works. No distro ever asks third-party programmers to create packages for them—that’s the job of the distro’s own team, or of enthusiasts using the distro. All the distro packagers want or need from the original programmer is the source code and enough documentation to get it to compile. They take it from there.
I admit, my information on what teens use for barter is even more out-of-date than yours (by about a decade, based on when MySpace was popular).
I’m not sure whether the readership for this article is primarily British (“crisps”) or primarily North American (“chips”), so I compromised. 🤷
Depends on what you mean by “kids”. Elementary schoolers, no, but some teens are willing to do a surprising amount of work to accomplish something if it’s important enough to them. And then they pass their method along to their friends, or offer to set up anyone in the school for the price of a couple of bags of snack food.
It isn’t quite as crazy as it sounds when you consider that a lot of inscription texts are pretty formulaic—epitaphs, dedications, and such. Plus, we have plenty of surviving writings in classical Latin, so we know the grammar pretty well. Given those things, I’d expect an AI trained on the corpus of inscription texts that have survived without significant damage to be able to make reasonable suggestions about formulaic texts.
Really, when you think about it, a trained human presented with a damaged inscription text won’t be doing anything much different from what an LLM would do: they’ll try to fill in the text with the most likely words based on any remaining traces of letters, and their knowledge of other, similar texts. The problem is getting the LLM to communicate its level of certainty about the fill-ins it’s offering.
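To make the "most likely words, with a certainty level" idea concrete, here's a toy sketch in Python. It is not a real model—just a bigram-frequency fill-in over a tiny invented corpus of formulaic epitaph-style phrases—but it shows how a count-based probability can double as the certainty score the comment above is asking for.

```python
from collections import Counter

# Tiny invented corpus of formulaic epitaph-style openings (illustrative only).
corpus = [
    "dis manibus sacrum",
    "dis manibus sacrum",
    "dis manibus suis",
    "hic iacet corpus",
    "hic iacet sepultus",
]

def fill_gap(damaged):
    """Fill a single '?' slot using the surviving left/right context.

    Returns (best_word, probability); the probability doubles as a
    crude certainty score for the proposed restoration.
    """
    words = damaged.split()
    i = words.index("?")
    candidates = Counter()
    for line in corpus:
        toks = line.split()
        for j, tok in enumerate(toks):
            # A token is a candidate if its neighbors match the gap's neighbors.
            left_ok = i == 0 or (j > 0 and toks[j - 1] == words[i - 1])
            right_ok = i == len(words) - 1 or (
                j < len(toks) - 1 and toks[j + 1] == words[i + 1])
            if left_ok and right_ok:
                candidates[tok] += 1
    best, n = candidates.most_common(1)[0]
    return best, n / sum(candidates.values())

print(fill_gap("dis ? sacrum"))   # unambiguous in this corpus: ('manibus', 1.0)
print(fill_gap("dis manibus ?"))  # ambiguous: 'sacrum' wins, but with prob < 1
```

The second call is the interesting one: the corpus contains two plausible endings, so the "restoration" comes back with a probability of about 0.67 rather than 1.0—exactly the kind of honesty about uncertainty you'd want an LLM to provide.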
It’s a problem with the internal representation of a C/C++ type alias called time_t, mostly. That’s the thing that holds the number of seconds elapsed since midnight UTC on Jan. 1, 1970, which is the most common low-level representation of date and time on computers. In theory, time_t could point to a 32-bit type even on a 64-bit system, but I don’t think anyone’s actually dumb enough to do that. It affects more than C/C++ code because most programming languages end up calling C libraries once you go down enough levels.
In other words, there’s no way you can tell whether a given application is affected or not unless you’re aware of the code details, regardless of the bitness of the program and the processor it’s running on.
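For the curious, the failure mode is easy to demonstrate. This Python sketch simulates what a signed 32-bit time_t does (Python integers don't overflow, so the wraparound is modeled explicitly): the counter tops out at 03:14:07 UTC on January 19, 2038, and one second later wraps around to December 1901.

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # last second a signed 32-bit time_t can represent

def wrap32(seconds):
    """Simulate signed 32-bit integer overflow (what a 32-bit time_t does)."""
    return (seconds + 2**31) % 2**32 - 2**31

# The last representable moment for a signed 32-bit time_t.
last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last)      # 2038-01-19 03:14:07+00:00

# One second later, the counter wraps to the most negative 32-bit value,
# which decodes as a date before the epoch.
one_past = datetime.fromtimestamp(wrap32(INT32_MAX + 1), tz=timezone.utc)
print(one_past)  # 1901-12-13 20:45:52+00:00
```

On a system with a 64-bit time_t the same arithmetic just keeps counting, which is why the bug hides in the code details rather than in the hardware.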
If the bots are required to have paid transit passes and if they’re confined to off-peak hours when the subways aren’t full anyway, this could actually be a net win for mass transit: they’re putting money into the system while consuming relatively few resources, so the bots can fund improvements that benefit humans.
These AI dev tools absolutely have a direct negative impact on developer productivity, but they also have an indirect impact where non-devs use them and pass their eldritch abominations to the actual devs to fix, extend, and maintain.
Sounds like the next evolution of the Excel spreadsheet macro. Or maybe it’s convergent evolution toward the same niche. (I still have nightmares about Excel spreadsheet macros.)
Why do I have a feeling that a handful of people are going to suddenly become n-tuplets?
He’s probably talking about shareholder “value”, AKA inflated stock prices, rather than actual value.