

Easy: Altman used ChatGPT to come up with his numbers.


And, tbh, it doesn’t even matter in this case that this was AI generated. If anyone snuck any worthless piece of crap painting into a museum without permission, it would be removed. AI only contributed by making it even more low-effort.


That’s the thing with science communication. It barely exists.
There is a bogus theory. Nobody tries replicating it for decades because there’s no fame in replication. Then someone finally does and disproves the theory. If the author is lucky, it gets published on the last pages of some low-tier journal, because there’s even less fame in a failed replication. But the general public doesn’t read journals. They don’t even read science journalism. They might read a short note in a daily newspaper, twisted beyond recognition by an underpaid, overworked journalist who didn’t understand a word of the article they read in some pop-science magazine.
Science doesn’t reach the general public, and if it does against all odds, it’s so twisted and corrupted that it frequently says the opposite of what the original paper said.
People do their general education in school, and once they leave they stop learning general topics.


Revenue and profit are different things. In times of low interest rates it’s growth at all costs. Investors love market share and growth, because they expect to make money when they sell their shares. That’s risky, but with low, no, or even negative interest it’s still worth the risk.
When interest rates go up, parking money in safe, interest-bearing investments becomes more attractive, so to compete, companies also need to lower the risk. In a climate like that, investors want to make money via dividends, so companies need to maximize dividends, and to do so they need to maximize profits. Growth, market share and future plans become less relevant.
That’s what we are seeing right now.


Tech is a field where there’s always infinite work to do, limited only by the budget.
We had very low interest rates for over a decade, which made investments more profitable, so there was always a ton of money to go around. The current financial downturn is the main reason for all the tech layoffs: with no budget, there are no jobs.
The upside of that: even with all the talk of AI and stuff, once interest rates go down and investments go up, all the jobs will be back.


What bothers me most is that they equate a model with reality.
Quantum gravity theory is our current working model that we use to describe our observations. It’s not reality itself, and no scientist worth their salt would claim that it is, because if it were, physics would be solved, and it isn’t.
That’s how science works: we have observations, we build models to describe them, then we make more observations that don’t fit the old models, so we build newer models that also describe the new observations. Since we aren’t omniscient, there’s always something we can’t observe (yet), and what we can’t observe we also can’t describe.
“Therefore, no physically complete and consistent theory of everything can be derived from computation alone.”
This, in fact, would fit quite well with an imperfect simulation that doesn’t perfectly follow all the rules we made up while observing.


But that wouldn’t make for a catchy headline, would it?


The question is what she actually consented to (as in, what did she expect that checkbox to do)?
“Cameo” doesn’t exactly evoke “allow people to create fetish porn with my face”.
If the button had been labelled with that or some other clearer text, I don’t think there would have been a need for this article.
And that’s pretty much the point of this article: “Beware of corporate doublespeak; this harmless word here means ‘allow fetish porn with your face’”. That kind of warning article is not only important but pretty much essential in today’s world, where “autopilot” doesn’t mean that the car is fully self-driving, and where even “full self-driving” doesn’t mean “fully self-driving”.
And the only indication that words don’t mean what they mean is a terms of service hundreds of pages long, full of legal jargon that most people can’t understand but that legally protects the corporation.
As Marc-Uwe Kling said: “Die Welt ist voll von Arschlöchern. Rechtlich abgesicherten Arschlöchern.”
“The world is full of assholes. Legally protected assholes.”


If someone expects the content moderation and other safeguards you have in large parts of the internet, it might come as a surprise that a large platform allows fetish porn to be made with “cameos”.
Tbh, the word itself is super vague and doesn’t reflect what it means in this context.


In a situation where someone doesn’t understand the implications and a corporation can make money off their misfortune. That pretty much describes most of social media.


That’s not really language-dependent but region/culture-dependent.
Nigeria and the UK both have English as an official language, yet their viewpoints on history are quite different too.
A good Wikipedia article would include all relevant viewpoints that are based in reality, and probably even some that are not, though with a disclaimer.


I think you are a step further down in the a/b problem tree.
The purpose of society is that everyone can have a safe, stable and good life. In our current setup this requires that most people are employed. But that’s not a given.
Think of a hypothetical society where AI/robots do all the work. There would be no need to employ everyone to do work to support unemployed people.
We are slowly moving in that direction, but the problem is that our capitalist society isn’t fit for that setup. In our capitalist setup, removing the need for work means making people unemployed, who then “need to be supported”, while the rich who own the robots/AI benefit without putting in any work at all.


I agree with the sentiment, as bad as it feels to agree with Altman about anything.
I work as a software developer on the backend of the website/loyalty app of some large retailer.
My job is entirely useless. I mean, I’m doing a decent job keeping the show running, but (a) management shifts priorities all the time and about 2/3 of all the “super urgent” things I work on get cancelled before they get released, and (b) if our whole department instantly disappeared and the app and website were just gone, nobody would care. Like, literally. We have an app and a website because everyone has to have one, not because there’s a real benefit to anyone.
The same is true for most of the jobs I’ve worked in, and for most jobs in large corporations.
So if AI could somehow replace all these jobs (which it can’t), nothing of value would be lost, apart from the fact that our society requires everyone to have a job, bullshit or not. And these bullshit jobs even tend to be the better-paid ones.
So AI doing the bullshit jobs isn’t the problem, but people having to do bullshit jobs to get paid is.
If we all got a really good universal basic income or something, I don’t think most people would mind that they don’t have to go warm a seat in an office anymore. But since we don’t, and likely won’t anytime soon, losing a job is a real problem, which makes Altman’s comment extremely insensitive.


Sure, clanker.


Try it out and you’ll see. Amazon seems to be doing great with it.


That’s kinda what Wikipedia does. They have a quite elaborate review process before stuff goes live: https://en.wikipedia.org/wiki/Wikipedia:Reviewing
In the English Wikipedia, that process works quite well. But in, e.g., the Welsh Wikipedia or other tiny-language editions, there might only be a handful of reviewers in total. There’s no way such a small group of people could be knowledgeable in all subjects.
Welsh Wikipedia, for example, has fewer than 200 active users in total, and there are dozens of small-language or dialect Wikipedias that have <30 active users.
https://en.wikipedia.org/wiki/List_of_Wikipedias
I don’t think there’s an actual solution for this issue until AI translations become so good that there’s no need for language-specific content any more. If that ever happens.


Hmm, the law begins with “Given enough eyeballs”. So it’s explicitly not about small-language Wikipedia sites having too few editors.
It also doesn’t talk about finding consensus. “All bugs are shallow” means that someone can see the solution. In software development, that’s most often quite easy, especially when it comes to bugfixes. It’s rarely difficult to verify whether the solution to a bug works or not. So in most cases if someone finds a solution and it works, that’s good enough for everyone.
In cultural fields, that’s decidedly not the case.
For most of society’s problems, there are hardly any new solutions. We have had the same basic problems for centuries, and pretty much “all” the solutions were proposed decades or centuries ago.
How to make government fair? How to get rid of crime? How to make a good society?
These things have literally been issues since the first humans learned to speak.
That’s why Linus’ law doesn’t really apply here. We all want different things and there’s no fix that satisfies all requirements or preferences.


Since when do we post Amazon’s investor marketing messages unfiltered here?


DevOps is not executing the automation but designing it. DevOps is not manually spinning up pods but writing the automation that does so.
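To illustrate that distinction, here’s a minimal sketch assuming a Kubernetes setup (the service and image names are purely hypothetical): the DevOps work is writing a declarative manifest like this, not running the pods by hand.

```yaml
# Hypothetical Deployment manifest: you declare the desired state once,
# and the cluster's controllers create and maintain the pods on their own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loyalty-backend        # hypothetical service name
spec:
  replicas: 3                  # controller keeps 3 pods running at all times
  selector:
    matchLabels:
      app: loyalty-backend
  template:
    metadata:
      labels:
        app: loyalty-backend
    spec:
      containers:
        - name: api
          image: registry.example.com/loyalty-backend:1.0  # hypothetical image
```

Once something like this is applied, a controller recreates any pod that dies; nobody spins pods up manually.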
Mining hardware is short-lived. These things get outdated really fast and need to be replaced frequently. So when a mining rig is up for replacement, they just swap it out for an AI rig.
The real asset for mining is the infrastructure: rack space, access to cheap electricity, data centers. All of that is very useful for AI as well.