It’s ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low-power training/inference optimizations, fundamental changes in the math like BitNet, new attention mechanisms, cool tools to make models more controllable, steerable, and grounded. This is all getting funded, right?
No.
Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's gone full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.
DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what the companies claim they are. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars: things that make a product seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except the price.
I remember listening to a podcast about scientific explanations. The guy hosting it is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn't an expert in himself.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite being new hardware, the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper showing that language models could be parallelized, leading to the creation of "larger language models". That was Google doing science. But you can't control when a new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon them and have insights you didn't even think about, and so on.
There have been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.
Meanwhile a huge chunk of the software industry is now heavily using this “dead end” technology 👀
I work in a pretty massive tech company (think the type that frequently acquires smaller ones and absorbs them).
Everyone I know here is using it. A lot.
However, my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, how to get good value out of it, and what pitfalls it can have.
So yeah turns out if you teach your employees how to use a tool, they start using it.
I’d say LLMs have made me about 3x as efficient or so at my job.
It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are too expensive. There are two ways for this to go (or eventually a combination of both):
- Investors believe LLMs are going to get better and keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.
- Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.
You skipped possibility 3, which is actively happening:
Advancements in tech enable us to produce results at a much, much cheaper cost.
This is happening with diffusion-style LLMs, which simultaneously cost less to train and cost less to run, while also producing faster and better-quality outputs.
That’s a big thing people forget about AI: it becomes a feedback loop of improvement as soon as you can start using AI to develop AI.
And we are past that mark now, most developers have easy access to AI as a tool to improve their performance, and AI is made by… software developers
So you get this loop where as we make better and better AIs, we get better and better at making AIs with the AIs…
It’s incredibly likely the new diffusion AI systems were built with AI assisting in the process, enabling them to make a whole new tech innovation much faster and easier.
We are now in the uptick of the singularity, and have been for about a year now.
Same goes for hardware: it’s very likely that Nvidia now has AI incorporated into its production process, using it for micro-optimizations in its architectures and designs.
And then those same optimized gpus turn around and get used to train and run even better AIs…
In 5-10 years we will look back on 2024 as the start of a very wild ride.
Remember we are just now in the “computers that take up entire warehouses” step of the tech.
Remember that in the 80s, a “computer” cost a fortune, took tonnes of resources, multiple people to run it, took up an entire room, was slow as hell, and could only do basic stuff.
But now, 40 years later, they fit in our pockets and are (non-hyperbole) billions of times faster.
I think by 2035 we will look at AI as something mass-produced for consumers to just put in their homes: you go to Best Buy and compare different AI boxes to pick which one you’re going to get for your house.
We are still at the stage of people in the 80s looking at computers and pondering, “Why would someone even need to use this? Why would someone put one in their house, let alone their pocket?”
I remember having this optimism around tech in my late twenties.
I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”
All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features into every existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain, but for now the level of hype is tremendous!
Good, fire 2 devs out of 3.
Companies that do this will fail.
Successful companies respond to this by hiring more developers.
Consider the taxi cab driver:
With the invention of the automobile, cab drivers could do their job way faster and way cheaper.
Did companies fire drivers in response? God no. They hired more.
Why?
Because rides became more affordable, less wealthy clients could now afford the service, which means demand went way, way up.
If you can do your work for half the cost, demand usually goes up by way more than 2x, because as you go down the wealth levels of your target demographic, your pool of clients grows exponentially.
If I go from “it costs me 100k to make you a website” to “it costs me 50k to make you a website”, my pool of possible clients more than doubles (rough sketch of this at the end of the comment).
Which means… you need to hire more devs asap to start matching this newfound level of demand
If you fire devs when your demand is about to skyrocket, you fucked up bad lol
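As a rough illustration of that “pool more than doubles” claim, here is a minimal Python sketch. Everything in it is an assumption made for the example, not something from the thread: client budgets drawn from a Pareto (“long tail”) distribution with tail index 1.5, a hypothetical minimum budget, and made-up prices.

```python
# Hypothetical illustration: how halving a price can more than double the
# number of clients who can afford it, IF budgets follow a long-tail
# (Pareto) distribution. All numbers below are made up.
import random

random.seed(42)

ALPHA = 1.5          # assumed Pareto tail index; closer to 1 = heavier tail
BUDGET_MIN = 10_000  # assumed minimum project budget in the population
POPULATION = 1_000_000

# Draw a synthetic population of client budgets.
budgets = [random.paretovariate(ALPHA) * BUDGET_MIN for _ in range(POPULATION)]

def affordable_clients(price):
    """Count clients whose budget covers the quoted price."""
    return sum(1 for b in budgets if b >= price)

for price in (100_000, 50_000):
    n = affordable_clients(price)
    print(f"price ${price:>7,}: {n:>7,} clients can afford it")
```

With a tail index of 1.5, halving the price grows the affordable pool by roughly 2^1.5 ≈ 2.8x, i.e. “more than doubles”; a heavier tail grows it even more. Whether real demand for dev work behaves like this is, of course, the open question.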
I think the human in the loop currently needs to know what the LLM produced or checked, but they’ll get better.
For sure, much like how a cab driver has to know how to drive a cab.
AI is absolutely a “garbage in, garbage out” tool. Just having it doesn’t automatically make you good at your job.
The difference between someone who can wield it well and someone who has no idea what they’re doing is palpable.
The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they have governments in their pockets. Governments are dependent on cloud/Microsoft software, literally every country on this planet, except maybe China, North Korea, and Russia. They can raise prices 10 times over the next 10 years and not give a fuck, spend 1 trillion on AI and say “we’re nearly there” over and over again, and literally nobody can stop them right now.
Good, let them go broke in the pursuit of a dead end.
Good let them waste all their money
Pump and dump. That’s how the rich get richer.
The funny thing is, with so much money you could probably do lots of great stuff with AI as it exists today. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
Why won’t they pour billions into me? I’d actually put it to good use.
I’d be happy with a couple hundos.
I’d be happy with a big tiddy goth girl. Jealous of your username btw.
LLMs are good for learning, brainstorming, and mundane writing tasks.
This is slightly misleading. Even if you can’t achieve “AGI” (a barely defined term anyway), it doesn’t mean AI is a dead end.
Say it isn’t so…
It’s not a dead end if you replace all the big-name search engines with this, then slowly replace real results with your own. Then it accomplishes something.
Worst case scenario, I don’t think money spent on supercomputers is the worst way to spend money. That in itself has pushed chip design and development forward. Not to mention AI is already invaluable in a lot of science research. Invaluable!
I’m a software developer and I know that AI is just the shiny new toy from which everyone uses the buzzword to generate investment revenue.
99% of the crap people use it for is worthless. It’s just a hammer, and everything is a nail.
It’s just like “the cloud” was 10 years ago. Now everyone is back-pedaling from that because it didn’t turn out to be the panacea that was promised.