

based af and morally acceptable use of AI.
“The future ain’t what it used to be.”
-Yogi Berra




Bruh I’m gonna grow so many teeth. I wonder if we’ll be able to get crocodile teeth.
Like can I get just one crocodile tooth that hangs over my lip?


Let’sa gooo!


So maybe that computer I just bought will be my last for a while then.


If the Dems aren’t running on abolishing ICE and completely dismantling the police state now, they’re setting up to throw the 2028 election.


Oh yeah?



People who have been holding off on building a new system due to part prices since 2020

I should do a Kickstarter for a wifi-connected butt plug that starts vibrating anytime a major AI provider’s system goes offline.


I’m trying to learn more about EU politics, and in the US, when something like this won’t die after being beaten down several times, there’s almost always some industry lobbying organization behind it.
And a problem we have globally is that there isn’t an organized counter-movement in the opposite direction (that privacy is a human right, that this isn’t a path to security, that states need to be restrained and restricted in their tendencies toward authoritarianism).
Without that counter-movement, it’s almost inevitable that something like this will pass, as the lobbying organization can long outlive the current generation of activists or politicians who see the problems with something like Chat Control.


Thank you.
But what groups are advocating for this? There is clearly a significant campaign behind this. It doesn’t seem at all grassroots.


It feels like this is just going to keep coming back, and that it’s being pushed by a centrally organized project. If privacy protections aren’t effectively enshrined in law, inevitably this kind of nanny state surveillance will happen.


Who is promoting this and why is it constantly coming back?


We’re gonna need to bring Bubba in to evaluate…


To shreds you say?


Even cheaper to just launch the billionaire funders as payload.


It’s also just fucking stupid.
You have two options:
One: build an insanely, comically expensive series of mirrors to redirect solar energy to solar panels so you can power them at night…
OR…
Two: install twice as many solar panels on the ground, or the same amount plus battery storage, or any combination thereof.


I mean, a 1% conversion rate for e-mail marketing isn’t particularly unique, whether the email is AI-written or otherwise.
I think roughly 1% is the rule of thumb for e-mail marketing click-through, and conversion is about 10% of that.
So per 1,000 emails, you might get 10 clicks and one person actually going to whatever you’re marketing.
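Those rule-of-thumb rates make for quick back-of-envelope math (the rates here are assumptions from the comment above, not measured benchmarks):

```python
# Back-of-envelope email marketing funnel. The ~1% click-through and
# ~10% click-to-conversion figures are rule-of-thumb assumptions,
# not measured data.
emails_sent = 1000
click_rate = 0.01             # ~1% of recipients click
conversion_of_clicks = 0.10   # ~10% of clickers convert

clicks = emails_sent * click_rate
conversions = clicks * conversion_of_clicks
print(clicks, conversions)  # 10.0 1.0
```

Scaling `emails_sent` up or down scales both numbers linearly, which is why these funnels are usually quoted per 1,000 sends.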


I don’t think anything I wrote has bearing on whether this is a bubble, or to what extent, other than that it’s clear what we have now isn’t what those working to get people to invest further in the bubble claim it to be.
Usually, when people are out there claiming it’s not a bubble, it’s just before it pops.


I mean, they aren’t a technological dead end, just as they aren’t a technological panacea.
You can absolutely use them as coding assistants. They can be used to fool people, sometimes quite effectively. There is definitely “something” going on under the hood, even if we don’t want to use words traditionally applied to the human experience like “learning” or “intelligence”. There is a surprising amount of consilience in current models, where you train to get good at task A but also get good at task B, for no obvious reason.
It’s clear to me that no amount of papier-mâché smeared over the half-glass-of-wine issue fixes it. There is something fundamental to the “gappiness” present in LLMs, in both their knowledge set and their appearance of logic. It’s becoming clear this is intrinsic to the architecture, and gluing in hot fixes isn’t going to change that. There is some very real underlying weirdness (the seahorse emoji). Context windows still only create the mirage of global state (though maybe with a large enough window this doesn’t matter, relative to a human perspective). It’s also clear that nothing about LLMs or transformers overcomes basic principles of entropy or information theory: you can’t just model noise like some kind of infinite training cheat code.
From where we were (LSTMs) to where we are, they are easily a 100x improvement. ML now is MUCH better than ML 10 years ago, and that has everything to do with transformers.
When LLMs came onto the scene, attention and transformers were not new. What was new was the approach to training them, some clever tricks to get them to generalize, and making them utterly massive. But “Attention Is All You Need” had been published quite a while before this generation, and I promise, if Google had seen the potential, they would not have released that research.
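For reference, the core operation from “Attention Is All You Need” (scaled dot-product attention) is small enough to sketch in a few lines of numpy. This is a toy, single-head version with random inputs; real transformers add learned Q/K/V projections, multiple heads, and masking on top of it:

```python
# Minimal sketch of scaled dot-product attention:
#   softmax(Q K^T / sqrt(d_k)) V
# Toy shapes, single head, no learned projections or masking.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    # Row-wise softmax (subtract the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 positions, d_k = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the rows of V, weighted by how strongly that query attends to each key, which is why the mechanism generalizes so flexibly across tasks.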
There will be stepwise and generational improvements to AI and ML. Even though transformers are what broke through to the mainstream, the progress is much more linear and continuous than it might at first appear. So we shouldn’t expect transformers to be the end state, nor should we expect the next major jump to come from them, or even necessarily from something novel. It may be that the tools for the next big jump are already here, just waiting to be applied in a clever way.
A haiku:
It’s not DNS
There’s no way it’s DNS
It was DNS