1980: TVs will fry your brain
1990: Videogames will fry your brain
2000: Computers will fry your brain
2010: Smartphones will fry your brain
2020: AI will fry your brain
Any takes for the 2030s?
Climate change.
Literally.
2030: Cyborg w/AI will fry your brain. Literally though.
Neural implants? Only this time they’re really going to fry your brain.
I mean, based on our current dystopian reality, I feel you just made a really good point about tech growing to a point where it fully captures you away from reality, and indeed fries your brain by convincing you that fantasies are real.
MAGA is a great example of people with brains so fried they think a pedophile ex-con man with 34 felonies, who killed over a million Americans through a poor pandemic response, is somehow helping them by destroying USAID, DEI, healthcare, and Social Security.
Their brains are gonzo, all through the constant applied exploitation of all the tech you just mentioned combined.
AI will absolutely make it worse.
Well looking around at where we are today, maybe TVs did fry our brains.
And before that books and comics. But LLMs are different: they pretend to be your friend but actually just encourage whatever you come up with. You can easily fry people’s brains by being their sycophant, now everyone can subscribe to one.
2030: Critical thought will fry your brain
The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.
I’m required to use it for work, and it does speed up some tasks. For some stuff, though, it ends up like the experiment where skipping the work the first time means the whole process takes longer in the end.
To add to this, we already know that context switching causes a loss in performance.
A person who’s thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.
The Neuroscience Behind the Pain
Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:
- Memory consolidation: Storing your current mental model
- Attention disengagement: Breaking focus from the current task
- Cognitive reloading: Building a new mental model for the next task
- Re-engagement: Getting back into flow
Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.
Here’s another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/
What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a user who has spent the last 12 questions performing the task the same way.
This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.
The way to ensure that the problem is AI and not the context switch would be to continue the test and see if the first group reverts to baseline after 12 questions. Twelve questions is how long the control group had to acclimate to the task before their last context swap at the start of the test.
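To make that concrete, here's a rough sketch of the comparison I mean (Python, with entirely made-up placeholder numbers; the paper publishes no code or data that I know of): track the post-switch group over the following questions and test whether its late-block scores climb back to the control group's level.

```python
# Hypothetical analysis sketch: does the group that lost AI access
# revert to baseline performance as it re-acclimates?
# All group sizes, score scales, and the acclimation trend are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data: rows = participants, cols = the 12 questions after the switch.
control = rng.normal(0.75, 0.10, size=(30, 12))   # did the task the same way throughout
switched = rng.normal(0.65, 0.10, size=(30, 12))  # had AI, then lost it at the switch
switched = switched + np.linspace(0.0, 0.10, 12)  # simulated re-acclimation trend

# Compare the groups on the final questions, once the switched group has had
# as long to acclimate as the control group did before its last swap.
t, p = stats.ttest_ind(switched[:, -3:].mean(axis=1),
                       control[:, -3:].mean(axis=1))
print(f"late-block difference: t={t:.2f}, p={p:.3f}")
```

If that late-block difference vanishes, the deficit was the context switch, not AI.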
Also of note: this is a paper on arXiv; it has not been published, so it has not gone through a peer-review process, which would likely catch the failure to set up a proper control group.
Context switching isn’t just X — it’s Y.
Are we sure this was written by a human?
AI being released was basically an apocalypse for people who use em dashes.
Here’s the most-cited human-written (2001) paper on the topic of context-switching performance loss: https://www.apa.org/pubs/journals/releases/xhp274763.pdf
Thanks.
And I’m all for em dashes. After all, I started using them after reading enough books. It’s just that particular construct that strikes me as especially LLM-y.
AI was trained on human writing. If it produces a certain tone, then that’s probably a result of the material that was favoured in training it. That construction was common in human writing before it became common in AI too.
What makes it stick out is when AI uses it in contexts where humans normally wouldn’t, but this kind of assertion is common in scientific papers and articles. It would make sense to train an AI on scientific writing, since that tone sounds authoritative and like you have some idea of what you’re talking about.
So I don’t think this is an LLM-construct; it’s an instance of the original style that LLMs copy.
True, but in my experience most people use a comma, not an em dash.
I’d like to see a study on that; I see it mentioned so much it’s almost achieved meme status.
It could very well be a Baader–(👀)Meinhof phenomenon.
That medium post is 100% LLM output.
100%, shit test
i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language… seems unhelpful
I fucking hate this AI shit, but I’ll admit I end up using Gemini (knowing it’s wrong sometimes). It’s like how I’d use Google, just with more complex asks instead of simple search queries. I couldn’t imagine using it beyond that, other than a follow-up or two.
It’s just a chatbot that has access to info; who goes onto their cable company’s website and befriends the chatbot?
I have found Google search to be getting progressively worse, whereas I can type out a question to Gemini that will return better results than Google search. It’s annoying that Google search has gotten so bad, and DuckDuckGo will return you something interesting but not relevant. So Gemini is my Google search nowadays.
It may very well be intentional: drive people away from traditional search and into Gemini.
Oh, do you mean Claudia!? She’s awesome!
Found the Richard Dawkins :P
I’ve used GPT a couple of times when I was searching the web and forums for well over an hour and found nothing relevant enough to work. The issue got solved in 5-10 minutes.
They enshittified the search so now using the chatbot is more useful. The search just returns slop and even fake slop forums.
Pretty much. Can’t find useful info without having to put in a LOT of extra work that I wouldn’t have a decade ago.
Fuck though, I love being able to ask it for part numbers and info. It’s much less hassle to ask it than to use the shitty corpo parts catalogues’ search features, especially when there are weird naming schemes and no descriptions; clicking through 50 parts trying to find the right one sucks.
It’s more that SEO is so well understood at this point that you can whip up whatever AI-generated garbage you want ranked high on search engines in seconds. For now the AIs are just better at “wading” through the trash, since the data they’re trained on is somewhat curated. Once all they can train on is slop, you’d better hope you still have some encyclopedias and textbooks lying around.
I mean, I have been using DDG for years now. I just could not find the right answer for my specific issue on my specific Linux distro, and AI was sadly just faster.
Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the frying of the brains occurred?
I really do see the issue with AI. I see people around me outsource thinking to it too much. Like, literally: as if they are happy that a machine can make their life choices for them. This is extremely worrying. It’s about how people use it.
Thinking is hard, and people would prefer to feel instead. When you just have to vibe with an AI that thinks for you, people will absolutely use it and disempower themselves under the illusion of empowerment. They will infantilize themselves and end up being treated like the children they want to be.
I always thought recommendation algorithms would do it, but the progress stopped at some point. We’ve had apps recommending videos, music, feeds, news and so on for a long time, but it never evolved into recommended careers or recommended places to live. Not in the sense where some algorithm that tracks you all the time tells you what your next important life choice should be. I don’t know anyone who’s using AI like that yet, but I can see it happening in the future.
AI is like a dog looking at itself in a mirror.
Some dogs are smart, and understand that this is a tool and that it is there to help you see things better… Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight…
There are a ton of good use cases for AI, and none of them include coquettish sexbots or drawings of me as a Simpson or a Ghibli sketch.
How do you know the dogs which want to fuck and fight aren’t the smarter ones?
What if the other dogs don’t recognize the reflection as anything meaningful — not a tool, a reflection, …? In that case, at least the “dumb” dogs figured out that something’s up.
Edit: that’s anthropomorphizing, the idea that nonchalant reactions = understanding well enough not to care. There are many reasons any particular dog may not fight a mirror. In particular, they may just rely less on vision to determine whether something is alive or not. That would not indicate understanding, though; it would indicate the dog’s understandably passive approach to things which don’t seem to have any significance. Closer to a lack of awareness than an actual understanding of any kind.
I think the key point is that you’re not outsourcing critical thinking to LLMs, but instead using them as a tool to do grunt work you could’ve done yourself, just faster. This means constantly being critical of everything they do: asking questions, asking for links to credible sources, asking for info to help evaluate the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files especially helpful for making the LLM follow my desired practices without me constantly making it refactor; see the sketch below.
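As a rough illustration (the filename and every rule here are placeholders I made up, not any particular tool’s required format), such a file can be as simple as:

```markdown
<!-- AGENTS.md: project conventions for the coding agent (hypothetical example) -->
# Agent instructions

- Prefer small, pure functions; keep files under ~300 lines.
- All new code gets unit tests; run the test suite before declaring a task done.
- Never add a dependency without asking first.
- Match the existing error-handling style; no bare exceptions.
```

The point is that the rules live in the repo, so you review them once instead of re-litigating them in every prompt.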
Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.
Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we’ve drawn for every technology that has improved our lives to some degree: without it, we tend either to be incompetent, because losing access isn’t worth planning for, or to be demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?
It doesn’t necessarily remove our capacity to think (and the article falsely generalises to critical thinking), it shifts what kind of thinking we do.
If AI is as good as or better than I am at writing code, then I’ll switch my brain to doing only the orchestration and architecture rather than the code-writing part. And yes, if you remove AI, the switch will cause me to perform worse than I did before AI, but not permanently, only until I get used to it again.
If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind only on finding solutions rather than splitting it between detection and solution.
This is not new, not bad, and I’ll even go so far as to say it’s a great use of AI: humans evolved for specialization. The less varied our tasks are, the better we are at the subset we specialize in. That’s what has driven our rapid technological and societal advances over the past millennia.
But, AI has many issues and many detrimental applications as well, so don’t see this comment as a full endorsement of AI.
I don’t want it; all it does is negate years of learned experience and the ability to organically formulate ideas.

A study already came out showing that graduating high-school students can’t even read or write; they’re functionally illiterate.
Can’t you see this is the same kind of propaganda your grandparents were spreading about computers, just aimed at AI now?
Besides, why are colleges passing illiterate students? That’s the actual problem.
There’s a tiny difference between then and now called scientific evidence. These are actual scientific studies saying that using AI results in lower cognitive abilities.
But which 10 minutes?
One sec, maybe ChatGPT knows….
Studies show that using a bulldozer to plow a field decreases the farmer’s muscle density after just one day of use.
Christ. What a load of shit.
Can confirm. Reddit is filled with abject brain-dead dumbasses. Since most content is AI-generated, it makes sense.
Not sure about the method: to me it shows people are more willing to give up when the computer appears to be broken.
I think the control group needs to experience a similar computer service failure, but maybe just swap out the AI for a basic calculator tool, or a PDF with formulas, or a cheat sheet or something 😅