How hard can it be to have an AI take PRs from other AIs and clean out the worst, plus harden the PR protocols? It could even assist/guide AI contributors via a special AI-contributor forum or whatever. AI is currently highlighting a lot of ‘holes’ in systems where we expect a certain behavior. Just complaining and closing things off is a bad decision; we should accept these flaws in our systems and adapt them to a new world. The sooner the better.
The projects that get it right will soon have an army of managed AI contributors and a filtered, educational AI PR pipeline where project maintainers cherry-pick the crème de la crème…
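For the curious, here’s a minimal sketch of what such a triage gate might look like: fetch each open PR’s diff from the GitHub API, score it with whatever model you trust, and label it so maintainers can still cherry-pick. Everything project-specific here is made up for illustration: the repo name, the threshold, the label names, and score_diff itself, which is just a placeholder for your LLM call, not a real API.

```python
# Hypothetical sketch of an automated PR triage gate.
# Assumes a GITHUB_TOKEN env var and that score_diff() wraps whatever
# model you trust to review diffs (it is a stub here, not a real API).
import os
import requests

REPO = "owner/repo"   # placeholder repository
API = f"https://api.github.com/repos/{REPO}"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
THRESHOLD = 0.7       # arbitrary cut-off; tune per project

def score_diff(diff: str) -> float:
    """Placeholder for an LLM-based review: return a 0.0..1.0 quality score."""
    raise NotImplementedError("plug in your model call here")

def triage_open_prs() -> None:
    prs = requests.get(f"{API}/pulls?state=open", headers=HEADERS).json()
    for pr in prs:
        # GitHub returns the raw diff when asked for the diff media type.
        diff = requests.get(
            pr["url"],
            headers={**HEADERS, "Accept": "application/vnd.github.diff"},
        ).text
        score = score_diff(diff)
        # Label rather than auto-close: humans keep the final say.
        label = "ai-triage/pass" if score >= THRESHOLD else "ai-triage/needs-work"
        requests.post(
            f"{API}/issues/{pr['number']}/labels",
            headers=HEADERS,
            json={"labels": [label]},
        )

if __name__ == "__main__":
    triage_open_prs()
```

The point of the label-don’t-close design is exactly the cherry-picking above: the gate filters and ranks, but a maintainer still decides what merges.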
If one AI could clean up another AI’s mistakes, shouldn’t the first one be able to avoid making those mistakes in the first place?
AI isn’t AI. It’s predictive text. They will all fall into the same pitfalls as any other. “AI” can write code and can be very helpful for short functions, but it can’t, and never will be able to, work on whole systems, because by their nature those are made to fit human needs: illogical and sometimes very specific needs that can’t just be “predicted”.
This is a fantasy. A lot of the time, LLMs will just reproduce the same errors over and over.
We’ve got several months of evidence as to how hard.