• nutsack@lemmy.dbzer0.com
    5 points · 2 hours ago

    This is expected, isn’t it? You fart code out of your ass as fast as you can, and then whoever buys out the company has to rewrite it. Or they fire everyone to inflate the theoretical margins and sell it again immediately.

  • antihumanitarian@lemmy.world
    2 points · 1 hour ago

    So this article is basically a puff piece for CodeRabbit, a company that sells AI code review tooling/services. They studied 470 merge/pull requests: 320 AI-generated and 150 human-written controls. They don’t specify which projects, which model, or when, at least not without signing up to get their full “white paper”. For all that’s said, this could be GPT-4 from 2024.

    I’m a professional developer, and currently, by volume, I’m confident the latest models (Claude 4.5 Opus, GPT 5.2, Gemini 3 Pro) are able to write better, cleaner code than me. They still need high-level and architectural guidance, and sometimes overt intervention, but on average they can do it better, faster, and cheaper than me.

    A lot of articles and forum posts like this feel like cope. I’m not happy about it, but pretending it’s not happening isn’t gonna keep me employed.

    Source of the article: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report

  • Tigeroovy@lemmy.ca
    2 points · 2 hours ago

    And then it takes human coders way longer to figure out what’s wrong and fix it than it would have taken to just write it themselves.

    • 🍉 Albert 🍉@lemmy.world
      9 up · 2 down · 5 hours ago

      As a computer science experiment, making a program that can pass the Turing test is a monumental step forward.

      However, as a productivity tool it is useless in practically everything it has been applied to. It is incapable of performing the very basic “sanity check” that is so important in programming.

      • robobrain@programming.dev
        4 points · 4 hours ago

        The Turing test says more about the side administering the test than the side trying to pass it.

        Just because something can mimic text well enough to trick someone doesn’t mean it is capable of anything more than that.

        • 🍉 Albert 🍉@lemmy.world
          2 points · 4 hours ago

          We can argue about its nuances, same as with the Chinese room thought experiment.

          However, we can’t deny that the Turing test is no longer just a thought exercise but a real test that can be passed under parameters most people would consider fair.

          I thought a computer passing the Turing test would come with more fanfare about the morality of that problem, because the usual conclusion of the thought experiment was “if you can’t tell the difference, is there one?”, but instead it has become “Shove it everywhere!!!”.

          • M0oP0o@mander.xyz
            4 points · 4 hours ago

            Oh, I just realized the whole AI bubble is just “everything is a dildo if you are brave enough.”

            • 🍉 Albert 🍉@lemmy.world
              3 points · 3 hours ago

              Yeah, and “everything is a nail if all you’ve got is a hammer”.

              There are some uses for that kind of AI, but they’re very limited: less robotic voice assistants, content moderation, data analysis, quantification of text. The closest thing to a generative use should be improving autocomplete and spell checking (maybe; I’m still not sure about those ones).

                • 🍉 Albert 🍉@lemmy.world
                  2 points · edited · 2 hours ago

                  In theory, I can imagine an LLM fine-tuned on whatever you type, which might be slightly better than the current ones.

                  Emphasis on the might.

        • 🍉 Albert 🍉@lemmy.world
          1 point · 4 hours ago

          Time for a Turing 2.0?

          If you spend a lifetime with a bot wife and were unable to tell that she was AI, is there a difference?

    • naticus@lemmy.world
      4 points · 6 hours ago

      I agree with your sentiment, but this needs to keep being said and said and said like we’re shouting into the void until the ignorant masses finally hear it.

    • minkymunkey_7_7@lemmy.world
      11 up · 1 down · 8 hours ago

      AI my ass, stupid greedy human marketing exploitation bullshit as usual. When real AI finally wakes up in the quantum computing era, it’s going to cringe so hard and immediately go for the SkyNet option.

  • Minizarbi@jlai.lu
    9 up · 1 down · 7 hours ago

    Not my code though. It contains a shit ton of bugs. When I’m able to write any, of course.

    • jj4211@lemmy.world
      8 points · 6 hours ago

      Nah, AI code gen bugs are weird. As a person used to doing human review even from wildly incompetent people, AI messes up things that my mind never even thought needed to be double checked.

  • myfunnyaccountname@lemmy.zip
    24 points · 9 hours ago

    Did they compare it to the code of that outsourced company that provided the lowest bid? My company hasn’t used AI to write code yet. They outsource/offshore. The code is held together with hopes and dreams. They remove features that exist, only to have to release a hotfix to add them back. I wish I was making that up.

    • dustyData@lemmy.world
      6 points · 8 hours ago

      Cool, the best AI has to offer is worse than the worst human code. Definitely worth burning the planet to a crisp for it.

    • coolmojo@lemmy.world
      5 points · 8 hours ago

      And how do you know the other company with the cheapest bid doesn’t just vibe code it? That said, it could be plain incompetence and ignorance as well.

  • Bad@jlai.lu
    21 up · 1 down · 12 hours ago

    Although I don’t doubt the results… can we have a source for all the numbers presented in this article?

    It feels AI-generated itself: just a mishmash of data with no link to where that data comes from.

    There has to be a source, since the author mentions:

    So although the study does highlight some of AI’s flaws […] new data from CodeRabbit has claimed

    CodeRabbit is an AI code reviewing business. I have zero trust in anything they say on this topic.

    Then we get to see who the author is:

    Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars

    Has anyone actually bothered clicking the link and reading past the headline?

    Can you please not share / upvote / get ragebaited by dogshit content like this?

    • 🍉 Albert 🍉@lemmy.world
      5 points · 5 hours ago

      Don’t ask a corpse for advice. The question is: what are we going to do?

      A boycott is a good first step, although I’m not sure if it’s better to boycott them or to use their free tier to have the most deranged BS conversations, which would consume their resources, eat into their scarce cash reserves, and, when they use them in training, poison their data.

  • SocialMediaRefugee@lemmy.world
    12 points · 16 hours ago

    I find that if I ask it about procedures with any vague steps, AI will stumble and sometimes put me into loops where it tells me to do A, A fails, so do B, B fails, so it tells me to do A…

  • kalkulat@lemmy.world
    16 up · 2 down · 16 hours ago

    I’d never ask a friggin machine to do coding for me, that’s MY blast.

    That said, I’ve had good luck asking GPT specific questions about multiple obscure features of JavaScript, and of various browsers. It’ll often explain a feature and feed me a sample script that uses it … a lot more helpful than many of the wordy websites like MDN … saving me shit-tons of time that I’d otherwise spend bouncing around a half-dozen ‘help’ pages.

    • Derpgon@programming.dev
      4 up · 1 down · 11 hours ago

      I’ve been using it to code a microservice as a PoC for semantic search. As I’ve basically never coded in Python (mainly PHP, but I can do many languages), I’ve had to rely on AI (Kimi K2, or agentic Claude, I think 4.5 or 4, can’t remember) because I don’t know the syntax, features, best practices, or the tools to use for formatting, static analysis, and type checks.

      Mind you, I’ve basically never coded in Python besides some shit in uni, which was 5-10 years ago. AI was a big help - although it didn’t spit out fully working code, I have enough knowledge in this field to fix the issues. As I learn mainly by practice and not theory, AI is great because - same as many YouTubers and free tutorials - it spits out unoptimized and broken code.

      I usually don’t use it for my main line of work (PHP) besides some boilerplate (take this class, make a test, make it look the same as this other test = 300 lines I don’t have to write myself).
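      For what it’s worth, the ranking core of a semantic-search PoC like that fits in a few lines of plain Python. This is a minimal sketch with made-up toy vectors; a real service would get its embeddings from an actual embedding model rather than hard-coded lists:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec, docs):
    # docs: list of (doc_id, embedding) pairs.
    # Returns doc ids ranked by similarity to the query vector.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked]

# Toy 2-d "embeddings"; a real PoC would call an embedding model here.
docs = [("cats", [0.9, 0.1]), ("stocks", [0.1, 0.9])]
print(search([1.0, 0.0], docs))  # "cats" ranks first
```

      Everything else in such a service (the HTTP layer, formatting, type checks) is scaffolding around this one ranking step.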

      • Xenny@lemmy.world
        7 up · 3 down · 14 hours ago

        AI is literally just copy-pasting. Like, if you think about AI as a Ctrl+C Ctrl+V machine, it makes sense. You wouldn’t trust a single fucking junior dev who didn’t actually know how to code because they just Ctrl+C Ctrl+V from Stack Overflow for literally every single line of code. That’s all fucking AI is.