• neclimdul@lemmy.world · 7 days ago

    Explain this to me, AI. Reads back exactly what’s on the screen, including the comments, somehow with more words but less information. Ok…

    Ok, this is tricky. AI, can you do this refactoring so I don’t have to keep track of everything? No… That’s all wrong… Yeah, I know it’s complicated, that’s why I wanted it refactored. No, you can’t do that… Fuck, now I can either toss all your changes and do it myself, or spend the next 3 hours rewriting it.

    Yeah, I struggle to see how anyone finds this garbage useful.

    • Damaskox@lemmy.world · 7 days ago

      I have asked questions, had conversations for company and generated images for role playing with AI.

      I’ve been happy with it, so far.

        • neclimdul@lemmy.world · 6 days ago

        That’s kind of outside the software development discussion but glad you’re enjoying it.

          • AA5B@lemmy.world · 6 days ago

          As a developer

          • I can jot down a bunch of notes and have AI turn them into a reasonable presentation, documentation, or proposal
          • Zoom has an AI agent that’s pretty good at summarizing a meeting. It usually just needs minor corrections, and you can send it out much faster than taking notes
          • for coding I mostly use AI like autocomplete. Sometimes it’s able to autocomplete entire code blocks
          • for something new I might have AI generate a class or something, and use it as a first draft that I then make work
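To make the autocomplete point concrete, here’s a minimal sketch of the kind of block-level completion that tends to work: you write the signature and docstring, and the tool proposes the body. The function name and behavior are invented for illustration:

```python
def chunk_list(items, size):
    """Split items into consecutive chunks of at most `size` elements."""
    # A common, well-represented pattern, which is exactly the kind of
    # body an AI autocomplete will often fill in correctly on its own.
    return [items[i:i + size] for i in range(0, len(items), size)]
```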
  • (des)mosthenes@lemmy.world · 7 days ago

    no shit. AI will hallucinate shit, I’ll hit tab by accident and spend time undoing that, or it’ll hijack tab on new lines inconsistently

    • Repple (she/her)@lemmy.world · 5 days ago

      I have to use it for work by mandate, and overall I hate it. Sometimes it can speed up certain aspects of development, especially if the domain is new or the project is small, but these gains are temporary. They steal time from the learning I would otherwise be doing during development and push it to later in the process, and they are nowhere near good enough that I never have to do the learning at all.

  • kescusay@lemmy.world · edited · 7 days ago

    Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.

    And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.

    • sugar_in_your_tea@sh.itjust.works · 6 days ago

      Same. I also like it for basic research and helping with syntax for obscure SQL queries, but coding hasn’t worked very well. One of my less technical coworkers tried to vibe code something and it didn’t work well. Maybe it would do okay on something routine, but generally speaking it would probably be better to use a library for that anyway.
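As a sketch of the “obscure SQL syntax” use case: window functions are the sort of thing an LLM is handy for recalling. The table and query here are hypothetical, run through Python’s built-in sqlite3:

```python
import sqlite3

# Hypothetical example: rank employees by salary within each department.
# The RANK() OVER (...) syntax is easy to forget if you rarely write it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (name TEXT, dept TEXT, salary INT);
    INSERT INTO emp VALUES
        ('a', 'eng', 100), ('b', 'eng', 120), ('c', 'ops', 90);
""")
rows = conn.execute("""
    SELECT name, dept,
           RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS r
    FROM emp
""").fetchall()
```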

        • kescusay@lemmy.world · 6 days ago

        I actively hate the term “vibe coding.” The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire, production-ready application just by prompts is a huge waste of time and is guaranteed to produce garbage code.

        At some point, people like your coworker are going to have to look at the code and work on it, and if they don’t know what they’re doing, they’ll fail.

        I commend them for giving it a shot, but I also commend them for recognizing it wasn’t working.

          • sugar_in_your_tea@sh.itjust.works · 5 days ago

          I think the term pretty accurately describes what is going on: they don’t know how to code, but they do know what correct output for a given input looks like, so they iterate with the LLM until they get what they want. The coding here is based on vibes (does the output feel correct?) instead of logic.

          I don’t think there’s any problem with the term, the problem is with what’s going on.

    • billwashere@lemmy.world · 7 days ago

      Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10% takes a while, but that was going to take a while regardless, so the time savings on that first 90% are awesome. It does send me down a really bad path at times, though; being experienced enough to recognize that is very helpful, in that I just start over.

      In my opinion AI shouldn’t replace coders, but it can definitely enhance them if used properly. It’s a tool like anything else. I can put a screw in with a hammer, but I probably shouldn’t.

        • kescusay@lemmy.world · 7 days ago

        Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental rearchitecting of how these models work.

        The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

        On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

        Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

        It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.

    • MangoCats@feddit.it · 7 days ago

      I have limited AI experience, but so far that’s what it means to me as well: helpful in very limited circumstances.

      Mostly, I find it useful for “speaking new languages” - if I try to use AI to “help” with the stuff I have been doing daily for the past 20 years? Yeah, it’s just slowing me down.

      • Balder@lemmy.world · edited · 3 days ago

        I like the saying that LLMs are “good” at stuff you don’t know. That’s about it.

        When you know the subject, it stops being very useful, because you’ll already know the obvious stuff the LLM could help you with.

        • Zetta@mander.xyz · 6 days ago

          FreedomAdvocate is right. IMO the best use case for AI is things you have some understanding of but need assistance with. You need to understand enough to catch at least the impactful errors from the LLM.

        • FreedomAdvocate@lemmy.net.au · 7 days ago

          They’re also bad at that though, because if you don’t know that stuff then you don’t know if what it’s telling you is right or wrong.

          • fafferlicious@lemmy.world · 5 days ago

            I…think that’s their point. The only reason it seems good is because you’re bad too, and can’t spot that it’s bad.

    • Alex@lemmy.ml · 7 days ago

      Sometimes I get an LLM to review a patch series before I send it, as a quick once-over. I would estimate about 50% of the suggestions are useful and about 10% are based on “misunderstanding.” Last week it was suggesting a spelling fix I’d already made, because it didn’t understand that the - in the diff meant I’d already changed the line.

    • lIlIlIlIlIlIl@lemmy.world · 7 days ago

      Exactly what you would expect from a junior engineer.

      Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

      Something something craftsmen don’t blame their tools

      • 5too@lemmy.world · 7 days ago

        The difference being junior engineers eventually grow up into senior engineers.

      • corsicanguppy@lemmy.ca · 7 days ago

        Exactly what you would expect from a junior engineer.

        Except junior engineers become seniors. If you don’t understand this … are you HR?

      • Feyd@programming.dev · 7 days ago

        AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.

        • MangoCats@feddit.it · 7 days ago

          AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really “graduated” to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers “find their niche” doing something other than engineering with their engineering job titles, and that’s great, but don’t ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

          Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

          Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn’t seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

          The question I have is: will AI continue to write “human compatible” software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

          • Feyd@programming.dev · 7 days ago

            Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

            LOL sure

        • errer@lemmy.world · 7 days ago

          Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

          • Feyd@programming.dev · 7 days ago

            The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.

    • AA5B@lemmy.world · 6 days ago

      For some of us that’s more useful. I’m currently playing a DevSecOps role, and one of the defining characteristics is that I need to know all the tools. On Friday I was writing some Java modules, then some Groovy glue, then spent the afternoon writing a Python utility. While I’m reasonably good at jumping among languages and tools, those context switches are expensive. I definitely want AI help with that.

      That being said, AI is just a step up from search or autocomplete; it’s not magical. I’ve had the most luck with it generating unit tests, since they tend to be simple and repetitive (also a major place for juniors to screw up). AI doesn’t know whether the slop it’s pumping out is useful. You need to guide it and understand it, and you really need to cull the dreck.
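The unit-test case makes sense because the shape is so repetitive. A hypothetical sketch (the `slugify` helper and its case table are invented): an AI can churn out case tables like this quickly, but every expected value still needs a human eye.

```python
import re

def slugify(text):
    """Lowercase, then collapse runs of non-alphanumerics into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Repetitive case tables are what AI generates fairly reliably; culling
# the dreck means checking each expected value yourself.
CASES = [
    ("Hello World", "hello-world"),
    ("  spaced  out  ", "spaced-out"),
    ("Already-Slug", "already-slug"),
]

def test_slugify():
    for raw, expected in CASES:
        assert slugify(raw) == expected
```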

  • FancyPantsFIRE@lemmy.world · 7 days ago

    I’ve used cursor quite a bit recently in large part because it’s an organization wide push at my employer, so I’ve taken the opportunity to experiment.

    My best analogy is that it’s like micromanaging a hyper-productive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, or a big-picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.

    I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to dedicate to properly ingest and do something entirely new to me. I save a significant amount of the onboarding, but lose a bunch of time navigating to a reasonable solution. Critically that navigation is more “interrupt” tolerant, and I get a lot of interrupts.

    That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

    • AA5B@lemmy.world · 6 days ago

      I had to sort over 100 lines of data hardcoded into source (don’t ask), and it was a quick function in my IDE.

      I feel like “sort” is common enough everywhere that AI should quickly identify the right Google results, and it shouldn’t take 3 minutes.
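For scale: the sort itself is a one-liner in most languages, which is why a multi-minute AI round-trip feels slow by comparison. A hypothetical sketch of what the IDE function amounts to:

```python
# Hypothetical data standing in for lines hardcoded into source.
lines = [
    "banana,3",
    "apple,5",
    "cherry,1",
]
# Sorting them is a single call; stable, lexicographic by default.
sorted_lines = sorted(lines)
```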

    • bassomitron@lemmy.world · 7 days ago

      By having it write a quick function to do so or to sort them alphabetically within the chat? Because I’ve used GPT to write boilerplate and/or basic functions for random tasks like this numerous times without issue. But expecting it to sort a block of text for you is not what LLMs are really built for.

      That being said, I agree that expecting AI to write complex and/or long-form code is a fool’s hope. It’s good for basic tasks to save time and that’s about it.

      • BrianTheeBiscuiteer@lemmy.world · 7 days ago

        The tool I use can rewrite code given basic commands. Other times I might say, “Write a comment above each line” or “Propose better names for these variables,” and it does a decent job.