• Echo Dot@feddit.uk · 6 points · 1 day ago

    I guess I have no problem in theory with AI agents taking a look at my code; it’s just that I would want it to be opt-in only. I don’t want to have to deal with them on legacy projects that I’m not working on, or anything mission-critical (if there is a bug in the code but the application overall still works, I’d rather not have an AI dick around with it until I have time to properly go through it).

    To be clear, I would have the same objections to a human doing the same things. It’s just that most humans don’t randomly submit pull requests on otherwise inactive repos.

  • peopleproblems@lemmy.world · 74 points · 2 days ago

    This feels like an attempt to destroy open source projects. Overwhelm developers with crap PRs so they can’t fix real issues.

    It won’t work long term, because I can’t imagine anyone staying on GitHub after it gets bad.

    • 6nk06@sh.itjust.works · 4 points · 1 day ago

      destroy open source projects

      I do believe that too. The AIs are stealing all the code and stripping the licenses, and the OSI recently classified “binary blobs” as open source. LLM companies need fresh content and will try anything to steal it.

  • qaz@lemmy.world · 31 points · 2 days ago (edited)

    Recently some issues were opened on a repo of mine; they confused me at first, until I realized they were written by an LLM. Really annoying.

    • Echo Dot@feddit.uk · 6 points · 1 day ago

      You’d have to be an idiot to merge anything from an AI without going through it line by line. That really is the problem with AI: it’s mostly fine if you keep an eye on it, but the fact that you have to keep an eye on it kind of renders the whole thing pointless.

      It’s like self-driving cars: if I have to keep an eye on it to make sure it won’t randomly crash into a tree, I might as well drive the damn thing myself.

    • scarabic@lemmy.world · 2 points · 1 day ago

      It does seem like AI will be way more useful for finding security holes than preventing them.

    • adr1an@programming.dev · 2 points · 2 days ago

      The Age of Forks

      Maybe it won’t all be doom and gloom… I would like to see a silver lining: some FLOSS developers are smarter than trolls.

    • mesa@piefed.social · 12 points · 2 days ago

      That’s the one I remember. Some of the comments in the GH PRs and issues themselves are so funny.

      On a side note, I’m seeing more and more people transfer over to Codeberg and other such Git alternatives. I’ve used GH for over 15 years and it’s interesting to see the shift occurring. Then again, to those of us that have been online for a while, it feels like the natural order of things. GH is trying to get as much $$ as it can (which includes AI) and its new features are becoming tied to monetary components. Meanwhile the community is making things its users actually want.

    • taladar@sh.itjust.works · 8 points · 2 days ago

      Creating issues is free for a large number of people you don’t really control. Whether that is the general public or some customers who have access to your issue tracker and love AI doesn’t really matter; if anything, dealing with the public is easier, since you can just ban members of the public who misbehave.

  • MagicShel@lemmy.zip · +34 / −3 · 2 days ago

    The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, the AI is the same as any other: it offers an opinion and you can judge for yourself whether its points need to be addressed or not. I’ll be interested to see whether its comments affect the comments of the tech lead.

    I’ve seen a preview of a system that detects problems like failing Sonar analysis and can offer a PR to fix them. I suppose for simple enough fixes, like removing unused imports or unused code, it might be fine. It gets static analysis and review like any other PR, so it’s not going to be merging any defects without getting past a human reviewer.
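
    To make “simple enough fixes” concrete: the lowest-hanging fruit is something a plain static check can find without any LLM at all. Here’s a rough, hypothetical sketch (Python stdlib only, not our internal tooling) of an unused-import check of the kind such a system would turn into a trivial PR:

        import ast
        import sys

        def find_unused_imports(source: str) -> list[str]:
            """Report imported names that are never referenced in the module."""
            tree = ast.parse(source)
            imported = {}   # bound name -> line where it was imported
            used = set()    # every bare name referenced anywhere
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    for alias in node.names:
                        bound = alias.asname or alias.name.split(".")[0]
                        imported[bound] = node.lineno
                elif isinstance(node, ast.ImportFrom):
                    for alias in node.names:
                        imported[alias.asname or alias.name] = node.lineno
                elif isinstance(node, ast.Name):
                    used.add(node.id)
            # Deliberately naive: ignores __all__, star imports, re-exports.
            return [f"line {line}: '{name}' imported but unused"
                    for name, line in sorted(imported.items(), key=lambda kv: kv[1])
                    if name not in used]

        if __name__ == "__main__":
            print("\n".join(find_unused_imports(open(sys.argv[1]).read())))

    Anything beyond that class of fix is where the human review step starts to matter.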

    I don’t know how good any of this shit actually is. I tested the AI review once and it didn’t have a lot to say because it was a really simple PR. It’s a tool. When it does well, fine. When it doesn’t, it probably won’t take any more effort than any other bad input.

    I’m sure you can always find horrific examples, but the question is how common they are, and whether any introduced bugs are subtle enough to get past the developer and a human reviewer. Might depend more on time pressure than anything, like always.

    • cley_faye@lemmy.world · 4 points · 1 day ago

      I see some problems here.

      An LLM providing “an opinion” is not a thing, as far as current tech goes. It’s just statistically right or wrong, and puts that into words, which does not fit nicely with real use cases. Also, lots of tools already have autofix features that can (on demand) handle many of the minor issues you mention, without any LLM. Assuming static analysis is already in place and decent tooling is used, this would not have to reach a human or an AI agent or anything else before getting fixed, with minimal resources.

      As anecdotal evidence: we regularly look into those tools on the job. Granted, we don’t have billions of lines of code to check, but so far they have been useless at best. Another piece of anecdotal evidence is the recent outburst from the curl project (and others following suit) over getting a mountain of bogus issues.

      I have no doubt that there is a place for human-sounding review and advice, alongside more common uses like completion and documentation, but ultimately these systems are not able to think by design; the work still has to be done, and they can’t go much beyond platitudes. You ask how common the horrible cases are, but that might not be the correct question. Horrific comments are easy to spot and filter out. The real issue is perfectly decent-looking “minor fixes” that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context. Those, even if rare (empirically, I’d say they are not that rare for now), are much harder to spot without full human analysis, and are a real threat.
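
      To make that concrete, here is a deliberately tiny, hypothetical sketch (Python, not taken from any real PR) of the kind of “cleanup” I mean: it reads fine in a diff, passes a shallow test, and still silently drops data.

          def rolling_sum(values: list[float], window: int) -> list[float]:
              """Sum of every length-`window` slice of `values`."""
              return [sum(values[i:i + window])
                      for i in range(len(values) - window + 1)]

          def rolling_sum_patched(values: list[float], window: int) -> list[float]:
              # The "simplification": the `+ 1` is gone, so the final window is
              # never produced. Nothing crashes and the types still line up.
              return [sum(values[i:i + window])
                      for i in range(len(values) - window)]

          assert rolling_sum([1, 2, 3, 4], 2) == [3, 5, 7]
          assert rolling_sum_patched([1, 2, 3, 4], 2) == [3, 5]  # last window lost

      A reviewer skimming for style will wave that through; only someone who remembers why the `+ 1` was there will stop it.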

      Yet another anecdote… yes, that’s a lot. Given the current hype, I can mostly only base my findings on personal experience. I use AI-based code completion, assuming the completion is short enough to check at a glance and the context is small enough that it can’t make mistakes. At most two or three lines at a time. Even in this context, while checking that the generated code matches what I was going to write, I’ve seen a handful of mistakes slip through over a few months. It makes me dread what could get through a PR system, where the codebase is not necessarily fresh in the mind of the reviewer.

      This is not to say that none of this is useful, but for it to be, it would require an extremely high level of trust, far higher than we grant current human intervention (which is also not great and a source of mistakes, I’m very aware of that). The goal should not be to emulate human mistakes, but to make something better.

      • MagicShel@lemmy.zip · 1 point · 1 day ago (edited)

        An LLM providing “an opinion” is not a thing

        Agreed, but can we just use the common parlance? Explaining completions every time is tedious, and almost everyone talking about it at this level already knows. It doesn’t think, it doesn’t know anything, but it’s a lot easier to use those words to mean something that seems analogous. But yeah, I’ve been on your side of this conversation before, so let’s just read all that as agreed.

        this would not have to reach a human or an AI agent or anything else before getting fixed, with minimal resources

        There are tools that do some of this automatically. I picked really low-hanging fruit that I still see every single day in multiple environments. LLMs attempt (wrong word, I know) more, but they need review and acceptance by a human expert.

        The real issue is perfectly decent-looking “minor fixes” that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context. Those, even if rare (empirically, I’d say they are not that rare for now), are much harder to spot without full human analysis, and are a real threat.

        I get that folks are trying to fully automate this. That’s fucking stupid. I don’t let seasoned developers commit code to my repos without review; why would I let AI? Incidentally, seasoned developers can also suggest fixes with subtle errors, and sometimes those escape into the code base. Or sometimes perfectly good code that worked fine on-prem goes to shit in the cloud: I just had to argue my team into fixing something that, thanks to lazy loading, executed over 10k SQL statements on a single page load in some cases. That shit worked “great” on-prem but was taking up to 90 seconds in the cloud. All written by humans.
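
        For anyone who hasn’t hit it, that lazy-loading trap is the classic N+1 query problem. A hypothetical, scaled-down sketch (Python with sqlite3, nothing like our actual code) of the shape of it:

            import sqlite3

            db = sqlite3.connect(":memory:")
            db.executescript("""
                CREATE TABLE orders (id INTEGER PRIMARY KEY);
                CREATE TABLE items (id INTEGER PRIMARY KEY,
                                    order_id INTEGER, price REAL);
            """)
            db.executemany("INSERT INTO orders (id) VALUES (?)",
                           [(i,) for i in range(1, 101)])
            db.executemany("INSERT INTO items (order_id, price) VALUES (?, ?)",
                           [(i, 9.99) for i in range(1, 101) for _ in range(3)])

            # Lazy loading: 1 query for the orders, then one more per order.
            # 101 round trips here; harmless against a local database, brutal
            # once every round trip crosses a network to a managed cloud DB.
            totals = {}
            for (order_id,) in db.execute("SELECT id FROM orders").fetchall():
                (total,) = db.execute(
                    "SELECT COALESCE(SUM(price), 0) FROM items WHERE order_id = ?",
                    (order_id,)).fetchone()
                totals[order_id] = total

            # Eager version: the same numbers in a single query.
            totals_joined = dict(db.execute(
                "SELECT order_id, SUM(price) FROM items GROUP BY order_id"))

        Same result either way; the only difference is how many times you pay the round-trip cost, which is the kind of thing that goes from milliseconds on-prem to 90 seconds in the cloud.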

        The goal should not be to emulate human mistakes, but to make something better.

        I’m sure that is someone’s goal, but LLMs aren’t going to do that. They are a different tool that helps but does not in any way replace human experts. And I’m caught in the middle of every conversation, because I don’t hate them enough for one side and I’m not hyped enough about them for the other. But I’ve been working with them for several years now, watched them grow since GPT-2, and I understand them pretty well. Well enough not to trust them to the degree some idiots do, but I still find them really handy.

    • snooggums@lemmy.world · +30 / −1 · 2 days ago

      The goal of the “AI agent” approach doesn’t include a human reviewer. The agent is independent, or is reviewed by other AI agents. Full automation.

      They are selling those AI agents as working right now despite the obvious flaws.

      • MangoCats@feddit.it · +6 / −1 · 2 days ago

        They’re also selling self-driving cars… the question is: when will self-driving cars kill fewer people per passenger-mile than average human drivers?

        • Echo Dot@feddit.uk · 1 point · 1 day ago

          There’s more to it than that; there’s also the cost of implementation.

          If a self-driving system killed on average one fewer person than your average human driver does, but cost $100,000 to install in the car, then it still wouldn’t be worth implementing.

          Yes, I know that puts a price on human life, but that is how economics works.

        • snooggums@lemmy.world · 3 points · 2 days ago

          Right now they do, thanks to a combination of extra oversight, generally travelling at slow speeds, and being restricted in area. Kind of like how children are less likely to die in a swimming pool with lifeguards than in rivers and at beaches without lifeguards.

          Once they are released into the wild I expect a number of high-profile deaths, but I also assume those fatalities will be significantly lower than the human average, because the cars will be tuned to be overly cautious. I do expect them to have a high rate of low-speed collisions when they encounter confusing or absent road markings in rural areas.

          • MangoCats@feddit.it · 3 points · 2 days ago

            Not self-driving, but the “driver assist” on a rental we had recently would see skid marks on the road and swerve to follow them - every single time. That’s going to be a difference between the automated systems and human drivers: humans do some horrifically negligent and terrible things, but… most humans tend not to repeat the same mistake too many times.

            With “the algorithm” controlling thousands or millions of vehicles, when somebody finds a hack that causes one to crash, they’ve got a hack that will cause all similar ones to crash. I doubt we’re anywhere near “safe” learn-from-their-mistakes self-recoding on these systems yet, and that has the potential for even worse and less predictable outcomes.

            • JordanZ@lemmy.world · 2 points · 1 day ago

              Watched a Tesla do weird things at this intersection because the lines are painted erroneously. It stopped way back from where a sane person would in the left-turn lane. I can only presume it was because the car in the center lane had its tires ‘over the line’, even though it’s a messed-up line. There is plenty of room, but it got confused and just stopped a full ~3 car lengths back from the light, where the road is narrower because of the messed-up line.

        • Initiateofthevoid@lemmy.dbzer0.com · 1 point · 2 days ago

          The issue will remain that liability will be completely transferred from individual humans to faceless corporations. I want self-driving cars to be a thing - computers can certainly be better than humans at driving - but I don’t want that technology to be profit-motivated.

          They will inevitably cause some accidents that could have been prevented if not for the “move fast and break things” style of tech development. A negligent driver can go to jail; a negligent corporation gets a slap on the wrist in our society. And traffic collisions will mean facing powerful litigation teams when the companies inevitably refuse to pay for damages through automated AI denials, like private health insurance companies.

  • ᕙ(⇀‸↼‶)ᕗ@lemm.ee · +7 / −1 · 2 days ago

    From what I read, the curl guys aren’t worried, as they do not accept AI PRs. While terrible for some in the short term, I believe that in the long term this will create a great gap between all the dying closed-source projects and the growing open-source world. It is a weak, temporary attempt to harm open source, but it is useful long term. No reasons left to stick with big corpo. None.