• bamboo@lemmy.blahaj.zone · 13 hours ago

    Obviously the solution here is to use multi-agent AI, where one agent produces the code and a second agent is skeptical of it, tears it apart, and tells the first agent to start over and try again.
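    For what it's worth, the generator/critic loop being joked about here is a real pattern. A minimal sketch with stand-in functions — `generator`, `critic`, and `review_loop` are made-up names, and the "agents" are plain Python functions rather than actual LLM calls:

```python
# Sketch of a generator/critic loop: one "agent" drafts code,
# a second skeptical "agent" reviews it and demands a rewrite
# until the draft passes. Both agents are stubs for illustration.

def generator(task, feedback=None):
    # Hypothetical code-producing agent: emits a buggy first draft,
    # then a corrected one once it receives the critic's feedback.
    if feedback is None:
        return "def add(a, b): return a - b"  # deliberately wrong
    return "def add(a, b): return a + b"

def critic(code):
    # Hypothetical skeptical agent: runs a trivial check and either
    # approves the draft (returns None) or sends it back with a complaint.
    namespace = {}
    exec(code, namespace)
    if namespace["add"](2, 3) == 5:
        return None
    return "add() returns the wrong result; start over"

def review_loop(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = generator(task, feedback)
        feedback = critic(draft)
        if feedback is None:
            return draft  # critic approved the draft
    raise RuntimeError("critic never approved a draft")

print(review_loop("implement add"))  # prints the corrected draft
```

    In a real setup both functions would be model calls and the critic's feedback would be fed back into the generator's prompt, but the control flow is the same.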

    • ByteJunk@lemmy.world · 9 hours ago

      Amazing, so the solution is to introduce incompetent middle management into the AI workflow?

      • BenevolentOne@infosec.pub · 2 hours ago

        Yes, and I’m beginning to suspect that the group of people who don’t understand why they have an incompetent middle manager and the group who don’t get good results out of AIs have 100% overlap.

        Please feel free to downvote me into oblivion.

      • egrets@lemmy.world · 7 hours ago

        That’s a funny but ungenerous take on the approach, because it’s actually quite effective. There’s a guy on the Fediverse who built a local model that self-governs this way; it’s kind of like peer review. It also gets rid of all the sycophancy and “how about I also do this?” pleading of commercial agentic AI frameworks.

        You still need to be in control as the user – this doesn’t make the AI foolproof when it comes to misinterpretation, overengineering, etc.