• ByteJunk@lemmy.world · 6 hours ago

    No no no.

    You feed it back to the AI and it will be like:

    "I see you have a very solid workout plan right here; it shows a great understanding of physiology and the most recent developments in training plans!

    There’s a small issue and a larger problem, however, that need to be addressed, …

    Proceeds to create a whole new plan

  • Shellofbiomatter@lemmus.org · 11 hours ago

    Well, even that’s an advanced method. Based on how many messed-up plans with obvious flaws I saw posted back on Reddit, way too many people aren’t even uploading those plans back to the AI to ask for feedback.

    • 13igTyme@piefed.social · 10 hours ago

      Reddit was one of the larger data sources that most LLMs used when being built. There’s a reason why they’re wrong 60% of the time.

  • bamboo@lemmy.blahaj.zone · 10 hours ago

    Obviously the solution here is to use multi-agent AI where one agent produces the code and the other agent is skeptical of the code and tears it apart and tells the first agent to start over and try again.
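    The generator/critic setup described above can be sketched as a simple loop. This is a hypothetical illustration, not any real framework's API: `generate()` and `critique()` are stand-ins for calls to two separate model agents.

    ```python
    # Minimal sketch of a generator/critic agent loop, assuming two
    # hypothetical agents: one produces a draft, the other tears it apart.

    def generate(task, feedback=None):
        # Stand-in "generator" agent: produces a draft, or a revision
        # if the critic sent it back with feedback.
        if feedback is None:
            return f"plan for {task}"
        return f"revised plan for {task}"

    def critique(draft):
        # Stand-in "skeptical" agent: returns None to approve,
        # or feedback telling the generator to start over.
        if draft.startswith("revised"):
            return None
        return "start over and try again"

    def review_loop(task, max_rounds=3):
        # Alternate generation and criticism until the critic
        # approves or the round budget runs out.
        draft = generate(task)
        for _ in range(max_rounds):
            feedback = critique(draft)
            if feedback is None:
                return draft
            draft = generate(task, feedback)
        return draft
    ```

    In a real setup each function would be a separate model call, and the `max_rounds` cap matters: without it, two disagreeing agents will loop forever.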

    • ByteJunk@lemmy.world · 6 hours ago

      Amazing, so the solution is to implement incompetent middle management into the AI workflow?

      • egrets@lemmy.world · 4 hours ago

        That’s a funny but ungenerous take on the approach, because it’s actually quite effective. There’s a guy on the Fediverse who built a local model that self-governs this way; it’s kind of like peer review. It also gets rid of all the sycophancy and “how about I also do this?” pleading of commercial agentic AI frameworks.

        You still need to be in control as the user – this doesn’t make the AI foolproof when it comes to misinterpretation, overengineering, etc.