No no no.
You feed it back to the AI and it will be like:
"I see you have a very solid workout plan right here, it shows a great understanding of physiology and the most recent developments in training plans!
There's a small issue and a larger problem, however, that need to be addressed, …
Proceeds to create a whole new plan
More of a fuckai thing.
Fuck AI, in real life
Well, even that's an advanced method. Based on how many messed-up plans with obvious flaws I saw posted back on Reddit, way too many people aren't even uploading those plans back to the AI to ask for feedback.
Reddit was one of the larger data sources most LLMs were trained on. There's a reason it's wrong 60% of the time.
Obviously the solution here is to use multi-agent AI where one agent produces the code and the other agent is skeptical of the code and tears it apart and tells the first agent to start over and try again.
Amazing, so the solution is to implement incompetent middle management into the AI workflow?
That's a funny but ungenerous take on the approach, because it's actually quite effective. There's a guy on the Fediverse who built a local model that self-governs this way; it's kind of like peer review. It also gets rid of all the sycophancy and "how about I also do this?" pleading of commercial agentic AI frameworks.
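The generator/critic loop being described can be sketched roughly like this. This is a minimal illustration only: the `generate` and `critique` functions here are plain Python stand-ins for LLM calls, and every name (`generate`, `critique`, `generator_critic_loop`, `max_rounds`) is made up for the example, not from any real framework.

```python
# Hypothetical sketch of a two-agent generate/critique loop.
# The "agents" are placeholder functions, not real model calls.

def generate(task, feedback=None):
    """Placeholder generator agent: drafts a solution, revising if given feedback."""
    draft = f"solution for {task}"
    if feedback:
        draft += f" (revised: {feedback})"
    return draft

def critique(draft):
    """Placeholder critic agent: returns None to accept, or feedback to reject."""
    if "revised" in draft:
        return None  # toy rule: accept any revised draft
    return "handle the edge cases"

def generator_critic_loop(task, max_rounds=3):
    """Alternate generator and critic until the critic accepts or rounds run out."""
    feedback = None
    draft = None
    for _ in range(max_rounds):
        draft = generate(task, feedback)
        feedback = critique(draft)
        if feedback is None:
            return draft  # critic accepted this draft
    return draft  # budget exhausted; return the last attempt

print(generator_critic_loop("sort a list"))
```

In a real setup the critic would be a second model call with a deliberately skeptical system prompt, and the loop would need a hard round limit exactly like the one above so the two agents can't argue forever.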
You still need to be in control as the user – this doesn't make the AI foolproof against misinterpretation, overengineering, etc.
This is literally what I do at work, and it's extremely effective.
That tone sounds like Perplexity to me.