Amazing, so the solution is to implement incompetent middle management into the AI workflow?
Yes, and I’m beginning to suspect that the group of people who don’t understand why they have an incompetent middle manager and the group who don’t get good results out of AIs have 100% overlap.
Please feel free to downvote me into oblivion.
That’s a funny but ungenerous take on the approach, because it’s actually quite effective. There’s a guy on the Fediverse who built a local model that self-governs this way; it’s kind of like peer review. It also gets rid of all the sycophancy and “how about I also do this?” pleading of commercial agentic AI frameworks.
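For context, the self-governing setup described above amounts to a draft–critique–revise loop: one model role produces an answer, a separate reviewer role judges it, and the draft is revised until the reviewer approves. A minimal sketch of that control flow, where `call_model` is a hypothetical stand-in for whatever local model you run (stubbed here so the example is self-contained and runnable):

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a local LLM call; stubbed for illustration."""
    if prompt.startswith("CRITIQUE"):
        # Reviewer role: approve revised drafts, reject first drafts.
        return "PASS" if "revised" in prompt else "Too vague; be specific."
    # Author role: improve the draft if a critique is present in the prompt.
    return "revised answer" if "Reviewer said" in prompt else "draft answer"

def peer_reviewed_answer(task: str, max_rounds: int = 3) -> str:
    answer = call_model(task)
    for _ in range(max_rounds):
        verdict = call_model(f"CRITIQUE this answer to '{task}': {answer}")
        if verdict == "PASS":
            break
        # Feed the critique back to the author role. The reviewer only
        # judges and never addresses the user, which is what curbs the
        # sycophancy of single-model agentic setups.
        answer = call_model(f"{task}\nReviewer said: {verdict}\nRevise.")
    return answer

print(peer_reviewed_answer("Explain X"))  # stub converges to "revised answer"
```

The `max_rounds` cap matters in practice: without it, a reviewer that never says PASS loops forever.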
You still need to be in control as the user – this doesn’t make the AI foolproof when it comes to misinterpretation, overengineering, etc.