Did you literally just say “if you have genAI check the work of other genAI the genAI will say it’s good”?
Yes, that is how they're getting past a large number of the previous issues: multiple tries across model versions with different training, plus web searches mixed in. They're buying accuracy by brute-forcing past poor precision. It's expensive as fuck too.
But there are also diminishing returns.
Absolutely correct. One query to a local LLM has a decent chance of being wrong. To bump that up, they're generating a shit ton of queries. It's eventually good for humanity overall: by the time they get it truly reliable, the cost of the queries will be so high that when the venture capital runs out, no one will be able to afford it, even if it is replacing wages. Then we can go back to just using it as a tool.
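The "shit ton of queries" trick is basically majority voting: sample the same question many times and take the most common answer, so independent errors wash out. A minimal sketch, where `query_model` is a hypothetical stub standing in for a real LLM call that is right only ~70% of the time:

```python
import random
from collections import Counter

rng = random.Random(42)  # fixed seed so the demo is reproducible

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for one LLM call.

    Returns the correct answer ("4") ~70% of the time, otherwise a
    random wrong digit -- mimicking a single noisy, unreliable query.
    """
    if rng.random() < 0.7:
        return "4"
    return str(rng.randint(0, 9))

def majority_vote(prompt: str, n: int = 25) -> str:
    """Ask the model n times and keep the most common answer."""
    answers = [query_model(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("What is 2 + 2?"))
```

One query fails ~30% of the time, but the majority over 25 queries is almost never wrong, which is exactly why the cost scales so badly: each nine of reliability is bought with more redundant samples, not a better model.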