Doesn’t DeepSeek still censor sensitive questions (e.g. about Taiwan and the Tiananmen protests) when run locally and offline?
Although it’s certainly got its merits (being FOSS? and much more energy-efficient than most models), censorship of state violence is still bad.
I wonder how other chat AIs do on that kind of censorship. Grok we already know to be shit, it warps the truth beyond words. Even on non-political stuff, its accuracy is trash compared to other models.
How about ChatGPT? Ollama? Kobold? Llamafile? Do they censor stuff?
Every model has bias from its training set, and each has strengths and weaknesses based on various factors, the training data among them. The real answer is to run an ensemble of local models and use the best one for any given task.
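A minimal sketch of that ensemble idea, assuming the models are served locally through Ollama's HTTP API (`http://localhost:11434/api/generate`); the model names and the crude keyword routing are purely illustrative, not recommendations:

```python
# Hypothetical per-task router over an ensemble of local models,
# queried through Ollama's /api/generate endpoint.
import json
import urllib.request

# Illustrative ensemble; swap in whatever models you actually run locally.
ENSEMBLE = {
    "code": "qwen2.5-coder",
    "politics": "llama3.1",  # e.g. a model you trust more on sensitive topics
    "general": "mistral",
}

def pick_model(prompt: str) -> str:
    """Crude keyword routing; a real setup might score each model's output."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("def ", "function", "bug", "compile")):
        return ENSEMBLE["code"]
    if any(k in lowered for k in ("taiwan", "tiananmen", "protest", "election")):
        return ENSEMBLE["politics"]
    return ENSEMBLE["general"]

def ask(prompt: str) -> str:
    """Send the prompt to whichever local model the router picked."""
    body = json.dumps({
        "model": pick_model(prompt),
        "prompt": prompt,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In practice you'd replace the keyword check with something smarter (a cheap classifier model, or just asking each model and comparing answers), but the shape is the same: route the prompt, then query one local endpoint.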
It's not FOSS, it's open weights; there's a huge difference.