• birdwing@lemmy.blahaj.zone · 1 day ago

    Doesn’t DeepSeek still censor sensitive questions (e.g. about Taiwan and the Tiananmen protests) even when run locally and offline?

    Although it certainly has its merits (being FOSS? and much more energy-efficient than most models), censorship of state violence is still bad.

    I wonder how other chat AIs do when it comes to that kind of censorship. Grok we already know to be shit; it warps the truth beyond words. Even on non-political stuff, its accuracy is trash compared to other models.

    How about ChatGPT? Ollama? Kobold? Llamafile? Do they censor stuff?

    • notfromhere@lemmy.ml · 1 day ago

      Every model has bias due to its training set, and each model has strengths and weaknesses based on various factors, including that training set. The real answer is to run an ensemble of local models and use the best one for any given task.
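
      The "pick the best local model per task" idea above can be sketched as a simple routing table. This is a minimal illustration, not a real benchmark: the model names and scores below are made-up placeholders, and in practice you'd fill them in from your own evals on your own hardware.

      ```python
      # Hypothetical per-task quality scores for a few local models.
      # These numbers are invented for illustration only.
      TASK_SCORES = {
          "coding":      {"deepseek-r1": 0.9, "llama3": 0.7, "mistral": 0.6},
          "translation": {"deepseek-r1": 0.6, "llama3": 0.8, "mistral": 0.7},
          "summarize":   {"deepseek-r1": 0.7, "llama3": 0.7, "mistral": 0.8},
      }

      def pick_model(task: str) -> str:
          """Return the local model with the highest score for this task type."""
          scores = TASK_SCORES[task]
          return max(scores, key=scores.get)

      print(pick_model("coding"))       # -> deepseek-r1
      print(pick_model("translation"))  # -> llama3
      ```

      The dispatch itself is trivial; the hard part is building the score table, which is why people run their own evals per task rather than trusting any single model everywhere.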