• tfowinder@beehaw.org · 1 month ago

    LLMs just mirror the real-world data they are trained on.

    Other than censorship, I don’t think there is a way to make it stop. It doesn’t understand moral good or bad; it just spits out what it was trained on.

  • Ardens@lemmy.ml · 1 month ago

    And the study should also mention that LLMs don’t do anything by themselves; they do what they are trained to do… nothing more. They are just machines.