- cross-posted to:
- technology@hexbear.net
LLMs just mirror the real-world data they are trained on.
Other than censorship, I don’t think there is a way to make it stop. It doesn’t understand moral good or bad; it just spits out what it was trained on.
And the study should also mention that LLMs don’t do anything by themselves; they do what they are trained to do… nothing more. They are just machines.