So DeepSeek has this very cool feature that displays what it is “thinking” before it gives you its answer. It’s quite neat in that you can see its “thought” process, but it also has the added benefit of revealing whatever bias it might have developed with its training data.
In this case, I asked it if we might be living in a “slow motion World War 3” with the Maidan coup in Ukraine being the opening shots. The mf thought that I might “buy Russian propaganda” because I called it a coup rather than a revolution.
So although DeepSeek is Chinese, it was still very clearly trained on a lot of mainstream information.
I know this is somewhat unserious, but I’m genuinely distressed at the thought that you’d think an LLM would be “based” if it agreed with this.
It is talking to you in English, and English media (and subsequently English public opinion) is overwhelmingly of that view. If Elon Musk can’t keep his pet chatbot a Nazi because the input data isn’t 100% Nazi, why would one from a Chinese firm (not the government) uphold a decent political line?
Yeah, fair comment. It just kinda took me off guard, I guess.