So DeepSeek has this very cool feature that displays what it is “thinking” before it gives you its answer. It’s quite neat in that you can see its “thought” process, but it also has the added benefit of revealing whatever bias it might have developed with its training data.
In this case, I asked it if we might be living in a “slow motion World War 3,” with the Maidan coup in Ukraine being the opening shots. The mf thought that I might “buy Russian propaganda” because I called it a coup rather than a revolution.
So although DeepSeek is Chinese, it was still very clearly trained on a lot of mainstream information.
The machine learning models that came before LLMs were often narrower in scope but much more competent within it. E.g. image recognition models, a task the newer, broad “multimodal” models struggle with; or theorem provers and other symbolic AI applications, another area where LLMs fall short.
The modern crop of LLMs is juiced-up autocorrect. They find the statistically most likely next token and spit it out based on their training data. They don’t create novel thoughts or logic, just regurgitate from their slurry of training data. The human brain does not work anything like this. LLMs are not modeled on any organic system, just on what some ML/AI researchers assumed was the structure of a brain. When we “hallucinate logic” it’s part of a process of envisioning abstract representations of our world and reasoning through different outcomes; when an LLM hallucinates, it is just producing what its training dictates is a likely answer.
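To make the “statistically most likely next token” point concrete, here’s a deliberately toy sketch: a bigram model that counts which word followed which in a tiny “training corpus,” then greedily emits the most frequent successor each step. Real LLMs use learned neural networks over subword tokens and sample from a probability distribution, not raw counts, but the autoregressive next-token loop is the same shape. All names and the corpus here are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus of words.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigrams: how often each word followed each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    # Greedy decoding: emit the single most frequent successor
    # seen in the training data (no novel thought involved).
    return bigrams[prev].most_common(1)[0][0]

out = ["the"]
for _ in range(3):
    out.append(next_token(out[-1]))
print(" ".join(out))  # regurgitates a phrase straight from the corpus
```

The model can only ever recombine continuations it has already seen; ask it about a word outside its corpus and it has nothing to say, which is the counting-statistics analogue of a hallucination-prone gap.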
This doesn’t mean ML lacks a broad variety of applications, but LLMs have gotta be one of the weakest in terms of actually shifting paradigms. Source: software engineer who works with neural nets, with an academic background in computational math and statistical analysis.