• The machine learning models that came before LLMs were often narrower in scope but far more competent within it. Image recognition, for example, is something the newer broad “multimodal” models still struggle with; theorem provers and other symbolic AI applications cover another area where LLMs fall short.

    The modern crop of LLMs is juiced-up autocorrect. They find the statistically most likely next token and spit it out, based on patterns in the training data. They don’t create novel thoughts or logic; they regurgitate from their slurry of training data. The human brain does not work anything like this. LLMs are not modeled on any organic system, just on what some ML/AI researchers assumed was the structure of a brain. When we “hallucinate logic,” it’s part of a process of envisioning abstract representations of our world and reasoning through different outcomes; when an LLM hallucinates, it’s just producing what its training dictates is a likely answer.
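    To make the “most likely next token” point concrete, here’s a minimal sketch of that loop using a toy bigram counter instead of a real transformer. The corpus and names are purely illustrative; actual LLMs condition on long contexts with billions of learned weights, but the greedy decoding step is the same idea.

        from collections import Counter, defaultdict

        # Toy "training data": the only thing this model will ever know.
        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Count how often each token follows each other token.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_token(token):
            # Greedy decoding: emit the statistically most likely continuation.
            return follows[token].most_common(1)[0][0]

        token, output = "the", ["the"]
        for _ in range(5):
            token = next_token(token)
            output.append(token)

        print(" ".join(output))  # e.g. "the cat sat on the cat"

    Nothing in that loop reasons about cats or mats; it only replays frequencies from the training corpus, which is the point being made about scale-ups of the same mechanism.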

    This doesn’t mean ML lacks a broad variety of applications, but LLMs have gotta be one of the weakest in terms of actually shifting paradigms. Source: software engineer who works with neural nets, with an academic background in computational math and statistical analysis.