So DeepSeek has this very cool feature that displays what it is “thinking” before it gives you its answer. It’s quite neat in that you can see its “thought” process, but it also has the added benefit of revealing whatever bias it might have developed from its training data.

In this case, I asked it if we might be living through a “slow-motion World War 3,” with the Maidan coup in Ukraine being the opening shots. The mf thought that I might “buy Russian propaganda” because I called it a coup rather than a revolution.

So although DeepSeek is Chinese, it was still very clearly trained on a lot of mainstream / LIB information.

  • sodium_nitride [she/her, any]@hexbear.net · 4 days ago

    > All the other avenues of AI research are and were NOWHERE near as comprehensive or competent as LLM machines.

    Depends on what you want to accomplish and how many resources you want to expend.

    > Discarding probability based systems as “juiced up autocorrect” will discard …

    I have not discarded LLMs. I know some people use them to great effect, but one must be deeply skeptical of their use as oracles.

    If you use them for their intended purpose, they can be useful, just as autocorrect is useful. I have used LLMs to great effect to help me cut down the word count on certain assignments, or as a pseudo-Google search for coding assistance.

    I am well aware that the other approaches cannot do these things. They tend to suck at language processing. However, AI architectures using explicitly coded rules have the advantage over LLMs that they are not so prone to hallucinating, which makes them safer and more useful for certain other tasks.
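
    To illustrate what I mean by explicitly coded rules, here is a toy forward-chaining sketch (the rules and fact names are invented for illustration; real expert-system shells are far more elaborate):

    ```python
    # Toy forward-chaining rule engine. Every derived fact traces back to an
    # explicit rule, so the system only asserts what its rules license --
    # it cannot "hallucinate" a conclusion out of thin air.
    rules = [
        ({"pressure_high", "temp_high"}, "shutdown_valve"),
        ({"shutdown_valve"}, "alarm_on"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:  # keep applying rules until nothing new is derived
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # derives shutdown_valve and alarm_on, and nothing else
    print(infer({"pressure_high", "temp_high"}))
    ```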

    Not to mention that LLMs themselves were largely unviable until the creation of the attention mechanism and humanity throwing ungodly amounts of resources at them (hundreds of billions of dollars of investment).
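
    (For context, “attention” here is just a learned weighted-averaging scheme over a sequence. A minimal numpy sketch of scaled dot-product attention, with toy sizes I picked arbitrarily:)

    ```python
    import numpy as np

    def attention(Q, K, V):
        # how similar each query is to each key, scaled by dimension
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # softmax over each row turns scores into mixing weights
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # each output is a weighted mix of the value vectors
        return weights @ V

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))      # 4 tokens, 8-dim embeddings (toy sizes)
    print(attention(X, X, X).shape)  # self-attention -> (4, 8)
    ```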

    > I am sorry to tell you that your brain also hallucinates logic, just on a much larger scale with a ton more neural connections

    I am aware that human brains also hallucinate logic. That’s why I don’t place much weight on random anecdotes when talking about politics or science.

    > Please don’t do this kind of luddite historical revisionism

    What historical revisionism? The only thing my comment mentions is that inference engines did not receive as much hype or funding as LLMs, which is true. And how is anything I have stated “luddism”?

    > go ask LISP bros how their AI machine business turned out, just don’t mention Chapter 11, they’d get PTSD

    This doesn’t mean anything when all the AI companies are hemorrhaging money at an epic scale. At least the LISP bros can say that they never built the monument to the irrationality of capitalism that is the AI stock bubble.

    Or maybe they did, with the dot-com bubble. Idk much about that period.