• LLMs and other AI systems, imo, cannot act totally without human interaction (despite the efforts of dipshits) because they lack the fundamental ability to create, tackle, and resolve problems dialectically.

    The ultimate core of any AI agent is its training data. Its synapses. For a human or other sentient animal, these synapses are in constant flux - connections break, reconnect, and form new connections altogether. These synapses then interface with our environments to form responses to stimuli.

    For an AI model, they are static. By our current designs, the connections cannot be meaningfully altered at runtime, only through a process we call “training”. We periodically train new models, but once a model is distributed, its synapses are permanently fixed in place.

    Now, there are some caveats to this. We can give models novel inputs or build up a working memory with prompts, contextual data, etc., but these are peripheral to the model itself. You can deploy a GPT-5 model as a programming agent that performs better at programming than a chatbot agent does, but fundamentally they’re deterministic programs. Strip any agent of its context and inputs and it’ll behave like any other model built on the same training data and methodology.
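
    To make that concrete, here’s a minimal sketch (using gpt2 via the Hugging Face transformers library purely as a stand-in; any causal LM would do): with frozen weights and greedy decoding, the same input produces the same output on every run.

    ```python
    # Sketch: with frozen weights and greedy decoding, the output is a pure
    # function of the input. Model choice is an arbitrary placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # weights are fixed; nothing updates at inference time

    ids = tok("The cat sat on the", return_tensors="pt").input_ids

    # Greedy decoding (do_sample=False) removes the only source of randomness,
    # so both runs produce identical continuations (on the same hardware).
    out1 = model.generate(ids, max_new_tokens=10, do_sample=False)
    out2 = model.generate(ids, max_new_tokens=10, do_sample=False)
    assert tok.decode(out1[0]) == tok.decode(out2[0])
    ```

    Sampling adds pseudo-randomness on top, but fix the seed and even that becomes reproducible - the “creativity” is a dice roll over a frozen distribution.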

    In my view, dialectics are something you experience in everything you do - every day we form countless helices of thesis-antithesis-synthesis in the ways our minds accept and process information and solve problems. Strip a human of the memory of their current role and task and they will react in ways totally unique and independent of anyone else. We are fundamentally not deterministic!

    The inability of AI to break out of deterministic outputs induces the ‘model collapse’ problem, wherein feeding a model the outputs of other models deteriorates its abilities. AI’s determinism means it constantly relies on the nondeterministic nature of humans for the very ability it imitates.
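
    You can reproduce the effect in miniature. This toy loop is my own illustration, not from any particular paper: fit a Gaussian to some data, sample from the fit, refit on the samples, and repeat. Each generation only ever sees the previous generation’s outputs, so on average the tails get clipped and the spread decays.

    ```python
    # Toy model-collapse loop: each "generation" trains only on the previous
    # generation's outputs. In expectation the spread (sigma) decays over
    # generations; any single run is noisy, but the downward drift is typical.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=100)        # stand-in for human-written data

    for gen in range(200):
        mu, sigma = data.mean(), data.std()      # "train" a model on current data
        data = rng.normal(mu, sigma, size=100)   # next gen sees only model outputs
        if gen % 20 == 0:
            print(f"gen {gen:3d}: sigma = {sigma:.3f}")
    ```

    Real LLM collapse is messier than a one-parameter Gaussian, but the mechanism rhymes: a sampled estimate of a distribution loses a little diversity each round, and there’s nothing inside the loop to put it back.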

    I think there are some limitations to my line of thought, but the way we create AI models now works great for repetitive and non-novel tasks, like text transformation; the truly creative side of their outputs, though, is only an imitation of a biological equivalent.

    • Awoo [she/her]@hexbear.net

      Right, and this works well for humans who want to observe the data and understand how the model might function from a technical standpoint. They can look at the synapses, slowly work out how things are connected, and draw rough conclusions about how it comes to certain decisions.

      In a model where that is in constant flux? Impossible. You can’t observe and understand something that changes as you observe it. You could freeze it and look at it while it’s “unconscious”, I suppose, a bit like brain-scanning a patient.
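
      And that frozen “brain scan” is exactly what today’s models give you for free: the weights are a static tensor dump you can walk through at your leisure. A quick sketch (gpt2 is just an arbitrary example model):

      ```python
      # A frozen model is a pile of static tensors; you can enumerate every
      # "synapse" without it changing under you.
      from transformers import AutoModelForCausalLM

      model = AutoModelForCausalLM.from_pretrained("gpt2")
      for name, param in list(model.named_parameters())[:5]:
          print(f"{name}: shape={tuple(param.shape)}")
      # Nothing here moves between inferences - the "unconscious brain scan"
      # is the default state of the system.
      ```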

      Will this constant flux always work? Of course not. And that is why natural selection is a necessary process to evolve it: the models that fail have to die, and the models that succeed go on to form the evolved artificial lifeforms.
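
      Worth noting that the loop you’re describing is basically a textbook evolutionary algorithm. A toy sketch, with a made-up fitness function standing in for “survives its environment”: score a population, let the failures die, and breed mutated copies of the winners.

      ```python
      # Minimal evolutionary loop: the "models" are just parameter vectors and
      # fitness is a toy function, but the select-kill-mutate cycle is the same
      # shape as the process described above.
      import numpy as np

      rng = np.random.default_rng(42)
      POP, DIM, GENS = 50, 8, 100
      target = rng.normal(size=DIM)          # arbitrary goal the population evolves toward

      def fitness(x):
          return -np.sum((x - target) ** 2)  # closer to target = fitter

      pop = rng.normal(size=(POP, DIM))
      for gen in range(GENS):
          scores = np.array([fitness(x) for x in pop])
          survivors = pop[np.argsort(scores)[-POP // 2:]]   # the failures die
          children = survivors + rng.normal(scale=0.1, size=survivors.shape)
          pop = np.concatenate([survivors, children])       # winners reproduce, mutated

      print("best fitness:", max(fitness(x) for x in pop))
      ```

      The catch, as you say, is scale: this works on 8-dimensional toys, but selecting over populations of full AI agents embedded in a rich simulation is a wholly different order of compute.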

      The compute to achieve this sounds like it would need a Dyson Sphere’s worth of energy, though. We’re talking about simulating millions of different creatures, interacting in different ways, all functioning as proper AIs that calculate and adapt. If you think we’re burning absurd amounts of energy on AI currently, just wait until these projects really step up.