• semioticbreakdown [she/her]@hexbear.net · 12 points · 5 days ago

    This is what the 4E cognition and active inference crowds in academia argue, and I agree. There’ve been pretty compelling results on smaller problems, and recent applications to RL. Under those frameworks, especially the latter, things like wants, needs, and curiosity arise naturally from the agent trying to maintain itself in the world and understand the causes of phenomena.
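
    To make the active inference bit a little more concrete: a policy’s expected free energy roughly splits into a “risk” term (how far predicted outcomes are from the agent’s preferred ones, i.e. its needs/wants) and an “ambiguity” term (how uninformative its expected observations are, which is where the curiosity-like drive to resolve uncertainty about causes comes from). Here’s a toy sketch, purely illustrative and not tied to any particular paper’s notation or library API:

    ```python
    import numpy as np

    # Toy sketch (illustrative only): expected free energy G of a policy for a
    # discrete active-inference agent, split into "risk" (distance from preferred
    # outcomes ~ needs/wants) and "ambiguity" (expected uninformativeness of
    # outcomes ~ where curiosity-like drives come from). The agent picks the
    # policy with the lowest G.

    def expected_free_energy(q_states, likelihood, log_preferences):
        """q_states: q(s | policy), shape (S,)
        likelihood: p(o | s), shape (O, S), columns sum to 1
        log_preferences: log p(o) encoding preferred outcomes, shape (O,)"""
        q_obs = likelihood @ q_states  # predicted outcome distribution under the policy
        # risk: KL divergence from predicted outcomes to preferred outcomes
        risk = np.sum(q_obs * (np.log(q_obs + 1e-16) - log_preferences))
        # ambiguity: expected entropy of p(o|s) under the predicted states
        entropy_per_state = -np.sum(likelihood * np.log(likelihood + 1e-16), axis=0)
        ambiguity = entropy_per_state @ q_states
        return risk + ambiguity

    # Tiny made-up example: two hidden states, two outcomes, strong preference for outcome 0.
    likelihood = np.array([[0.9, 0.2],
                           [0.1, 0.8]])        # p(o|s)
    log_pref = np.log(np.array([0.95, 0.05]))  # the agent's "needs"
    for name, q in [("stay", np.array([0.9, 0.1])), ("move", np.array([0.1, 0.9]))]:
        print(name, expected_free_energy(q, likelihood, log_pref))
    ```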

    • Awoo [she/her]@hexbear.net · 11 points · 5 days ago (edited)

      The basic need/want all biological forms have is energy: the need to acquire energy, and the need to use energy more efficiently in order to survive longer between energy acquisitions.

      Everything else evolves from that starting point. It’s just a matter of adding predators and complexity to the environment, as well as adding more “senses”: sight, hearing, taste, etc.

      One thing I think we’re not yet realising is that intelligence probably requires other intelligence in order to evolve. Your prey aren’t going to improve naturally if your predators aren’t also improving and evolving intelligently. Intelligent animal life came from the steady progression of ALL other animal life in competition or cooperation with one another. The creation and advancement of intelligence is an entire ecosystem, and I don’t think we will create artificial intelligence without also creating an entire ecosystem for it to evolve within, alongside many other artificial intelligences in the environment.

      Humans didn’t magically pop into existence smart. They were created by surviving against other things that were also surviving. The system holistically created the intelligence (a toy sketch of that dynamic follows below).
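
      To illustrate that point, here’s a very rough toy sketch (all numbers and names made up for illustration) of two populations whose fitness is only defined against each other, so neither side has anything to improve against if the other is removed:

      ```python
      import math
      import random

      # Toy co-evolution sketch (illustrative only): predators and prey each only
      # "improve" because the other side is also improving. Fitness is defined
      # purely through encounters with the other population.

      def catch_probability(predator_skill, prey_skill):
          """Chance the predator wins a single encounter, from the skill gap."""
          return 1.0 / (1.0 + math.exp(prey_skill - predator_skill))

      def next_generation(population, fitness, mutation=0.1):
          """Keep the fitter half, refill with mutated copies of the survivors."""
          ranked = [skill for _, skill in sorted(zip(fitness, population), reverse=True)]
          survivors = ranked[: len(ranked) // 2]
          children = [skill + random.gauss(0.0, mutation) for skill in survivors]
          return survivors + children

      predators = [random.gauss(0.0, 1.0) for _ in range(20)]
      prey = [random.gauss(0.0, 1.0) for _ in range(20)]

      for generation in range(200):
          predator_fitness = [sum(catch_probability(p, q) for q in prey) for p in predators]
          prey_fitness = [sum(1.0 - catch_probability(p, q) for p in predators) for q in prey]
          predators = next_generation(predators, predator_fitness)
          prey = next_generation(prey, prey_fitness)

      # Both average skills ratchet upward together; drop either population and
      # the other's selection pressure (and its "improvement") disappears.
      print(sum(predators) / len(predators), sum(prey) / len(prey))
      ```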

      • MemesAreTheory [he/him, any]@hexbear.net · 5 points · 5 days ago (edited)

        I’ve had this thought for 5 or so years. With any luck, maybe I’ll put it into something publishable before I’m rounded up by the anti-communist death squads that come for academics. I think intelligence is fundamentally a social/collective phenomenon, at least in a broad sense where we consider a predator/prey relationship some kind of sociality too. Humans just got really really really surprisingly good at it relative to other species because of our ability to communicate using language, and thus transfer knowledge gains very widely and very quickly compared to a non-verbal animal limited to its immediate social sphere or, slower yet, to passing down changes via evolution and genetic advantage. Still, though, this is reliant on a social world and a kind of thinking that can be fundamentally collaborative. Our language and mental models of the world aren’t just geared to get us what we as individual agents desire; they were developed by a process that required them to be understandable to other similarly intelligent creatures.

          • MemesAreTheory [he/him, any]@hexbear.net · 2 points · 5 days ago

            Thanks for sharing! My approach to this is from a philosophy background, less coding, but I’ll see if I can puzzle my way through some of the technical language I don’t use regularly. I just took a “Phil of mind and artificial intelligence” course waaaayyyy back in 2020 that first turned me onto these topics, and that may as well be ancient history as far as “AI” is concerned. I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence. I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”

            However, since then I’ve specialized in other areas of legal and political philosophy, and I have less regular interaction with the current discussion other than to hate on it and be a gadfly whenever less skeptical members of my community start getting seduced by the allure of yet another tech con.

            • semioticbreakdown [she/her]@hexbear.net · 1 point · 5 days ago

              Haha, hope you get something out of it. I thought there were a lot of connections with postmodern philosophy actually, since both are informed by applications of semiotics.

              > I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence.

              Oh definitely. I think a large part of it is the wholly mechanistic approach that kind of has been pervasive for a while. Brains as just computers, and all that.

              > I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”

              I mean, look at the first AI winter. Symbolic AI was all the rage and was considered completely sufficient for a general intelligence, and then they were never able to make one and all the funding dried up.

              • MemesAreTheory [he/him, any]@hexbear.net · 2 points · 5 days ago

                When I learned about that AI winter, it was exactly the pattern I predicted at the time that we’d end up repeating. Who cares what an up-jumped undergrad auditing a seminar thinks, though, right? (Not really, they were very gracious and receptive, but like, I couldn’t publish anything due to a lack of professional skill at that time either.) Again, this wasn’t my field of expertise, but I did find it an interesting problem, and had life put me in another direction, I’d have specialized in philosophy of mind instead.

      • semioticbreakdown [she/her]@hexbear.net · 4 points · 5 days ago (edited)

        I agree. The development of intelligence resulted from the tension between the organism and its environment, and between the organism and other organisms. The development of an individual organism’s intelligence does, too. It really is all dialectics in the end.

          • Awoo [she/her]@hexbear.net · 3 points · 5 days ago (edited)

          Yes. Dialectics layered on dialectics layered on dialectics.

          Increasingly complex ecosystems of dialectics over time.

          The problem I foresee with the future of machine learning is that it’s easy to create one dialectic. It’s easy to create 2 or even 100… But at a certain point you reach “ok, so what dialectic do we add now?”

          We do not have an accurate model of all the dialectics that exist in our biological world. How are we supposed to recreate the conditions that create the same kind of intelligence we have if we can’t create the same dialectics that formed it? If we miss some parts out, it will not create the same kind of intelligence; it may not have the same morals or the same fundamental beliefs as we do, because the environment that shapes it won’t be the same environment.

          The creation of human-like artificial intelligence will require the recreation of conditions that gave rise to human evolution.

          This brings me back to that techbro theory that we might be living in a simulation. If we aim to create artificial intelligence like us, then we will end up creating a simulation that looks like our environment.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 4 points · 5 days ago

      Yeah, it makes sense that selection pressures from the environment would lead to the evolution of needs on the part of the agent. Volition and directed action give a survival benefit.