• adultswim_antifa [he/him]@hexbear.net
    55 points · 3 days ago

    Sam Altman’s job is to hype GPT-5 so the VCs will keep putting up with him burning probably the biggest pile of money anyone has ever burned. He’s probably damaged the internet and the environment more than any single individual ever, and he’s terrorized people about their jobs for years. And he almost certainly knows it’s all bullshit, which makes him a fraud. In a just world, he would be in prison when this is all over. He would almost certainly face the death penalty in China.

    • Hohsia [any]@hexbear.net
      24 points · 3 days ago

      Abused his sister too btw. Feel like that needs to be talked about more. He’s a grade A piece of shit

    • WrongOnTheInternet [none/use name]@hexbear.net
      15 points · 3 days ago

      damaged… the environment more than any single individual ever

      Still crypto, by orders of magnitude. AI doesn’t even register.

      For example, Google’s data centres used about 30 TWh in the last year, while crypto used more like 170 TWh.

      It’s not possible to pin down ChatGPT’s usage because all the data is bad, but it’s still relatively small compared to crypto.

      • adultswim_antifa [he/him]@hexbear.net
        10 points · 3 days ago

        A lot of people contributed to those crypto numbers. The AI models run in purpose-built data centers. Some have their own generators because the power usage is so high.

        • WrongOnTheInternet [none/use name]@hexbear.net
          6 points · 3 days ago

          I don’t think any dedicated power plants have been built for AI yet, and crypto mining market concentration is very high, so ultimately not that many people.

          OpenAI appears to operate what is described as the world’s largest single data center building, with an IT load capacity of around 300 MW and a maximum power capacity of approximately 500 MW. This facility includes 210 air-cooled substations and a massive on-site electrical substation, which further highlights its immense scale. A second identical building is already under construction on the same site as of January 2025. When completed, this expansion will bring the total capacity of the campus to around a gigawatt, a record.

          So this largest one would draw roughly 4.5 TWh a year, or about 3 percent of current estimated crypto usage. With the expansion, roughly 9 TWh, or around 5 percent of estimated crypto usage.
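
          A rough back-of-the-envelope check of that math (a sketch that assumes, generously, the site draws its full quoted capacity around the clock, and uses the ~170 TWh crypto estimate cited above):

          ```python
          # Upper-bound estimate: constant draw at quoted capacity, all year.
          HOURS_PER_YEAR = 24 * 365

          def annual_twh(power_mw: float) -> float:
              """Convert a constant draw in MW to energy per year in TWh."""
              return power_mw * HOURS_PER_YEAR / 1_000_000

          CRYPTO_TWH = 170  # rough annual estimate cited above

          for label, mw in [("single building, ~500 MW", 500), ("full campus, ~1 GW", 1000)]:
              twh = annual_twh(mw)
              print(f"{label}: {twh:.1f} TWh/yr (~{100 * twh / CRYPTO_TWH:.0f}% of the crypto estimate)")
          ```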

  • terrific@lemmy.ml
    44 points · 3 days ago

    Good old Gary setting the record straight…

    No hypothesis has ever been given more benefit of the doubt, nor more funding. After half a trillion dollars in that direction, it is obviously time to move on. The disappointing performance of GPT-5 should make that enormously clear.

    Unlikely, but I like his optimism. This is how I have felt with the release of every new LLM for the past two years, but the scam is somehow still going 🤷 … I suppose many people stand to lose a lot of money when the bubble finally bursts.

  • Awoo [she/her]@hexbear.net
    32 points · 3 days ago

    LLMs have reached their limits. No matter what you do with them, they’re always going to be glorified search engines.

    AI has to be conceived from the ground up as something that learns and reproduces actual thinking based on needs/wants. A system that produces methods of walking that reduce energy use for a bot while also seeking out energy sources might only be reproducing the cognitive behaviour of a bacterium, but it is closer to life than these LLMs and has more potential to iteratively evolve into something more complex as you give it more wants/needs for its program to evolve on.

    Machine learning has more potential than this shit.
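
    Not any real robotics stack, but a toy sketch of what that kind of objective could look like (every name and weight here is made up for illustration): reward movement and energy found, penalise energy burned, and let the learning algorithm work out the gait.

    ```python
    # Hypothetical reward for an energy-seeking walker (illustration only).
    from dataclasses import dataclass

    @dataclass
    class StepOutcome:
        distance_gained: float   # metres moved this step
        energy_spent: float      # joules burned by the actuators
        energy_harvested: float  # joules picked up from a charging spot

    def reward(o: StepOutcome, w_move=1.0, w_cost=0.5, w_harvest=2.0) -> float:
        """Higher when the agent walks efficiently and seeks out energy."""
        return (w_move * o.distance_gained
                - w_cost * o.energy_spent
                + w_harvest * o.energy_harvested)

    # An RL loop would maximise the sum of these rewards over an episode.
    print(reward(StepOutcome(distance_gained=1.2, energy_spent=0.8, energy_harvested=0.0)))
    ```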

    • LLMs and other AI systems, imo, cannot act totally without human interaction (despite the efforts of dipshits) because they lack the fundamental ability to create, tackle, and resolve problems dialectically.

      The ultimate core of any AI agent is its training data. Its synapses. For a human or other sentient animal these synapses are in constant flux - the connections break, reconnect, form new connections altogether. These synapses are then connected to our environments to form responses to stimuli.

      For an AI model, they are static. The connections, in our current designs, cannot be altered at runtime, only through a process we call “training”. We periodically train new models, but once distributed, their synapses are permanently fixed in place.

      Now, we do have some caveats to this. We can give models novel inputs or build their working memory with prompts, contextual data, etc., but these are peripheral to the model itself. You can deploy a GPT-5 model as a programming agent that performs better at programming than a chatbot agent, but fundamentally they’re deterministic programs. Strip any agent of its context and inputs and it’ll behave like any other model with the same training data and methodology.

      In my view, dialectics are something you experience with everything you do - every day we form countless helices of thesis-antithesis-synthesis in the ways our minds accept and process information and solve problems. Strip a human of the memory of their current role and task and they will react in ways totally unique and independent of one another. We are fundamentally not deterministic!

      The inability of AI to break out of deterministic outputs induces the ‘model collapse’ problem, wherein feeding a model the outputs of other models deteriorates its abilities. The determinism of AI means it is constantly reliant on the nondeterministic nature of humans to imitate this creative ability.

      I think there are some limitations to my line of thought, but the way we create AI models now works great for repetitive, non-novel tasks like text transformation; the truly creative side of its outputs is only an imitation of its biological equivalent.
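
      A minimal sketch of the “frozen synapses” point (toy weights, not a real model): with fixed weights and greedy decoding, the same context always produces the same output; only retraining changes the weights.

      ```python
      # Toy frozen model: fixed weights + greedy decoding = deterministic output.
      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(size=(8, 8))        # stand-in for trained weights; never updated at runtime

      def next_token(context: np.ndarray) -> int:
          logits = W @ context           # stand-in for a forward pass
          return int(np.argmax(logits))  # greedy decoding: no sampling randomness

      ctx = rng.normal(size=8)
      print(next_token(ctx) == next_token(ctx))  # True every time
      # Prompts and context change the inputs, but W itself only changes via retraining.
      ```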

      • Awoo [she/her]@hexbear.net
        1 point · 3 hours ago

        Right and this works well for humans who want to observe the data and understand how the model might function from a technical standpoint. They can look at the synapses and essentially slowly understand how things are connected up and draw rough conclusions about how it comes to certain decisions.

        In a model where that is in constant flux? Impossible. You can’t observe and understand something that is changing as you observe it. You could freeze it and look at it while it’s “unconscious”, I suppose, a bit like brain scanning a patient.

        Will this constant flux always work? Of course not. And that is why natural selection is a necessary process to evolve it. The models that fail have to die and the models that succeed go on to form the evolved artificial lifeforms.

        The compute power to achieve this sounds like it would need a Dyson Sphere worth of energy though. We’re talking about the need for a simulation of millions of different creatures, interacting in different ways, all functioning as proper AI that calculate and adapt. If you think we’re burning absurd amounts of energy with AI currently, just wait until these projects really step up.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
      26 points · 3 days ago

      I don’t think an AI necessarily has to have needs or wants, but it does need to have a world model. That’s the shared context we all have and what informs our use of language. We don’t just string tokens together when we think. We have a model of the world around us in our heads, and we reason about the world by simulating actions and outcomes within our internal world model. I suspect that the path to actual thinking machines will be through embodiment. Robots that interact with the world, and learn to model it will be able to reason about it in a meaningful sense.
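
      A crude sketch of what “simulating actions inside a world model” means in code (the transition and scoring functions are toys I made up): the agent rolls each candidate action forward through its internal model and picks the one with the best predicted outcome.

      ```python
      # Toy model-based planning: evaluate candidate actions inside an internal world model.
      from typing import Callable

      def plan(state: float, actions: list[float],
               world_model: Callable[[float, float], float],
               score: Callable[[float], float]) -> float:
          """Pick the action whose simulated outcome scores best."""
          return max(actions, key=lambda a: score(world_model(state, a)))

      # Hypothetical beliefs: action a shifts the state by a; states near 10 are good.
      toy_model = lambda s, a: s + a
      toy_score = lambda s: -abs(s - 10.0)

      print(plan(state=3.0, actions=[-1.0, 0.5, 2.0], world_model=toy_model, score=toy_score))  # -> 2.0
      ```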

      • semioticbreakdown [she/her]@hexbear.net
        12 points · 3 days ago

        This is what is argued by the 4E cognition and the active inference crowd in academia and I agree. There’ve been pretty compelling results on smaller problems and recent applications to RL. Although under those frameworks, especially the latter, things like wants and needs and curiosity arise naturally from the agent trying to maintain itself in the world and understand causes of phenomena.

        • Awoo [she/her]@hexbear.net
          11 points · 3 days ago

          The basic need/want all biological forms have is energy. The need to acquire energy and the need to use energy more efficiently in order to survive longer between energy acquisitions.

          Everything else evolves from that starting point. It’s just a matter of adding predators and complexity to the environment, as well as adding more “senses”, sight, hearing, taste etc.

          One thing I think we’re not yet realising is that intelligence probably requires other intelligence in order to evolve. Your prey aren’t going to improve naturally if your predators aren’t also improving and evolving intelligently. Intelligent animal life came from the steady progression of ALL other animal life in competition or cooperation with one another. The creation and advancement of intelligence is an entire ecosystem, and I don’t think we will create artificial intelligence without also creating an entire ecosystem it can evolve within, alongside many other artificial intelligences in the environment.

          Humans didn’t magically pop into existence smart. They were created by surviving against other things that were also surviving. The system holistically created the intelligence.

          • MemesAreTheory [he/him, any]@hexbear.net
            5 points · 3 days ago

            I’ve had this thought for 5 or so years. With any luck, maybe I’ll put it into something publishable before I’m rounded up by the anti-communist death squads that come for academics. I think intelligence is fundamentally a social/collective phenomenon, at least in a broad sense where we consider a predator/prey relationship some kind of sociality too. Humans just got really really really surprisingly good at it relative to other species because of our ability to communicate using language, and thus transfer knowledge gains very widely and very quickly compared to a non-verbal animal limited to its immediate social sphere or, slower yet, passing down changes via evolution and genetic advantage. Still, though, this is reliant on a social world and a kind of thinking that can be fundamentally collaborative. Our language and mental models of the world aren’t just geared to get us what we as individual agents desire; they were developed by a process that required them to be understandable to other similarly intelligent creatures.

              • MemesAreTheory [he/him, any]@hexbear.net
                2 points · 3 days ago

                Thanks for sharing! My approach to this is from a philosophy background, less coding, but I’ll see if I can puzzle my way through some of the technical language I don’t use regularly. I just took a “Phil of mind and artificial intelligence” course waaaayyyy back in 2020 that first turned me onto these topics, and that may as well be ancient history as far as “AI” is concerned. I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence. I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”

                However, since then I’ve specialized in other areas of legal and political philosophy and have less regular interaction with the current discussion other than to hate on it and be a gadfly whenever less skeptical members of my community begin to become seduced by the allure of yet another tech con.

                • semioticbreakdown [she/her]@hexbear.net
                  1 point · 3 days ago

                  Haha, hope you get something out of it. I thought there were a lot of connections with postmodern philosophy actually, since both this and that are informed by applications of semiotics.

                  I developed a strong conviction even then that LLMs were structurally incapable of “General Intelligence,” whatever that is, and began trying to think about what was missing from the process or approach that yielded actual intelligence.

                  Oh definitely. I think a large part of it is the wholly mechanistic approach that kind of has been pervasive for a while. Brains as just computers, and all that.

                  I more or less came to the conclusion that the tech field was largely oversimplifying the problem (when has that ever happened before) and substituting a solvable problem, “How do we train better Machine Learning programs?” for a much harder problem, “How do we construct an intelligence?”

                  I mean, look at the first AI winter. Symbolic AI was all the rage and was considered completely sufficient for a general intelligence, and then they were never able to make one and all the funding dried up.

          • semioticbreakdown [she/her]@hexbear.net
            4 points · 3 days ago

            I agree. The development of intelligence resulted from the tension between the organism and its environment, the organism and other organisms. The development of an individual organism’s intelligence does, too. It really is all dialectics in the end.

            • Awoo [she/her]@hexbear.net
              3 points · 3 days ago

              Yes. Dialectics layered on dialectics layered on dialectics.

              Increasingly more complex ecosystems of dialectics over time.

              The problem I foresee with the future of machine learning is that it’s easy to create one dialectic. It’s easy to create 2 or even 100… But at a certain point you reach “ok, so what dialectic do we add now?”.

              We do not have an accurate model of all the dialectics that exist in our biological world. How are we supposed to recreate the conditions that create the same kind of intelligence we have if we can’t create the same dialectics that formed it? If we miss some parts out it will not create the same kind of intelligence, it may not have the same morals or the same fundamental beliefs as we do because the environment that shapes it won’t be the same environment.

              The creation of human-like artificial intelligence will require the recreation of conditions that gave rise to human evolution.

              This brings me back to that techbro theory that we might be living in a simulation. If we aim to create artificial intelligence like us then we would end up creating a simulation that looks like our environment.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          4 points · 3 days ago

          Yeah, it makes sense that selection pressures from the environment would lead to the evolution of needs on the part of the agent. Volition and directed action give a survival benefit.

      • This is one of those things that starts getting into the fuzzy area around the unanswered questions regarding what exactly qualifies as qualia and where that first appears. But having needs/wants probably is a necessary condition for actual AI if we’re defining actual (general) AI as having self awareness. In addition to what @Awoo@hexbear.net said, here’s another thing.

        You mention how AI probably has to have a world model as a prerequisite for genuine self aware intelligence, and this is true. But part of that is that the world model has to be accurate at least in so far as it allows the AI to function. Like, maybe it can even have an inaccurate fantasy-world world model, but it still has to model a world close enough to reality that it’s modeling a world that it can exist in; in other words the world model can’t be random gibberish because intelligence would be meaningless in such a world, and it wouldn’t even be a “world model.” All of that is mostly beside the point except to point out that AI has to have a world model that approaches accuracy with the real world. So in that sense it already “wants” to have an accurate world model. But it’s a bit of a chicken and egg problem: does the AI only “want” to have an accurate model of the world after it gains self-awareness, the only point where true “wants” can exist? Or was that “want” built-in to it by its creators? That directionality towards accuracy for its world model is built into it. It has to be in order to get it to work. The accuracy-approaching world model would have to be part of the programming put into it long before it ever gains sentience (aka the ability to experience, self-awareness) and that directionality won’t just disappear when the AI does gain sentience. That pre-awareness directionality that by necessity still exists can then be said to be a “want” in the post-awareness general AI.

        An analogy of this same sort of thing but as it is with us bio-intelligence beings: We “want” to avoid death, to survive (setting aside edge cases that actually prove the rule like how extreme of an emotional state a person has to be in to be suicidal). That “want” is a result of evolution that has ingrained into us a desire (a “want”) to survive. But evolution itself doesn’t “want” anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) “want” to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment. When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive? It happened somewhere along the ladder of evolutionary complexity where brains had evolved to the extent that self awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains. This is just one example, but a pretty good one imo that shows how the ability to experience “wanting” something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience. And also how that experience wouldn’t have come about if it weren’t for that initial directionality.

        Wants/needs almost certainly do have to be part of any actual intelligence. One of the reasons for that is because those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.


        It gets really hard to articulate this kind of thing, so I apologize for all the “quoted” words and shit in parentheses. I was trying to make it so that what I was attempting to convey with these weird sentences could be parsed better, but maybe I just made it worse.

        • semioticbreakdown [she/her]@hexbear.net
          3 points · 3 days ago

          But evolution itself doesn’t “want” anything. It just has directionality towards making better replicators

          But evolution is actively participated in and directed by the unbroken process of life. The need to avoid death is prior to the existence of evolution. It can’t be just the result of an imposition on sentient life, because it’s a necessary condition of the autopoietic processes that define life itself, of which evolution is an extension. Evolution isn’t even defined by making better replicators really. A replicator that is too effective at replicating can dissolve its environment and destroy the conditions that made its existence possible. Population steady-state flows happen in nature quite often. When the dissipative structures that formed proto-life cordoned off from the world through cell boundaries, it really did become a need to avoid death to continue. It really is a kind of want, not just its appearance (but not mentally because there is no mind yet) - to maintain tension between the world and itself and propagate itself.

          It happened somewhere along the ladder of evolutionary complexity where brains had evolved to the extent that self awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains

          I don’t think it’s as much from the neurons themselves as it is the whole inference/action dialectic and the world/organism dialectic. All mental phenomena are secondary to and originally in service of acting on the world to maintain the boundary between the organism and the world, and are necessarily indistinguishable from making judgements of and causal/predictive inference on the world itself. Self-awareness resulted from real material pressures, actually existing relations between organisms, and the need to distinguish the self and the other for appropriate action. I’d also argue that the genuine desire to survive as a psychic phenomenon has always existed at least from the first time a neural organism perceived the world, identical to qualia. It’s not necessary to have self-awareness for that. Want as a mental phenomenon exists prior to self-awareness - the latter results from the attribution of causes to the body. The model of the world experienced and embodied by an artificial sentience doesn’t need to distinguish itself in the immediate until doing so is necessary for its continued existence and further certainty on the attribution of causes.

          • purpleworm [none/use name]@hexbear.net
            4 points · 3 days ago

            Evolution isn’t even defined by making better replicators really. A replicator that is too effective at replicating can dissolve its environment and destroy the conditions that made its existence possible.

            That’s called a bad replicator for the purpose of this discussion, because destroying the conditions that are required for its own replication to continue is not conducive to replication and therefore a replicator that does that is bad.

            • semioticbreakdown [she/her]@hexbear.net
              1 point · 3 days ago

              Yes, but I am saying that that is a fundamental contradiction which means they’re not just replicators, because their fitness as a replicator

              … that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment

              may be at odds with the continuation of the species as a process because of the interaction between replication and the environment.

              While you can then make the argument that because this isn’t conducive to replication in the long term they’re less fit replicators, I think that for a given environment, if the one less effective at replication is the one that continues, then they’re categorically speaking really something else, and only focusing on replication is missing a large portion of the picture. I’m also saying that replication isn’t essential to the self-maintaining process on the individual level, it’s just the means of the continuation of that process beyond an individual instance. I understand what you mean, but I don’t think they should be treated as replicators because I don’t think replication is the fundamental driving force of these processes.

              • purpleworm [none/use name]@hexbear.net
                3 points · 3 days ago

                If good replicator is just being defined as personally producing a whole bunch of offspring, then I think it’s just not a helpful term. A good replicator should be something that replicates effectively, not just a lot, and what you are describing as “less effective at replication” is clearly more effective at replication relatively speaking if its offspring are still around and its competitors are not. You would hardly say something is a good replicator if it produced an unfathomable amount of offspring and then just ate them all, right?

                I’m also saying that replication isn’t essential to the self-maintaining process on the individual level

                How is this relevant? No one was contradicting this idea, even implicitly, it’s just not a meaningful factor in the discussion for the reason you go on to note.

                • semioticbreakdown [she/her]@hexbear.net
                  1 point · 3 days ago

                  If good replicator is just being defined as personally producing a whole bunch of offspring, then I think it’s just not a helpful term.

                  They phrased it by less and more offspring and I quoted it as such. If you want to argue that’s not what they said and take beef with my post as a result, that’s fine. If you want to define a good replicator as one that can continue replicating over time, that’s OK too (and I would agree with that), but again I think it’s attributional and not essential.

                  How is this relevant? No one was contradicting this idea, even implicitly, it’s just not a meaningful factor in the discussion for the reason you go on to note.

                  But evolution itself doesn’t “want” anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) “want” to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment. When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive?

                  This does seem to imply replication as the fundamental function of an autopoietic process, at least to me, and that’s what I was referencing. All I was trying to get at is that the appearance of “wanting” to survive, as the original poster put it, isn’t related to replication, and the attribution of the desire to live as something imposed by and the result of evolution is inaccurate because it’s a direct extension of autopoiesis essential to the organism which exists prior to evolutionary (and replicatory) processes. I think this has direct implications for the development of real intelligence in an AI system. I’m not going to reply after this because I don’t think I’m explaining my perspective well and I don’t want to argue anymore. It’s just a quibble on ontology, anyway, because I mostly agree with their post and I thought it was well written and thought out.

          • But evolution is actively participated in and directed by the unbroken process of life.

            Yes. And?

            The need to avoid death is prior to the existence of evolution. It can’t be just the result of an imposition on sentient life, because it’s a necessary condition of the autopoietic processes that define life itself, of which evolution is an extension.

            I’m not seeing how this contradicts anything I said. In fact it supports what I said by recognizing the necessity for a directionality that precedes (and is a prerequisite for) any kind of sentient desire or “wants.”

            A replicator that is too effective at replicating can dissolve its environment and destroy the conditions that made its existence possible.

            @purpleworm@hexbear.net addressed this really well and gave a thoughtful, completely correct response. Not much more for me to say on it.

            When the dissipative structures that formed proto-life cordoned off from the world through cell boundaries, it really did become a need to avoid death to continue. it really is a kind of want, not just its appearance (but not mentally because there is no mind yet) - to maintain tension between the world and itself and propagate itself.

            I think you’re splitting hairs here between ever so slightly different aspects of what I have been calling directionality. Desires or “wants” by definition require a mind capable of having a want or desire. Where you say “it really is a kind of want but not mentally because there is no mind yet”, that’s simply not the kind of “want” we are talking about here, the thing that a self-aware (mind-possessing) AI would have if it were genuinely self aware and possessing of a mind. Everything else really is just an appearance of want and is a result of what I’ve been calling directionality. What you’re talking about as the mindless “need to avoid death to continue” is still just the mindless non-intelligent and non-sentient directionality of evolution. And to specifically address this piece:

            to maintain tension between the world and itself and propagate itself.

            But it is part of the world (dialectics ftw!). There is a tension between inside and outside the individual cell (and also a tension between the “self” and “outside the self” of a sentient mind which is addressed further down, but this is not the same thing as the tension between the cell and the world, as proven by the fact we aren’t aware of all our cells and frequently kill them by doing such things as scratching) but the cell still isn’t the most basic unit of replication in evolution, that would be the gene. Strands of RNA or DNA. Genes (often but not always) use cells as part of the vehicle for their replication, and either way they are still just chemicals reacting with the environment they exist within. There’s no more intentionality behind what they do than there is behind, say, a magnet clinging to a fridge. That magnet does not “want” to cling to your fridge; like genes, it is reacting to its environment, and this will be true regardless of where you draw the boundary between the “self” of the magnet and “the outside world.” To actually desire something the way we are talking about here requires the complexity of a brain capable of producing a mind.

            I don’t think it’s as much from the neurons themselves as it is the whole inference/action dialectic and the world/organism dialectic. […] Self-awareness resulted from real material pressures, actually existing relations between organisms, and the need to distinguish the self and the other for appropriate action

            Agreed. The emergent property of the mind and sentience comes out of the complexity of the interaction of the firing of neurons in a brain and the world they exist within, at least in all likelihood. We still don’t know exactly what produces our ability to experience, where exactly qualia originate (i.e. why we aren’t just philosophical zombies), but I think most neuroscientists (and philosophers who work on this stuff) would agree, as I do too, that without an outside non-self world for those neurons to interact with, there would be no actual mind. Even that the mind is a drawing of the distinction between self and non-self. But since that complex neural structure could never even begin to come about without that outside world and all the mechanisms of evolution (aside from a Boltzmann brain!), always having to include the phrase “and with the outside world” when describing the neurological origin of qualia and experience is some severe philosophical hair-splitting.

            I’d also argue that the genuine desire to survive as a psychic phenomenon has always existed at least from the first time a neural organism perceived the world, identical to qualia.

            Um, yeah… that’s pretty much what my argument was for the necessity of any genuine AI to have wants and desires: those “wants” would necessarily have had to be built in for it to even become AI.

            It’s not necessary to have self-awareness for that. Want as a mental phenomena exists prior to self-awareness

            Disagree. Again, if you want to split hairs on exactly where it is in that ladder of complexity that self-awareness arises, or where in the fuzzy chain we can draw a line between organisms capable of self-awareness vs those not, or even exactly what constitutes self-awareness, then feel free. But for a thing to have an actual desire as something genuinely experienced, there has to be some sense of selfhood for that experience to happen to.

            • semioticbreakdown [she/her]@hexbear.net
              1 point · 3 days ago

              But since that complex neural structure could never even begin to come about without that outside world and all the mechanisms of evolution (aside from a Boltzmann brain!), always having to include the phrase “and with the outside world” when describing the neurological origin of qualia and experience is some severe philosophical hair-splitting.

              You’d think, but damn, the techbros really have forgotten about this one tbh. I think it’s still very relevant to the topic of AI honestly because the people who make them keep ignoring that fact. I’ve seen emergent complexity get thrown around as justification for LLM sentience in some circles. And I don’t understand why, when nearly everything in neuroscience and philosophy, as you said, contradicts that. Very frustrating, frankly. Even the term “world model” gets thrown around with LLMs too and it’s soooo aggravating.

              I agree with your thoughts on directionality; I was just quibbling on evolution and, yeah, splitting hairs really. Like, I have other thoughts on world models and sentience and selfhood but they’re probably pretty fringe so I’m not going to share them here.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          3 points · 3 days ago

          I’d argue that the idea of self awareness or needs/wants is tangential to the notion of qualia. A system can be self aware, and develop needs without necessarily being conscious and having an internal experience. What needs and wants really boil down to is that the system is trying to maintain a particular state. To maintain homeostasis, the system needs to react to external inputs and take actions that keep it in the desired state. For example, a thermostat could be said to have a “need” to maintain a particular temperature, but it could hardly be argued that it has some sort of qualia.
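
          To make the thermostat example concrete, a minimal homeostatic loop (pure illustration): the system acts only to pull its state back toward a set point, and nothing in it requires awareness.

          ```python
          # A thermostat "needs" 21 °C but experiences nothing.
          def thermostat_step(temp_c: float, setpoint: float = 21.0, deadband: float = 0.5) -> str:
              if temp_c < setpoint - deadband:
                  return "heat"   # act to restore the desired state
              if temp_c > setpoint + deadband:
                  return "cool"
              return "idle"       # state maintained; nothing left to "want"

          for t in (18.0, 21.2, 24.0):
              print(t, "->", thermostat_step(t))
          ```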

          Why sentience exists is a really interesting question in itself in my opinion, as it’s not an obviously necessary quality within a self-aware system. I suspect it may be related to having a theory of mind. When a system starts to model itself then perhaps you end up with some sort of a resonance where it thinks about its own thoughts, and that’s what creates internal experience.

          We also have to define what we mean by intelligence here. My definition would be a system that has a model of a particular domain, and is able to make judgments regarding outcomes of different actions. I don’t think mere intelligence requires self awareness or consciousness.

          • Philosoraptor [he/him, comrade/them]@hexbear.net
            3 points · 2 days ago

            I’d argue that the idea of self awareness or needs/wants is tangential to the notion of qualia.

            This is right. Having things like beliefs and desires is called “intentionality,” and is orthogonal to both sentience/sapience and first-person subjectivity (qualia). You can have beliefs and desires without any accompanying qualitative experience and vice versa.

      • Awoo [she/her]@hexbear.net
        4 points · 3 days ago

        Machine learning requires needs or wants in order to evolve. If your model is going to learn how to utilise energy efficiently between recharging then it needs to desire energy (need/want). This is just the “eat” and “collect water” process of learning. Then you give it predators so it learns how to avoid being killed in the process of doing this, so it learns survival methods. Add complexity to the environment over time and it’ll learn more and more and more.

        Reproduction probably needs some sort of social cues to learn, the ability to communicate with other models that they wish to reproduce, or the ability to start working in teams.

        It all has the requirement of needs/wants. The basis of all animal intelligence evolving into more efficient methods of doing something is having needs.

        • purpleworm [none/use name]@hexbear.net
          3 points · 3 days ago

          I think “adding predators” is kind of cargo cult, and you can just give it obstacles that are actually relevant to problems it could conceivably need to solve, because we aren’t making AI with the main hope that they can successfully survive a wolf attack, and it doesn’t need to just completely recapitulate what humans did to evolve when it is fundamentally not an early human and we don’t want to use it for those conditions.

          • Awoo [she/her]@hexbear.net
            1 point · 3 days ago

            It’s going to have a completely different development path without any kind of predators. Almost everything in our world is either prey, predator, or both.

            I struggle to believe that it will be adapted well for a human reality where predation exists (where humans are sometimes predator to other humans too, socially) without having a development path that adapts it for that reality.

            it doesn’t need to just completely recapitulate what humans did to evolve when it is fundamentally not an early human and we don’t want to use it for those conditions.

            We don’t know how to skip straight to human-level intelligence. We need to create wild-animal-level artificial intelligence before we can understand how to improve it into human-level intelligence. If we can’t make artificial monkeys or macaws in terms of emotional/social intelligence and problem-solving, then we certainly can’t make anything akin to humans, or whatever the next-level intelligence is that we’re hoping to get when an AI can make better versions of itself, iteratively progressing technology to levels we haven’t imagined yet.

            • purpleworm [none/use name]@hexbear.net
              2 points · 3 days ago

              My point is that it’s a different kind of thing. Role-playing that it’s an animal in the wild instead of a cogitating tool is counterproductive. We aren’t going to be sending it out into the jungle to survive there, we want it to deal with human situations, helping people, with its own “survival” being instrumental to that end. Even if it encounters a hostile human, it probably won’t have anything that it can do about it because we aren’t immediately talking about building androids, this AI will effectively be just a big bundle of computers in a warehouse. If you want to give an experimental AI control over the security systems of the building housing it, go off I guess, but the protocols to contain the person breaking in by . . . locking the doors with nothing else it can really do while calling the cops and the owner of the facility are not really “avoiding predators” on a level other than strained metaphor, and require no engagement in this “surviving in the wild” role-play, it’s just identifying a threat and then doing something that, the threat being identified, any modern computer can do. If you want to say it needs to learn to recognize “threats” (be they some abstraction in a game simulation, a fire, a robber, or a plane falling out of the sky) sure, that’s fair, that falls within obstacles it might actually encounter.

              Nothing I’m saying bears on the level of intelligence it exhibits or that it is capable of. I’m not saying it needs to handle these things as well as a human would, just that it needs to be given applicable situations.

              • Awoo [she/her]@hexbear.net
                1 point · 3 days ago

                I feel like you’ve misunderstood me. You’re talking consistently about one single AI.

                Machine learning is not one AI. It is thousands of generations of AI that have iteratively improved over time by their successes and failures. The best performing of the generation go on to form the basis of the next generation, or you have survival mechanics that automatically form new generations.

                This isn’t training a model by giving something input data. Entire neural networks we do not understand are formed through an attempt at creating an artificial natural selection.

                If your process isn’t going to be similar to humans, you aren’t going to produce something similar to humans. I honestly think that’s dangerous in and of itself, you’re creating something that might have a brain network that is fundamentally at odds with coexistence with humanity.
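
                A bare-bones sketch of that generational loop (the fitness function is a made-up toy, nothing like a real neuroevolution setup): score a population, let the worst performers “die”, and mutate the survivors to seed the next generation.

                ```python
                # Toy survival-of-the-fittest loop (illustration only; fitness is invented).
                import random

                random.seed(0)

                def fitness(genome: list) -> float:
                    # Stand-in for "how well did this agent survive in its environment?"
                    return -sum((g - 0.7) ** 2 for g in genome)

                population = [[random.random() for _ in range(4)] for _ in range(20)]

                for generation in range(50):
                    population.sort(key=fitness, reverse=True)
                    survivors = population[:5]  # the rest "die"
                    population = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                                  for _ in range(20)]

                print(round(max(fitness(g) for g in population), 4))  # fitness climbs toward 0 over generations
                ```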

                • purpleworm [none/use name]@hexbear.net
                  1 point · 2 days ago

                  This isn’t training a model by giving something input data. Entire neural networks we do not understand are formed through an attempt at creating an artificial natural selection.

                  If your process isn’t going to be similar to humans, you aren’t going to produce something similar to humans. I honestly think that’s dangerous in and of itself, you’re creating something that might have a brain network that is fundamentally at odds with coexistence with humanity.

                  But you’re able to designate what its goals are completely arbitrarily. It doesn’t need to think like a human – there are humans who have been at odds with coexistence with humanity – it needs to be constructed based on the value of human benefit, and you can seriously just tell it that. That isn’t changed by it cogitating in a structurally different way, which it also almost certainly would be doing anyway because the way we do is highly adapted to early humanity, but is structurally deeply based on random incidents of mutation before then. Something could think very differently and nonetheless be just as capable of flourishing in those circumstances. This difference is compounded by the fact that you probably aren’t going to actually produce an accurate simulation of an early human environment because you can’t just make a functional simulation of macroscopic reality like that. Even imagining your method made sense, it would still ultimately need to fall into aspects of what I’m saying about arbitrary stipulation because the model environment would be based on human heuristics.

                  But way more important than that is the part where you, again, can just tell it that human benefit based on human instructions is the primary goal and it will pursue that, handling things like energy acquisition and efficiency secondarily.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          1 point · 3 days ago

          It requires needs or wants to be self directed. The needs can also be externalized the way we do with LLMs today. The user prompt can generate a goal for the system, and then it will work to accomplish it. That said, I entirely agree systems that are self directed are more interesting. If a system has needs that result in it wanting to maintain homeostasis, such as maintaining an optimal energy level, then it can act and learn autonomously.

          • Awoo [she/her]@hexbear.net
            2 points · 3 days ago

            The user prompt can generate a goal for the system, and then it will work to accomplish it.

            Ok but how is it getting intelligent before the user prompt?

            The AI isn’t useful until it is grown and evolved. I’m talking about the earlier stages.

    • It’s always worth pointing out that the cycle of any tech company starts with a low-cost service that gains market share while using VC money and reinvesting most or all profit back into the company, then transitions to operating off user money and squeezing users for it. OpenAI is still in the growth phase and will only switch to the squeeze when it has a stable enough lead over Google and Anthropic that lost market share isn’t as much of an issue.

  • jackmaoist [none/use name]@hexbear.net
    13 points · 3 days ago

    ChatGPT is dogshit anyways and only surviving due to being the first and being free. So they have to burn money to stay relevant and hopefully not lose users to better models. GPT-5 is essentially a cost-saving model and is the start of the enshittification of the industry.

    I use Claude for dev-related stuff, and it only allows limited queries so they can keep their model accurate while keeping costs down.

    Gemini already produces way better results than ChatGPT ever did and is really good at research.

    Perplexity can be a decent search engine.

    Even i-am-adolf-hitler AI is better than ChatGPT at most things, although I’d rather not use it.

    • invo_rt [he/him]@hexbear.net
      1 point · 3 days ago

      What model do you recommend for brainstorming creative pursuits? As an adult with a “Serious Job™”, it’s hard for me to find anyone that has an interest in being creative. I find LLMs to give me enough feedback to help me flesh my ideas out.

      • piccolo [any]@hexbear.net
        3 points · 3 days ago

        For writing in English, GLM 4.5 is pretty good, open weights, and free or very cheap (free if you want just a chat interface, go to z.ai; cheap if you want API access, I’d recommend OpenRouter). It’s imo the best non-closed source LLM for writing in English. DeepSeek can be good for that too, but I’ve found that it can sometimes produce sentences that flow a lot worse (you can use DeepSeek for free via their website as a chat interface). For other creative pursuits, I’m not sure - if you give me an idea of what you’d want out of one, I can try to give you advice.

  • LangleyDominos [none/use name]@hexbear.net
    8 points · 3 days ago

    Thought for 34s

    Love this question. If you take the contradiction seriously (mutual dependence + opposed aims), the “resolution” can’t just be nicer HR. It has to change the ownership, governance, and survival constraints that generate the conflict in the first place. Here are a few plausible syntheses—each keeps market coordination and capital pooling (the employer’s strengths) while dissolving the master–servant relation (the employee’s weakness).

    1. Bicameral enterprise (labor–capital parity by constitution) What it is: Every firm is chartered with two sovereign chambers: a Labor Assembly (all workers) and a Capital Assembly (outside investors, lenders). Major decisions (strategy, mergers, tech adoption that displaces jobs, exec pay) require concurrent majority. Management serves at the joint pleasure of both.

    Payoffs split: Wages are set via internal bargaining plus a mandatory residual share to the Labor Assembly (an automatic labor dividend). Investors get capped priority returns + a residual share that falls as automation rents rise.

    Why it’s a synthesis: You keep large-scale coordination and risk-bearing; you erase unilateral control over work. Neither side is “the firm” alone.

    2. Capital-as-Commons + Guild Firms (socialized returns, competitive production) What it is: Most productive capital is held by permanent public/sectoral funds that pay everyone a universal capital dividend. Production is done by worker-run guild firms that lease capital from the funds and compete on price/quality.

    Mechanics: Funds allocate capital by return-and-mission mandates; guild firms can’t privately own the capital stock but control use. People can move between guilds; income = wage + universal dividend.

    Why it’s a synthesis: Survival no longer hinges on pleasing an employer; price signals remain; capital still gets stewarded for efficiency but no longer stands as a private boss over labor.

    3. Project-joint ventures (replace employment with term-limited parity contracts) What it is: Workers federate into labor companies (think: legal entities that pool skills, training, benefits). Investors form capital clubs. Production happens via JV contracts between a labor company and a capital club for a specific project/product, with preset governance parity and profit-sharing.

    Lifecycle: When the project ends, the JV dissolves; both sides retain portable reputations, training credits, and equity-like claims that vest over time.

    Why it’s a synthesis: No permanent employer; no atomized freelancers. Two peer collectives meet as equals around a concrete production plan.

    4. Inclusive Ownership by Law (automatic worker trusts + sectoral bargaining) What it is: Statute requires medium/large firms to issue a small % of equity annually into an irreducible worker trust (non-saleable, voting shares), up to a cap. Pair with sectoral bargaining to set floors for wages/benefits and with board parity at scale.

    Why it’s a synthesis: Still a single enterprise competing in markets, still raises outside capital—but control and residuals are durably shared, not granted at management’s pleasure.

    5. Public Option for Work + Open-firm Contracts (end artificial scarcity) What it is: A standing Job Guarantee offers socially useful work at a living wage with training. Private firms must beat that offer to hire. Contracts above a size threshold must publish open terms on displacement, training, and gain-sharing when productivity rises.

    Why it’s a synthesis: The boss’s power to impose scarcity is gone; firms still exist, but the “outside option” belongs to labor, forcing a cooperative equilibrium.

    6. Data/Model Dividend + Labor Royalty (in the age of automation) What it is: Treat workforce activity and data (which trains software and models) as royalty-bearing inputs. Firms owe ongoing labor royalties when prior labor becomes embodied in code that substitutes for future labor.

    Why it’s a synthesis: Preserves automation and scale while tying ownership of productivity gains to the people whose past work made them possible.

    What this buys you (and the trade-offs) Ends the unilateral right to rule (the crux of the contradiction) while keeping investment, scale, and price discovery.

    Stability: #2 and #4 create durable, compounding worker claims; #3 is flexible but needs robust contract law and unemployment insurance.

    Efficiency risks: Parity can deadlock. You mitigate with scope-limited vetoes, default mediation, and mission-tied capital mandates.

    Transition paths: Start with #4 (law), pilot #1 charters in regulated sectors, seed #2 with existing public funds, and let #3 grow in project-heavy industries (media, construction, software).

    Andrew Yang in my pocket.

      • LangleyDominos [none/use name]@hexbear.net
        5 points · 3 days ago

        I also tried Claude 4.1. It still complimented me on my big brain, but its response seemed more like thinking. It asked me some follow-up questions to give a better response and, surprisingly, it asked about history. So at least Claude’s training has that kind of association of history with political economy and materialism. GPT-5 didn’t seem to get that part.

        The “Productive Network” Synthesis

        Instead of firms with employees, imagine autonomous productive nodes connected through dynamic contribution networks. Here’s how it might work:

        Core mechanism: Replace employment with temporary productive associations. Individuals or small groups maintain ownership of their tools/skills/knowledge and form task-specific collaborations. Think of it like open-source software development, but for all production.
        No permanent “firms” - just fluid networks of producers
        No wages - direct claims on collective output based on contribution metrics
        No employers - coordination through algorithmic matching and democratic protocols
        Surplus doesn’t get “extracted” because there’s no separate owner class to extract it

        Historical lesson: The Paris Commune’s workshop federations and Yugoslav self-management showed coordination without traditional hierarchy is possible, but lacked the technology for dynamic, real-time coordination we now have.

        The “Social Inheritance” Model

        This one’s more radical: What if productive capacity became a birthright commons?

        Core mechanism: All major productive assets are held in regional/municipal trusts. Individuals receive periodic allotments of productive capacity (like universal basic assets, not just income).
        People combine their allotments to undertake projects
        No employment needed - you directly command a share of society’s productive power
        Coordination happens through proposal systems and resource voting
        “Work” becomes voluntary contribution to expand the common pool

        Historical lesson: The kibbutzim and Mondragon show collective ownership can work, but they remained islands. This would need to be systemic from the start.

        The “Contribution Economy”

        Core mechanism: Replace employment with a contribution-tracking system where people directly register their productive activities and automatically receive proportional claims on social output.
        Blockchain or similar tech tracks contributions across society
        No firms needed - just projects that people join/leave freely
        Algorithms (democratically governed) determine relative contribution values
        Resources flow directly to contributors without passing through “employers”
        This isn’t just “gig economy” - there’s no platform owner extracting value. It’s peer-to-peer production with social coordination.

        Historical lesson: Soviet planning failed partly due to information problems. Modern computation and networking could enable decentralized planning that the USSR couldn’t achieve.

        The “Hybrid Transition” Model (if complete transformation proves impossible)

        If we can’t fully eliminate surplus extraction immediately:
        Mandatory worker sovereignty: All firms above a certain size automatically convert to worker-controlled entities, but with a twist:
        Capital investment is allowed but capped at fixed returns (like Islamic banking)
        Workers control all decisions about surplus distribution
        “Employment” transforms into automatic stakeholdership
        Capital becomes a utility input, not a controlling force

        This differs from co-ops because it’s not voluntary - it’s the only legal form for scaled production.

        Critical Challenges for Any Synthesis

        Coordination at scale: How do complex supply chains work without firms?
        Innovation incentives: What drives technical progress without profit?
        Transition mechanics: How do we get there without catastrophic disruption?
        Global compatibility: How does this interface with different economic systems?

        Which of these directions resonates most with your thinking? Or do you see elements that could be combined differently?

      • LangleyDominos [none/use name]@hexbear.net
        6 points · 3 days ago

        I have a tough social/historical problem for you. The contradiction between employer and employee. Employers are afforded existence by employees because a company cannot extract surplus labor value if employees (labor) do not exist. Yet, companies actively impose scarcity on labor, causing wages to fall. The employer must reduce labor costs in order to maximize the amount of surplus value extracted. Employees need employers in order to have a job, and to pay for the things they need. Employees require higher wages and more jobs available in order to outpace the rise in the cost of living over time.

        The employer and employee are mutually dependent, neither can exist in their own right without the other. Yet they also seek mutually exclusive ends. This is like the Being/Nothing contradiction noted by Hegel. Employee/Employer is the thesis/antithesis. The real question comes as to what the synthesis would be. What resolves this contradiction, replaces the Employee/Employer, yet retains qualities of both? Some may say co-ops, but I don’t find that convincing. Co-ops simply play a shell game with the employer/employee relationship rather than replace it. If one considers co-ops to be a solution, then even capitalists can be considered employees of their own company in the case of an owner/CEO. That definitely doesn’t resolve the contradiction.

        So, ChatGPT 5, I am asking you to spitball some resolution to this contradiction. Find a synthesis, if you can.