Greetings Hexbears!

I just pounded out this long-ass comment in a thread asking why leftists tend to oppose GenAI and LLMs, only to realize the post was already over a week old and probably no one will read it. I thought it was insightful enough to reshare and, given your reputation, figured people here would find it interesting.

I recently finished reading Capital, and many of these thoughts jumped out at me during my reading. Interested in hearing what you think or if you have any critiques or addendums.

Anyways, here’s the text:

I haven’t seen any comments here that adequately address the core of the issue from a leftist perspective, so I will indulge.

LLMs are fundamentally a tool to empower capital and stack the deck against workers. This is a structural problem, and as such, is one that community efforts like FOSS are ill-equipped to solve.

Given that training LLMs from scratch requires massive computational power, you must control some means of production, i.e., you must own a server farm to scrape publicly accessible data or to collect data from hosting user services. Then you must also own a server farm equipped with large arrays of GPUs or TPUs to carry out the training and most types of inference.

So the proletariat cannot simply wield these tools for their own purposes; they must use the products that capital allows to be made available (i.e. proprietary services or pre-trained “open source” models).

Then comes the fact that the core market for these “AI” products is not end users; it is capitalists. Capitalists who hope their investments will massively pay off by cutting labor costs on the most expensive portion of the proletariat: engineers, creatives, and analysts.

Even if “AI” can never truly replace most of these workers, it can convince capitalists and their managerial servants to lay off workers, and it can convince workers that their position is more precarious under the threat of replacement, discouraging them from fighting for increased pay, better benefits, and better working conditions.

As is the case with all private property, profits made by “AI” and LLM models will never reach the workers who built those models, nor the users who provided the training data. Those profits will be repackaged as a product owned by capital and resold to the workers, whether through subscription fees, token pricing, or the forfeiture of private data.

Make no mistake, once the models are sufficiently advanced, the tools sufficiently embedded into workflows, and the market sufficiently saturated, the rug will be pulled and “enshittification” will begin, forcing workers to pay exorbitant prices for these tools in a market where experience and skills are highly commoditized and increasingly difficult to acquire.

The cherry on top is that “AI” is the ultimate capital. The promise of “AI” is that capitalists will be able to use the stolen surplus value from workers to eliminate the need for variable capital (i.e. workers) entirely. The end goal is to convert the whole of the proletariat into maximally unskilled labor, i.e. a commodity, so they can be maximally exploited, with the only recourse being a product they control the distribution of. AI was never going to be our savior, as it is built with the intent of being our enslaver.

  • tricerotops [they/them]@hexbear.net

    Given that training LLMs from scratch requires massive computational power, you must control some means of production, i.e., you must own a server farm to scrape publicly accessible data or to collect data from hosting user services. Then you must also own a server farm equipped with large arrays of GPUs or TPUs to carry out the training and most types of inference.

    So the proletariat cannot simply wield these tools for their own purposes; they must use the products that capital allows to be made available (i.e. proprietary services or pre-trained “open source” models).

    How is this different from any other capital-intensive activity? Is forging steel a forbidden technology because it requires a lot of fuel or electricity to generate enough heat?

    Like I get that we like to feel as if doing computer things is an independent activity that rogue open-source hackers should be able to do, but some human activity requires a massive scale of coordination, energy, and time. Have you seen what it takes to build a computer chip? If you want something that is even more out of reach than training a foundation model, look no further.

    I genuinely don’t see these as unique in the landscape of human technologies. As with all automation, the goal for the capitalist is to squeeze more profit out of workers. LLMs don’t change that, and they won’t be replacing humans. And as with any technology, these are not solely buildable by capitalists. They are a technology that requires inputs at a massive scale. But how is that more limiting than building a laser that vaporizes tens of thousands of tin droplets per second to expose a silicon wafer to 13.5-nanometer light? Or building a bridge? Or name any other big project that needs a lot of people and resources…

    So I think there’s a good argument about intent here on the part of the capitalist class, but I don’t find the argument about complexity or resource intensiveness very convincing.

    • NuclearDolphin@lemmy.mlOP

      I think the distinction is that resources like steel and CPUs are constant capital and intrinsically tied to material costs, whereas training models transforms constant capital into a potentially autonomous replacement for variable capital, rather than just a productivity multiplier on your existing variable capital. This is compounded manyfold by the fact that models are infinitely reproducible and inference is cheap enough to run on non-specialized hardware like consumer electronics. Feedback mechanisms allowing for self-improving capital feel like a novel development in the history of the relations of production.

      Is this qualitatively different enough to be worthy of a distinction? I dunno.

      • zedcell@lemmygrad.ml

        Any machinery, including AI, is constant capital. Labour is the only thing that can produce surplus value, because you can pay a worker enough to keep them alive while their labour produces much more value than it costs. Constant capital is valued based on the crystallised labour held in it, and cannot produce surplus value once all branches of production that can use it are using it.

        • zedcell@lemmygrad.ml

          I should clarify as well, that last sentence is still slightly wrong. If a firm has a first mover advantage in using new technology they can effectively earn excess surplus value off of the difference in productivity between their variable capital and other firms in their branch of production. This doesn’t mean that the technology itself is producing surplus value, just that it made the labour employed more productive relative to its competitors.

          • 0__0 [he/him]@hexbear.net

            Of course, but even that surplus value is not created out of nothing; it is reallocated toward the commodities that have a greater degree of automation. The only reason one firm can realize surplus profit is that another exists which is not realizing it.

  • combat_brandonism [they/them]@hexbear.net

    Then you must also own a server farm equipped with large arrays of GPUs or TPUs to carry out the training and most types of inference.

    Can add that you also need to be able to pay people shit wages to train those LLMs. idc about DeepSeek; afaik most training for novel models is done by humans

      • combat_brandonism [they/them]@hexbear.net

        the workers that built those models, nor the users who provided the training data

        that part? I read that as ‘brogrammers that did the math, and posters who did the posts’

        in the thread linked by the OP there’s even a liberal linking some mainstream rag covering the use of underpaid mechanical turks in the global south for training, but even charitably read, OP’s comment is under-emphasizing that

        not even getting into how AI grifters have been filling gaps in model performance and shit software with cheap human labor as well while fundraising on the idea that it’s all computers doing the thing

        • NuclearDolphin@lemmy.mlOP

          I would file that under “the workers that built those models” but you’re probably right that there is a meaningful distinction worth making here.

          Only started reading Imperialism this week, but I vaguely know about the concept of superprofits. Would you consider the relationship between dataset labellers outside the imperial core and capital to be fundamentally different from the relationship between labor aristocratic engineers and capital? Obviously living standards are vastly different, but in terms of how they relate to the means of production?

          • combat_brandonism [they/them]@hexbear.net

            honestly you’re probably better read here comrade, but I’m curious whether pensions and 401ks give labor aristocrats in the imperial core more of a material interest in empire than those on the periphery

            • NuclearDolphin@lemmy.mlOP

              Probably not. Only recently started doing formal reading beyond internet posts.

              but I’m curious whether pensions and 401ks give labor aristocrats in the imperial core more of a material interest in empire than those on the periphery

              This seems so trivially true that I’m left wondering if this question is sarcastic. Any interaction with engineers makes it super obvious that even the most “leftist” of them are invested in preserving the imperial status quo, whether they are cognizant of it or not.

              • combat_brandonism [they/them]@hexbear.net

                not sarcastic, but I think in it lies the answer to your question:

                Would you consider the relationship between dataset labellers outside the imperial core and capital to be fundamentally different from the relationship between labor aristocratic engineers and capital?

                yes, because of pensions, basically.

                but it also provides the lever for class solidarity with the global south, because it is so trivial (compared to the surplus taken by capital)

  • Cat_Daddy [any, any]@hexbear.net

    Good analysis. It is capital, and it relies on free labor to train these models. In fact, the arrangement is more closely akin to chattel slavery insofar as all the work required to feed this beast is provided for free. If they fairly compensated every original author, no such mechanism could be created under capitalism. Capitalism does not, in fact, breed innovation. Far from it. It actually stifles innovation until such time as it can be had essentially for free, and then only for the bourgeoisie.

    • combat_brandonism [they/them]@hexbear.net

      If they fairly compensated every original author

      when you’re right-wing of the dot world :lemmitors: (actually OP but still, they post on dot world)

      artists aren’t fairly compensated under capitalism, full stop, and the idea of ‘intellectual property’ exists so that capital can exploit them more completely than when they were dependent on patronage from the aristocracy

      edit:

      free labor to train them

      the training is done by (under)paid labor in the global south. the input data is free labor

      • Cat_Daddy [any, any]@hexbear.net

        You are correct, but I did say under capitalism. We are necessarily making these statements from a capitalist perspective, because that’s the world we live in. I probably should have clarified by saying that, in the view of the AI owners, these things couldn’t have been made while also compensating the authors. They are only profitable now because of the huge glut of free content they can absorb, legally or not.

          • MLRL_Commie [comrade/them, he/him]@hexbear.net

            Got into a shit-flinging thing here recently because of it. Still kinda pissed about it, and I was writing out a long essay tackling it, though I don’t think I’ll ever post it because I just feel like I’m trying to explain super basic class analysis

            Fuck AI for many of the reasons mentioned: a bubble of wasted energy, and tools solely to support already useless aristocratic fake jobs while trying to eliminate real ones by filling them with worse results. It’s regressive in that it will ruin existing (likely digital) infrastructure and hurt our ability to think and learn. But hurting IP laws is good

          • NuclearDolphin@lemmy.mlOP

            This is the exact sentiment that prompted me to write the original post. Mastodon is full of these property brained dorks too.

  • queermunist she/her@lemmy.ml

    Oops, that post is 10 days old. I’ll just repost here~

    It’s not the technology per se, it’s the political economy of LLM training - it’s essentially colonial. They treat all content on the internet like something they can just take, without compensating the people who made it. They treat our electricity and water the same way, just a cheap resource that they can take. They treat their mechanical turks and trainers the same way, just cheap disposable labor that they can take.

    As for their goals, to essentially industrialize and proletarianize all intellectual labor, I think this will be the foundation of their own destruction. They’re kicking these professionals down the class ladder into the working class by deskilling their labor, and that only raises the contradictions.

    Shrinking the so-called “middle class” segment in the metropoles undermines the whole basis for colonialism in the first place.

  • devils_dust [none/use name]@hexbear.net

    Your analysis reminds me of https://cendyne.dev/posts/2023-05-11-reverse-centaur-chickenization-chatgpt.html#chickenization, and I agree that there is an implicit promise of automating qualified labor away. But on the other hand, the problem is not necessarily in reducing the average labor time required for a given activity, but in what that means under capitalism.

    Personally, while I find LLMs valuable for a certain subset of tasks I also do not think they are able to entirely replace people in their current form. This won’t stop capitalists from trying, unfortunately.

  • altkey@lemmy.dbzer0.com

    AI is also an ideal replacement for the middle-manager fall guy you can pin everything on, but with the agency of something divine.

    Massive layoffs, market tilts, monopolization, corner-cutting, targeted incarcerations, media manipulation, war crimes: probably everything can be explained by its Will and Presence. AI is a source of unchallenged power passed to capitalists by the Algorithm. And its integration into many systems is not unlike some state-and-church honeymoon phase.

    And although I myself, like many, dismiss it as too janky for my own workflow, ahem, the jankiness is on point. The LLM works in mysterious ways, you know, with all its weights, and it’s not an outdated dude-on-a-cloud but our cool pure math! And that techno-theo-fascism spreads like cancer.

    I’m not well versed in Marxism yet, but from my knowledge of the history of the Russian Empire and USSR, the arguments against state religion line up nicely with AI hype too.

    1. The Russian church held a monopoly on affordable education for the masses; only nobles could choose prestige education without indoctrination. Compare the introduction of SOMEONE’s AI into classrooms, the eradication of useful information on the internet in general, and putting headshots on public academia by opening a box of LLMandora on them too;
    2. The Russian church was a source of power for the crown and vice versa. Sometimes it stacked the deck of its narrative to support the king and influence the masses, but there were decades when the church was the sole biggest landowner, challenging their might too. Proto-lobbyism, the god-given power to rule, and overly abundant stonk rubles taking charge!

    AI opium for the masses.

  • Good analysis. I was just thinking about this the other day, along the lines of what Marx says about automation in the Grundrisse, and was thinking that AI could be considered fixed capital under his analysis there (provided I understood it).

    What I think is different is that in this round of automation the capitalists have managed to steal not just surplus value from labor, but also the use value of everything ever put in the public domain, from literature to research, and to make it part of this fixed capital without compensation.

    Not sure where I am going with this, mostly thinking out loud.