• Wakmrow [he/him]@hexbear.net · 2 days ago

      It’s extremely expensive to use this tech and the benefits are negative. It’s only being propped up because capital thinks it can destroy labor with it. Also, these models are being used to put enormous surveillance into the hands of capital.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 2 days ago

        That hasn’t been true for a long time now. You can literally run models on your laptop. Meanwhile, AI being used for nefarious purposes by capital is a problem of being ruled over by capitalists, not of the technology itself. Do you suggest we just stop all technological progress because technology will inevitably be abused by capitalists?

        • reddit [any,they/them]@hexbear.net · 2 days ago

          Damn, I can run the models on my laptop? How were they trained? What benefits will they give me?

          I’m sorry for being snarky here, but frankly speaking LLM output in general is neutral at best: for what I’m an expert in I have little need of it, and for what I’m not I have little trust in it. And yes, while I can do the matrix math locally now, and get output that is still only vanishingly useful, it still embodies the fuel and treasure burned to generate it, and it embodies the theft of labor it is backed by. That matrix being handed to me to chew on the spare compute of my laptop - and let’s sidestep the issue of that occurring at scale, as if a thousand people generating a gram of poison is somehow different from one person generating a kilogram - was still generated via those expensive processes. It embodies them in the same way my coat embodies the labor and fuel used to make it and ship it to me, but at least the coat keeps me warm.

          We can say all we want that the issue is the economic system - we would have no need of copyright, no concern about the theft of art and creativity, no concern about the breaking of labor, if only we simply didn’t live under capitalism. And I agree! The issue is that we do. And so I’m uninterested in hand-waving those concerns away in the name of some notion of “technological progress”.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 2 days ago

            Yeah, today you can run models on your laptop that outperform ones which required a whole data centre just a year ago. The benefit of privacy should be obvious here. They were trained the same way every model is trained.
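            For what it’s worth, here is a minimal sketch of what running a model locally actually looks like, assuming the llama-cpp-python package is installed and an open-weight GGUF file has already been downloaded (the file name below is just a placeholder, not a recommendation):

            ```python
            # Minimal local-inference sketch: nothing leaves the laptop.
            # Assumes: pip install llama-cpp-python, plus a quantized GGUF
            # model file saved at the placeholder path below.
            from llama_cpp import Llama

            llm = Llama(model_path="models/example-7b-q4.gguf", n_ctx=2048)

            out = llm(
                "Explain in one paragraph why local inference preserves privacy.",
                max_tokens=128,
            )
            print(out["choices"][0]["text"])
            ```

            Everything here runs offline on local hardware, which is where the privacy benefit comes from: no prompt or output ever leaves the machine.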

            If you don’t need it, that’s good for you, but try to develop a minimum of empathy and realize that what you need and what other people need may not be the same. Also, are you seriously advocating for copyrights here? The whole argument about models embodying labour is null and void when we’re talking about open source models that are publicly owned.

            Meanwhile, Marxists have never been against technological progress under capitalism. The reality is that this technology will continue to be developed whether you throw your tantrums or not. The only question is who will control it. People who think they can just boycott it out of existence are doing the same thing that libs do when they try to fix problems by voting.

            • reddit [any,they/them]@hexbear.net · 2 days ago

              They were trained the same way every model is trained.

              Literally the thing I’m saying is bad.

              Also, are you seriously advocating for copyrights here?

              I’m advocating for my artist friends to not die on the streets.

              People who think they can just boycott it out of existence are doing the same thing that libs do when they try to fix problems by voting.

              I’m not boycotting it out of existence, I’m arguing against its use because of the harms it causes. If your calculus is that different than mine, there’s simply no point in continuing this discussion.

              • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 2 days ago

                I’m not boycotting it out of existence, I’m arguing against its use because of the harms it causes. If your calculus is that different than mine, there’s simply no point in continuing this discussion.

                So literally boycotting then. There are two paths forward: proprietary models owned by corps, and open source models that are publicly owned. It should be obvious which is the preferable option here.

        • Wakmrow [he/him]@hexbear.net · 2 days ago

          Obviously not. But these models are going to entrench the power of capital over society. This isn’t the mechanical loom, this is an automated gun pointed at anyone who isn’t a billionaire.

            • Nakoichi [they/them]@hexbear.net · 2 days ago

              Your evangelism for AI is extremely annoying. It makes my job worse, it costs people their jobs, and billions of gallons of water and fossil fuel are used to power it. We do not need this shit; it’s absolutely worthless, and it is entirely reasonable to be against it, especially under capitalism, but probably even afterward.

              • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 1 day ago

                The reality is that this tech isn’t going anywhere. The only question is who will control it going forward. What’s really annoying is that so many people on the left can’t seem to wrap their heads around this basic fact.

                The worst possible scenario is that it will be controlled by the corps, who will decide who can use it and how they can use it. This is precisely what will happen if the people who have concerns about this tech ineffectually boycott it.

                The only path forward is to develop it in the open and to make sure the development is community driven with regular people having a say.

                We’ve already gone through all this with proprietary software and open source; it’s absolutely incredible that we have to have this discussion all over again.

        • fox [comrade/them]@hexbear.net · 2 days ago

          Bad take. The Always Agrees Machine causing people to intensify psychosis or delusions is a problem tied to AI. Before, it was web forums of like-minded people, but the surge in psychosis is new.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 2 days ago

            Bad take. The reason people with psychosis turn to chatbots is that they’re completely alienated from society and have nobody to turn to.

        • BigWeed [none/use name]@hexbear.net · 2 days ago

          The creator of Eliza found that people would sneak into his office late at night in order to have secret conversations with it. They would say that the chatbot understood them, even those who understood it was a program. He would be asked to leave the room so they could have private conversations with it. He found this to be very alarming, and it was one of the reasons he wrote the book. These stories are in that book.

            • BigWeed [none/use name]@hexbear.net · 2 days ago

              Sure, but the argument is that we shouldn’t be so quick to accept technology that has negative consequences. This thread is all about job layoffs and the loss of positions for those first entering the labor market because of AI speculation and labor replacement for low-productivity tasks. This specific technology has consequences, and maybe we shouldn’t be so quick to fervently accept it with open arms.

              One big theme of the book is that we have a moral obligation to withhold labor from developing technology that uniquely benefits governments and large corporations. Similarly, you’re defending using AI to ‘stylize text’ even though it disproportionately benefits a Fortune 500 news firm and hurts new labor entrants. The technology is not neutral, so which side are you on?

              • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 2 days ago

                I mean any new technology can have negative consequences in a dystopian society. My point is that we should focus on the root causes rather than the symptoms.

                • BigWeed [none/use name]@hexbear.net · 2 days ago

                  I hoped the takeaway was to not evangelize the tech, because it hurts people, and to withhold your labor from furthering this type of work, rather than “don’t use it”. People no longer have the option of not using it; the expectation of productivity has gone up, and it’s either use it or be replaced by someone who will.

                  • If someone wants to use an AI video editor to make rough cuts of socialist agitprop that they couldn’t otherwise afford to make solo, then I would encourage them to do just that. It’s better than the alternative: burying our heads in the sand and ceding that ground to some right-winger. It’s just a tool.