• blunder [he/him]@hexbear.net
    ↑69 · 4 days ago

    The punchline is at the end:

    For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.

    Some shit that a new grad in journalism would’ve been doing

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          ↑30 · 4 days ago

          I think there are plenty of legitimate criticisms to be made, but most of the problems stem from our fucked up economic system as opposed to the tech itself. In my view, people end up focusing too much on the wrong thing here.

          • BigWeed [none/use name]@hexbear.net
            ↑28 ↓1 · edited · 4 days ago

            The creator of the first chatbot (ELIZA), Joseph Weizenbaum, wrote a book called Computer Power and Human Reason where he argued that we shouldn’t be so ready to accept technologies that have extremely negative moral and ethical consequences. It’s a good book and very relevant for something written in 1976.

              • Wakmrow [he/him]@hexbear.net
                ↑21 ↓1 · 4 days ago

                It’s extremely expensive to use this tech and the benefits are negative. It’s only being propped up because capital thinks it can destroy labor with it. Also, these models are being used to put enormous surveillance into the hands of capital.

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
                  ↑12 · 4 days ago

                  That hasn’t been true for a long time now. You can literally run models on your laptop. Meanwhile, AI being used for nefarious purposes by capital is a problem of being ruled over by capitalists not of technology. Do you suggest we just stop all technological progress because technology will be inevitably abused by capitalists?

                  • reddit [any,they/them]@hexbear.net
                    ↑11 ↓1 · 4 days ago

                    Damn, I can run the models on my laptop? How were they trained? What benefits will they give me?

                    I’m sorry for being snarky here but frankly speaking LLM output in general is neutral at best; for what I’m an expert in I have little need of it, and for what I’m not I have little trust of it. And yes, while I can do the matrix math locally now, and get output that’s only vanishingly useful, it still embodies the fuel and treasure burned to generate it, and it embodies the theft of labor it is backed by. That matrix being handed to me to chew on the spare compute of my laptop - and let’s sidestep the issue of that occurring at scale, as if a thousand people generating a gram of poison is somehow different than one generating a kilogram - was still generated via those expensive processes. It embodies them in the same way my coat embodies the labor and fuel used to make it and ship it to me, but at least it keeps me warm.

                    We can say all we want that the issue is the economic system - we would have no need of copyright, of being concerned about the theft of art and creativity, concerned about the breaking of labor, if only we simply didn’t live under capitalism. And I agree! The issue is we do. And so I’m uninterested in hand waving those concerns in the name of some notion of “technological progress”

                  • Wakmrow [he/him]@hexbear.net
                    ↑10 ↓1 · 4 days ago

                    Obviously not. But these models are going to entrench the power of capital over society. This isn’t the mechanical loom, this is an automated gun pointed at anyone who isn’t a billionaire.

                  • fox [comrade/them]@hexbear.net
                    ↑16 ↓1 · edited · 4 days ago

                    Bad take. The Always Agrees Machine causing people to intensify psychosis or delusions is a problem tied to AI. Before, it was web forums of like-minded people, but the surge in psychosis is new.

                  • BigWeed [none/use name]@hexbear.net
                    ↑14 ↓1 · 4 days ago

                    The creator of ELIZA found that people would sneak into his office late at night in order to have secret conversations with it. They would say that the chatbot understood them, even those who understood it was a program. He would be asked to leave the room so they could have private conversations with it. He found this very alarming, and it was one of the reasons he wrote the book. These stories are in that book.

          • Self_Sealing_Stem_Bolt [he/him, they/them]@hexbear.net
            ↑21 ↓1 · 4 days ago

            It’s already been shown that people who rely on LLMs quickly lose any literary and critical thinking skills they had. And those skills aren’t as easy to get back as they are to lose. Frankly, I don’t see what’s so hard about writing, but maybe that’s me?

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
              ↑7 · 4 days ago

              I’ve seen these studies as well, but some people never had good literary skills. Not everybody is a good communicator, and not everyone has time to invest in developing these skills. If a tool can help with that I don’t really see a problem here.

              • Self_Sealing_Stem_Bolt [he/him, they/them]@hexbear.net
                ↑22 ↓1 · 4 days ago

                but some people never had good literary skills

                This is a fault of our shitty societies and what they value, not some kind of biological essentialism. And people don’t have the time to invest in these skills because of our shitty society and what it values. “AI” only makes people worse at these basic skills that our ancestors fought and died for us to have. And people are willingly giving them up. I don’t think as leftists we should be applauding technology designed to make us more ignorant.

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
                  ↑7 · 4 days ago

                  I mean nobody has time to invest in being good at everything. Everybody has specific things they find more interesting than others. It’s not just about our shitty society preventing people from investing in these skills. I fundamentally disagree with the notion that technology makes us more ignorant.

                  • RedWizard [he/him, comrade/them]@hexbear.net
                    ↑10 ↓1 · 4 days ago

                    Language is what separates us from all other great apes, and you are trying to frame it like any other skill? The machine deskills the worker in the labor process. Text generation is subsuming the skill of literacy from the laborer into the machine. Reliance on AI text generation will, over time, as the labor force is reproduced, deskill those workers in literacy.

                    It will have knock-on effects as it makes those workers reliant on the model for understanding, interpretation, and comprehension. It is going to make humans, as a collection of thinkers, less capable of doing the fundamental thing that makes us human: communicating with language.

                    Everything Marx wrote about how machines impact the labor process and the laborer applies to AI, and it has deep ramifications for future laborers. It’s baffling to me that someone might think otherwise.