I think there are plenty of legitimate criticisms to be made, but most of the problems stem from our fucked up economic system as opposed to the tech itself. In my view, people end up focusing too much on the wrong thing here.
Joseph Weizenbaum, creator of the first chatbot (ELIZA), wrote a book called Computer Power and Human Reason where he argued that we shouldn’t be so ready to accept technologies that have extremely negative moral and ethical consequences. It’s a good book and very relevant for something written in 1976.
It’s extremely expensive to use this tech and the benefits are negative. It’s only being propped up because capital thinks it can destroy labor with it. Also, these models are being used to put enormous surveillance into the hands of capital.
That hasn’t been true for a long time now. You can literally run models on your laptop. Meanwhile, AI being used for nefarious purposes by capital is a problem of being ruled over by capitalists, not of the technology. Do you suggest we just stop all technological progress because technology will inevitably be abused by capitalists?
Damn, I can run the models on my laptop? How were they trained? What benefits will they give me?
I’m sorry for being snarky here, but frankly speaking LLM output in general is neutral at best: for what I’m an expert in I have little need of it, and for what I’m not I have little trust of it. And yes, while I can do the matrix math locally now, and get output of vanishing usefulness, it still embodies the fuel and treasure burned to generate it, and it embodies the theft of labor it is backed by. That matrix being handed to me to chew on the spare compute of my laptop - and let’s sidestep the issue of that occurring at scale, as if a thousand people generating a gram of poison is somehow different than one generating a kilogram - was still generated via those expensive processes. It embodies it in the same way my coat embodies the labor and fuel used to make it and ship it to me, but at least the coat keeps me warm.
We can say all we want that the issue is the economic system - we would have no need of copyright, of being concerned about the theft of art and creativity, or about the breaking of labor, if only we simply didn’t live under capitalism. And I agree! The issue is we do. And so I’m uninterested in hand-waving those concerns away in the name of some notion of “technological progress”.
Obviously not. But these models are going to entrench the power of capital over society. This isn’t the mechanical loom, this is an automated gun pointed at anyone who isn’t a billionaire.
Bad take. The Always Agrees Machine causing people to intensify psychosis or delusions is a problem tied to AI. Before, it was web forums of like-minded people, but the surge in psychosis is new.
The creator of ELIZA found that people would sneak into his office late at night in order to have secret conversations with it. They would say that the chatbot understood them, even those who understood it was a program. He would be asked to leave the room so they could have private conversations with it. He found this very alarming, and it was one of the reasons he wrote the book. These stories are in that book.
It’s already been shown that people who rely on LLMs quickly lose any literary and critical thinking skills they had. And they aren’t as easy to get back as they are to lose. Frankly, I don’t see what’s so hard about writing, but maybe that’s me?
I’ve seen these studies as well, but some people never had good literary skills. Not everybody is a good communicator, and not everyone has time to invest in developing these skills. If a tool can help with that I don’t really see a problem here.
This is a fault of our shitty societies and what they value, not some kind of biological essentialism. And people don’t have the time to invest in these skills because of our shitty society and what it values. “AI” only makes people worse at these basic skills that our ancestors fought and died for us to have. And people are willingly giving them up. I don’t think as leftists we should be applauding technology designed to make us more ignorant.
I mean nobody has time to invest in being good at everything. Everybody has specific things they find more interesting than others. It’s not just about our shitty society preventing people from investing in these skills. I fundamentally disagree with the notion that technology makes us more ignorant.
Language is what separates us from all other great apes, and you are trying to frame that like any other skill? The machine deskills the worker in the labor process. Text generation is subsuming the skill of literacy from the laborer and into the machine. Reliance on AI text generation will, over time, as the labor force is reproduced, deskill those workers in literacy.
It will have knock-on effects as it makes those workers reliant on the model for understanding, interpretation, and comprehension. It is going to make humans, as a collection of thinkers, less capable of doing the fundamental thing that makes us human: communicating with language.
Everything Marx has written in regards to how machines impact the labor process and the laborer applies to AI, and it has deep ramifications for future laborers. It’s baffling to me that someone might think otherwise.
The punchline is at the end:
Some shit that a new grad in journalism would’ve been doing
Frankly, I don’t see the problem with using AI for styling text.
Well, you are kind of Hexbear’s resident LLM evangelist.
Frankly, I don’t see what these extremely negative moral and ethical consequences inherent to the technology itself are.
There’s all the psychosis, for one, and the fact that reliance on them makes people stupid.
That’s a problem inherent in the fucked up society capitalism built that alienates people. Has fuck all to do with AI.
Capitalism is the baby-crushing machine; direct your anger there.