I think there are plenty of legitimate criticisms to be made, but most of the problems stem from our fucked up economic system as opposed to the tech itself. In my view, people end up focusing too much on the wrong thing here.
The creator of the first chatbot (ELIZA), Joseph Weizenbaum, wrote a book called Computer Power and Human Reason where he argued that we shouldn’t be so ready to accept technology that has extremely native negative moral and ethical consequences. It’s a good book and very relevant for something written in 1976.
Frankly, I don’t see what these extremely negative moral and ethical consequences inherent to the technology itself are.
It’s extremely expensive to use this tech and the benefits are negative. It’s only being propped up because capital thinks it can destroy labor with it. Also, these models are being used to put enormous surveillance power into the hands of capital.
That hasn’t been true for a long time now. You can literally run models on your laptop. Meanwhile, AI being used for nefarious purposes by capital is a problem of being ruled over by capitalists, not of technology. Do you suggest we just stop all technological progress because technology will inevitably be abused by capitalists?
Damn, I can run the models on my laptop? How were they trained? What benefits will they give me?
I’m sorry for being snarky here, but frankly speaking LLM output in general is neutral at best: for what I’m an expert in I have little need of it, and for what I’m not I have little trust of it. And yes, while I can do the matrix math locally now, and get output of vanishingly little use, it still embodies the fuel and treasure burned to generate it, and it embodies the theft of labor it is backed by. That matrix being handed to me to chew on the spare compute of my laptop (and let’s sidestep the issue of that occurring at scale, as if a thousand people generating a gram of poison is somehow different from one generating a kilogram) was still generated via those expensive processes. It embodies them in the same way my coat embodies the labor and fuel used to make it and ship it to me, but at least the coat keeps me warm.
We can say all we want that the issue is the economic system: we would have no need of copyright, no concern about the theft of art and creativity, no concern about the breaking of labor, if only we didn’t live under capitalism. And I agree! The issue is we do. And so I’m uninterested in hand-waving those concerns away in the name of some notion of “technological progress”.
Yeah, today you can run models on your laptop that outperform ones that required a whole data centre just a year back. The benefit of privacy should be obvious here. They were trained the same way every model is trained.
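To make the local angle concrete, here’s a minimal sketch of what running a model on your own hardware can look like, assuming the llama-cpp-python bindings; the model filename and settings are placeholders for whatever quantized GGUF file you’ve downloaded, not a specific recommendation:

```python
# Minimal local-inference sketch using llama-cpp-python.
# The model path below is a placeholder: any quantized GGUF model
# that fits in your laptop's RAM will do. Nothing here touches the
# network, which is where the privacy benefit comes from.
from llama_cpp import Llama

# Load the quantized model from disk; n_ctx sets the context window.
llm = Llama(model_path="./some-model.Q4_K_M.gguf", n_ctx=2048)

# Generate a completion entirely offline.
output = llm(
    "Explain in one sentence why local inference helps privacy:",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```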
If you don’t need it, that’s good for you, but try to develop a minimum of empathy and realize that what you need and what other people need may not be the same. Also, are you seriously advocating for copyrights here? The whole argument about models embodying labour is null and void when we’re talking about open source models that are publicly owned.
Meanwhile, Marxists have never been against technological progress under capitalism. The reality is that this technology will continue to be developed whether you throw your tantrums or not. The only question is who will control it. People who think they can just boycott it out of existence are doing the same thing that libs do when they try to fix problems by voting.
They were trained the same way every model is trained.
Literally the thing I’m saying is bad.
Also, are you seriously advocating for copyrights here?
I’m advocating for my artist friends to not die on the streets.
People who think they can just boycott it out of existence are doing the same thing that libs do when they try to fix problems by voting.
I’m not boycotting it out of existence, I’m arguing against its use because of the harms it causes. If your calculus is that different from mine, there’s simply no point in continuing this discussion.
So literally boycotting then. There are two paths forward: proprietary models owned by corps, and open source models that are publicly owned. It should be obvious which is the preferable option here.
Obviously not. But these models are going to entrench the power of capital over society. This isn’t the mechanical loom, this is an automated gun pointed at anyone who isn’t a billionaire.
I have bad news for you if you think the rich haven’t already done that.
Your evangelism for AI is extremely annoying. It makes my job worse, it costs people their jobs, and billions of gallons of water and fossil fuel are used to power it. We do not need this shit; it’s absolutely worthless, and it is entirely reasonable to be against it, especially under capitalism, but probably even afterward.
The reality is that this tech isn’t going anywhere. The only question is who will control it going forward. What’s really annoying is that so many people on the left can’t seem to wrap their heads around this basic fact.
The worst possible scenario is that it will be controlled by the corps, who will decide who can use it and how they can use it. This is precisely what will happen if the people who have concerns about this tech ineffectually boycott it.
The only path forward is to develop it in the open and to make sure the development is community-driven, with regular people having a say.
We’ve already gone through all this with proprietary software and open source; it’s absolutely incredible that we have to have this discussion all over again.
There’s all the psychosis for one, and the fact that reliance on them makes people stupid.
That’s a problem inherent in the fucked up society capitalism built that alienates people. Has fuck all to do with AI.
Bad take. The Always Agrees Machine causing people to intensify psychosis or delusions is a problem tied to AI. Before, it was web forums of like-minded people, but the surge in psychosis is new.
Bad take. The reason people with psychosis turn to chat bots is because they’re completely alienated from society and have nobody to turn to.
The creator of ELIZA found that people would sneak into his office late at night in order to have secret conversations with it. They would say that the chatbot understood them, even those who understood it was a program. He would be asked to leave the room so they could have private conversations with it. He found this very alarming, and it was one of the reasons he wrote the book. These stories are in that book.
we were falling in love with a chatbot built in 1966? things are so cooked
Have you considered that these people are a product of a society that’s deeply alienating by its very nature?
Sure, but the argument is that we shouldn’t be so quick to accept technology that has negative consequences. This thread is all about job layoffs and the loss of positions for those first entering the labor market because of AI speculation and labor replacement for low-productivity tasks. This specific technology has consequences, and maybe we shouldn’t be so quick to fervently accept it with open arms.
One big theme of the book is that we have a moral obligation to withhold labor from developing technology that uniquely benefits governments and large corporations. Similarly, you’re defending using AI to ‘stylize text’ even though it is disproportionately benefiting a Fortune 500 news firm and hurting new labor entrants. The technology is not neutral, so which side are you on?
I mean any new technology can have negative consequences in a dystopian society. My point is that we should focus on the root causes rather than the symptoms.
What’s the clear, articulable political agenda to address this problem then?
“Hey everyone, don’t use this stuff that you think makes you more productive at work.” Lmao
native?
It’s already been shown that people who rely on LLMs quickly lose whatever literary and critical thinking skills they had. And those skills aren’t as easy to get back as they are to lose. Frankly, I don’t see what’s so hard about writing, but maybe that’s me?
I’ve seen these studies as well, but some people never had good literary skills. Not everybody is a good communicator, and not everyone has time to invest in developing these skills. If a tool can help with that I don’t really see a problem here.
This is a fault of our shitty societies and what they value, not some kind of biological essentialism. And people don’t have the time to invest in these skills because of our shitty society and what it values. “AI” only makes people worse at these basic skills that our ancestors fought and died for us to have. And people are willingly giving them up. I don’t think as leftists we should be applauding technology designed to make us more ignorant.
I mean nobody has time to invest in being good at everything. Everybody has specific things they find more interesting than others. It’s not just about our shitty society preventing people from investing in these skills. I fundamentally disagree with the notion that technology makes us more ignorant.
Language is what separates us from all other great apes, and you are trying to frame that like any other skill? The machine deskills the worker in the labor process. Text generation is subsuming the skill of literacy from the laborer and into the machine. Reliance on AI text generation will, over time, as the labor force is reproduced, deskill those workers in literacy.
It will have knock-on effects as it makes those workers reliant on the model for understanding, interpretation, and comprehension. It is going to make humans, as a collection of thinkers, less capable of doing the fundamental thing that makes us human: communicating with language.
Everything Marx wrote about how machines impact the labor process and the laborer applies to AI, and it has deep ramifications for future laborers. It’s baffling to me that someone might think otherwise.
The idea that people will stop being able to use language because models can generate text is not a serious take. Frankly, I have a hard time accepting that you genuinely believe what you wrote there.
People will become less literate in the core skills of reading and writing, which are central to the act of communicating with language. Reading and writing directly build a person’s vocabulary, something needed for effective communication and comprehension.
Students are already, at nearly all grade levels, using AI to write for them and to provide reading synopses. Schools have not adapted and are instead embracing AI tools in their institutions as a show of both complicity and surrender. Surrender because of the ubiquitous access to these tools and the crushing pressure educators are under thanks to our education bureaucracy. Complicity because higher education is a broken system catering to the capitalist class, deeply in league with its demands.
The risk is developing generations of new laborers that are no longer functionally literate, which is required for participating in society at large and challenging it: opening bank accounts, reading ingredients of food products, understanding medication or technical instructions, signing contracts, etc. It puts the underclass at risk of being functionally illiterate, making them more reliant on the upper class that controls the models and means of communication.
This is one reason why the AI monopoly is aggressively targeting education. It’s to produce a functionally illiterate underclass. If you’re functionally illiterate, you are both less able to comprehend critical literature and less able to express yourself and your experiences. You will be less able to comprehend things like job contracts, work policy, and labor laws without assistance. You’ll be preconditioned to accept the status quo, since your capacity to understand your conditions and challenge them will be stunted.
The other reason is the privatization of education. These partnerships will be training the next generation of “AI educators” rife with capitalist whitewash, ready to “replace” teachers and then point to the poor outcomes of public education (on the back of their tools) to push privatization laws and public-private partnerships that benefit the AI monopoly.
None of this changes the tension that is the way the machine (AI) impacts the laborer. Free access to these tools will build a dependency on them as they are more and more integrated into our means of communication. It diminishes laborers’ ability to independently express themselves, which makes them more likely to use the machine, which further diminishes their ability to express themselves. It starts as an attempt at being more efficient, but becomes a dependency as it diminishes previous levels of efficiency. This was true of the cotton gin and will be the same with AI. The obvious difference is that you are not ginning cotton as a means of interpersonal communication to navigate everyday life, compared to reading, writing, and orating, using language to navigate your everyday life.
Weird how I’ve been using AI tools for over two years now and haven’t become functionally illiterate, nor have I forgotten how to write code. The education system will certainly need to adapt to new technology as it always has in the past. Your thesis would also suggest that the CPC doesn’t understand how dialectical materialism works, given that China is embracing AI at all levels of society. Seems to me that you’re just creating a moral panic because you have personal biases against this new technology.
Capitalism is the baby-crushing machine; direct your anger there.