he’s currently shilling for a company trying to bring AI into schools
Fucking wut? I’m going back to school to finish out my engineering degree and every single class has a giant multi-paragraph disclaimer about use of AI as scholastic dishonesty that will not be treated lightly. I mean, good, but wtf does he mean? Like elementary and high schools? What happens when these kiddos get to college after they’re completely dependent on some LLM to even think for them and the professor is like: “nah dawg”? They just flunk the fuck out because they can’t generate a single coherent thought independently? What the fuck is the end-game of this whole “AI” crap?
Even viewing it from a capitalist lens, I don’t understand what the goal is. So “AI” becomes so smart and self-recursive that human labor is arbitrary. OK. Then what? You live in capitalism. People have to buy your widgets. If nobody has any money to buy your widgets because “AI” put everyone out of work, then all you have is a warehouse full of “AI”-generated widgets that nobody can buy. So now what? I’m not being facetious. Seriously, now what? How does this work out in your favor? How does this work out in anyone’s favor?
Is AI selling AI-generated goods to other AI? WTF is the end-goal of this exercise?
I am currently doing my CS degree and I have professors who basically don't really teach the material anymore because "you should ask ChatGPT your questions"
Unfortunately LLM policies are often determined on a class-by-class basis. I have met lots of educators and education administrators across K-12 and university who are pro-LLM usage. They typically don't understand how it works, and/or believe it can currently do more than it actually can.
I have had undergrad students who use ChatGPT without question, thinking it's totally fine (despite my syllabus policy), and K-12 students who won't touch it with a 10-ft pole after I cautioned them against it. Lots of individual variation and credulity.
Unfortunately this is still better in terms of disclosure than in industry where executives will just feed company secrets to Altman even if there’s an internal LLM built just to prevent that, and the BS generator still creates BS sometimes. I’m sure there are plenty of students who just vibe code without having a BS filter (faculty and staff encouraging LLM usage isn’t inherently problematic IMHO, but when they do this to offload the work of actually teaching, it really doesn’t help…) and it’s only going to make their lives harder.
It's a major push. The AI companies are trying to do what Apple and Microsoft did and make kids dependent on their tech.
That’s next quarter’s problem
Sadly my institution just wants you to cite your use of AI. Upsetting.