Consider this. An AI chatbot uses 200 million times more energy than the human brain to perform the tasks it does. Once the task is done, the human owner of the brain can then, if he or she so chooses, cook dinner, pick up the kids, run a triathlon, play music, write a letter, unplug the drain, shop for clothes, paint the fence – you name it. But the AI chatbot can only run variants of the same task.
So why is so much money being sunk into a technology when we already have a cheaper, far more versatile one? The answer is that AI can do specific tasks quicker – sometimes much quicker – and more accurately than the human brain could ever do. If it is good at only one thing, it can nonetheless be very good at it.
We’re starting to see the benefits of this in, for instance, medicine and pharmaceutical research, where doctors can now produce faster and more accurate diagnoses and researchers can speed up the pace of experimental research. Similarly, accountants and lawyers are now able to hand off tasks like the preparation of briefs or spreadsheets, improving their productivity while reducing their need for labour.
The progress isn’t always linear, though. Research has found that doctors using AI to perform diagnostics tend, on average, to lose diagnostic skills. Smarter machines can make for dumber people, so for now, at least, AI outputs need to be verified by humans. Nevertheless, AI could automate some tasks while enhancing human performance in others, so its potential seems considerable.
It’s a bubble now for sure but so was the web in 2000. As with the dotcom bust, I am sure we will see some genuinely transformative tech and companies emerge from this.
There is a lot of utility in a massive pattern finder, no doubt. If someone can crack making fine-tuning cheaper, these things will become much more useful in the corporate context, which is what’s needed to unlock the revenue floodgates. I find that in practice few people actually understand the tech and its limitations, and the people who do have an interest in overselling its capabilities. But that’s neither here nor there.
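On the cost of fine-tuning: a minimal sketch of the main trick already used to cut it. Parameter-efficient methods like LoRA freeze the pretrained weights and train only a tiny low-rank correction, so you update thousands of parameters instead of millions. The dimensions here are illustrative, not any particular model's.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pretrained linear layer with a trainable low-rank delta."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # original output plus a low-rank correction learned on your own data
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,}")  # ~65k of ~16.8M parameters
```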
The main question we should ask ourselves is: should we really be building a society whose sole drive is to construct data centers so we can have better pattern-finding machines? Or are there better uses for those resources? This, of course, is rhetorical.
The difference between the US and China is that the latter has the moral belief systems and, importantly, the mechanisms of government to ensure that society’s productive capacity is properly allocated.
The end game for these AI freaks is to put us all out of work. They are slamming on the accelerator towards that end, and our governments are sitting back and watching. That should be cause for great concern.
I encourage any haters who are mathematically minded to spend a bit of time learning about the tech. A qualified hater should at least know what they’re talking about. Having gone through that process myself recently, I will say the tech is super cool, very useful for certain applications, but also being overhyped / oversold. It’s not gonna get us to the singularity, but it will for sure siphon huge amounts of surplus value and productive capacity, and destroy a shitload of jobs, while driving massive increases in efficiency.
I encourage any haters who are mathematically minded to spend a bit of time learning about the tech. A qualified hater should at least know what they’re talking about.
This echoes my main criticism of AI haters. There is a lot to hate about the industry but there is also genuine utility there. This is not the same thing as the metaverse or crypto. But it also seems unlikely that the most “optimistic” scenarios the pitchmen are blathering about will come true.
I think at some point though there will be some question about TAM, and that number will be a lot smaller than anybody wants to admit (but probably still a pretty big number by normal human standards). And the question will not be whether this will work for that market but whether, with the exorbitant costs in mind, it makes sense to pursue that market at all.
The main criticism of models right now is that they produce garbage output, because people are attempting to use general models for bespoke purposes. Absent the ability to tune the models, the only alternative is to better control what data is included in prompting. This has to be a human decision, and nobody wants to do that work. They think they can just flip the switch on the text generator and hit “go”.
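A minimal sketch of what “controlling what data is included in prompting” looks like in practice: a human curates the handful of sources that actually matter and pins them into the prompt. The file names are invented and `call_model` is a hypothetical stand-in for whatever completion API is in use.

```python
# Sources a human has deliberately chosen for this bespoke task.
CURATED_SOURCES = {
    "msa_template.txt": "...approved master services agreement template...",
    "style_guide.txt": "...house drafting conventions...",
}

def build_prompt(task: str) -> str:
    """Assemble a prompt that confines the model to curated material."""
    context = "\n\n".join(
        f"[{name}]\n{text}" for name, text in CURATED_SOURCES.items()
    )
    return (
        "Answer using ONLY the sources below. If they don't cover it, say so.\n\n"
        f"{context}\n\nTask: {task}"
    )

def call_model(prompt: str) -> str:
    # hypothetical stand-in: swap in a real completion client here
    raise NotImplementedError

# call_model(build_prompt("Draft a first-pass NDA for a vendor engagement"))
```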
That problem will be cracked though, either by facilitating metadata sourcing or, more likely, just by unlocking cheaper retraining. It’s too obvious to anyone who works with this stuff. Any job relying on producing or analyzing written documents will be disrupted. The market is huge. It’s not going to solve physics, but it sure as hell will generate and analyze contracts, process invoices, and handle all the other tasks white-collar workers are engaged in to feed their families.
Yeah, I have no issue with firms still researching AI because it must have some good uses. But the scale at which we are currently deploying these dogshit LLMs, just to make a profit by melting people’s brains, is infuriating and so wasteful.
Think of it more along the lines of the “ghost cities” China built in the 2010s, which are now bustling. The bet these guys are taking is that the models will continue to improve and you will need the compute either way.
Now what will absolutely nuke these guys is if open-source models outpace proprietary ones. But Microsoft and Amazon, along with NVIDIA, will still be in the catbird seat as they sell access to parallel processing capacity. Meta would be fucked of course. They’re probably fucked anyway honestly. Google too. Apple and Tesla are pure bubbles; honestly, they shouldn’t even be in the conversation. Why they are catching the AI tailwind is beyond me.
Edit: an actual good reddit comment
Yeah I very much agree with this. LLMs are amazing at very specific things. There is actual use there in a way that something like crypto, which was a solution in search of a problem, has never had. Nobody needs “blockchain” anything. LLMs are really good at boilerplate code or tagging things to identify patterns and things of that nature. That said, I don’t really think it’s possible for LLMs to be “General Artificial Intelligence” or whatever, and while they will (and in many cases currently are) put a lot of people out of work, they’re not wholesale replacements for workers. You can’t get an LLM to code up an application autonomously. It’s just not how the tech works.
I think this AI bubble is going to pop when we have a larger “Deepseek” type moment coupled with LLMs hitting a plateau. Deepseek made an amazing model that was like 1000x more efficient, so all those data centers? Nah, don’t need them. BUT it wasn’t “as good” as the frontier models like GPT4 or Claude Sonnet 3 or Gemini 2. Once there’s a (probably Chinese) model that is just as good as a “frontier model” (and that’ll happen once those “frontier models” stop improving so much and kind of taper off, which we’re already seeing with GPT5) but can do it with 1000x less processing power, the whole bubble bursts and all that data center buildout looks real stupid. Until that happens the gravy train will keep on going.
It’s a bubble now for sure but so was the web in 2000. As with the dotcom bust, I am sure we will see some genuinely transformative tech and companies emerge from this.
You forget we’re not still in the year 2000. The point is that the massive investment into these data centers, along with the necessary energy requirements and investments, plus the continued fueling of NVIDIA’s bubble, which itself relies on stable US-China-Taiwan relations, is a ticking bomb. There won’t be any good tech that outweighs the harm being done to get there. Might as well believe Google is OK because their search engine was better. Look where Google search is today; it took them barely a decade to destroy it.
This is the second point: the “rate of enshittification” is arguably increasing as well. It won’t take these companies even 10 or 15 years to destroy their own business; just look at the current power struggle between SaaS providers and these model owners.
This will only get worse. These AI providers will keep fighting to get money from somewhere, and that will lead to fewer features and less access for more money.
At the end of the day, assume even the best-case scenario: who gives a single fuck about “good AI” when it might cost $15–50 a month, or be fucked with unstoppable ads, or 10 different tiers or whatever nonsense. The industry is already fucked because there is no path towards a useful product.
Circle back to the web in 2000, and the first principle of its success: the internet was entirely free and open to navigation from everywhere in the world – as long as you had internet access, which is the first issue. It wasn’t really until the smartphone era and good high-speed mobile service that the internet got popular for the majority of the world.
Now go back to 2000 and say to yourself: the internet will be a success, except you’ll need to pay 20 different service providers and monthly fees, advertising will be almost unavoidable, and btw each provider only gives you like 10% of the internet; the other 90% is locked behind some other provider and monthly plan.
This is why this AI hopium is garbage. The 2000s business practices were different but arguably not as bad as they are today, and that is a key reason why those technologies were successful. I look at Apple’s Vision Pro shit and them behaving as if it’s still 2006, as if success is not earned but guaranteed – just build some high-profile “premium” garbage and people will eat it up… not.
and more accurately than the human brain could ever do
It can’t even spell simple words correctly. Shut the fuck up already, no one is buying this shit.
Spelling and grammar are the two things it does well; everything else ranges from “huh, that’s neat” to terrible.
I was thinking of the blueberry thing, where someone asked it to examine the word just a little more closely and it repeatedly fucks it up while gaslighting them.
https://kieranhealy.org/blog/archives/2025/08/07/blueberry-hill/
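For contrast with the linked exchange, the same question in ordinary code – deterministic, trivial, and roughly free:

```python
# Counting letters is a one-liner; no gaslighting involved.
word = "blueberry"
print(word.count("b"))                        # 2
print({c: word.count(c) for c in set(word)})  # full letter tally
```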
It’s reading it can’t do. An AI wouldn’t be able to pass a comprehension test made for 6-year-olds.
Consider this. An AI chatbot uses 200 million times more energy than the human brain to perform the tasks it does. Once the task is done, the human owner of the brain can then, if he or she so chooses, cook dinner, pick up the kids, run a triathlon, play music, write a letter, unplug the drain, shop for clothes, paint the fence – you name it. But the AI chatbot can only run variants of the same task.
This is a pretty good way of beginning to explain why machines don’t create value in the Marxian sense. The AI bubble is built on the implicit assumption that we are on the cusp of AGI and robotics that can match all human capabilities; basically, that we’re going to be building actual artificial people. The two fundamental problems with that thinking are 1) they’ve gotten way over their skis in terms of expectations from the tech and 2) we do actually have that already in the form of freeborn humans.
AI is only useful for tasks that you will need to double-check anyway, and it makes you dumber. So you may as well just do the work.
It has functions for processing large amounts of data, but like you said, only if it gets checked for accuracy after. Which is still faster than doing it yourself.
The AI revolution can be a dud – for investors. Most of the current “AI” companies will be wiped out.
However, the industrialization of white-collar jobs will certainly happen, and under capitalism it will bring all the disastrous effects described by a dude named Karl in the first volume of his fat book named Kapital. Oops, we proletarianized the professional-managerial class; now they’re forming industrial unions and bargaining for wages.
I keep seeing research thrown around as a use case but I can’t fathom what that use case would be. Fields with large datasets use neural networks and machine learning for analysis, but those aren’t LLMs and they’ve been doing this for a long time now.
I’m also wildly skeptical about them actually helping to improve medical diagnostics and outcomes. I can’t imagine that further distancing doctors from their patients will lead to improved treatment.
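To the point above about the non-LLM analysis tools these fields already use, a minimal sketch of that workhorse pattern: a small neural-network classifier over a tabular dataset, no language model anywhere. Synthetic data for brevity.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for a real research dataset (e.g., measurements -> outcome label).
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network: "machine learning", but not an LLM.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```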
It’s like an intellectual PED: it completely lacks knowledge or insights but is a really good predictive engine. It would be useful for things like searching for citations, for example (see the sketch below). Like, oh, I see you are covering this topic – have you seen these papers from these other authors you’ve never heard of in a niche journal?
The risk of course is that like actual PEDs people can become reliant on them for performance. The reason I don’t use LLMs for work is because I want to preserve my voice and the ability to formulate arguments.
An LLM can’t generate a logical thread, for example. It only parrots. But for something like drafting a first pass at a corporate contract, that’s perfect.
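On the citation-search idea above, a minimal sketch of how that kind of recommendation usually works: embed abstracts, embed your topic, rank by cosine similarity. Here `embed` is a hypothetical stand-in for a real sentence-embedding model, and the paper titles are made up.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in: a real system would call an embedding model."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)  # unit norm, so dot product = cosine similarity

# Invented corpus: paper titles mapped to abstract embeddings.
papers = {
    "Low-rank adaptation of large language models": embed("low-rank adaptation"),
    "A survey of retrieval-augmented generation": embed("retrieval augmentation"),
    "Protein structure prediction at scale": embed("protein structure prediction"),
}

query = embed("cheap fine-tuning for corporate deployments")
for title in sorted(papers, key=lambda t: float(papers[t] @ query), reverse=True):
    print(title)  # most related first – suggestions to check, not answers
```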
The article says that LLMs have already proven to help researchers “speed up the pace of experimental research.”
Recommending papers that you haven’t heard of wouldn’t do that. It has no idea how to evaluate research, and there are so many paper mills that will literally publish anything that, even without hallucination, you couldn’t trust the connections it makes. Also, any researcher doing work worth a damn has probably heard of just about every relevant piece of research. I was getting physical copies of papers from the ’70s faxed to me when I was in grad school; Grok isn’t doing that.
Pharmaceutical companies for sure are using AlphaFold combined with other physical-interaction deep learning models to develop leads on new small molecules but, again, none of those are LLMs.
As for stuff like corporate contracts or legal briefs, you’re hiring those people for their knowledge, not the documents they can produce. If they’re outsourcing their practice to a machine then they’re not worth hiring.
After a point, the more money you pump into an investment, the less likely it is to return any; when you’re investing more than the GDP of the average country, it’s guaranteed you won’t.
The other way I think about it is, even if the investment proves accretive, the revenue takes the form of income you are siphoning out of the rest of the economy. That money would otherwise go to pay people, or build real infrastructure. Instead it goes to a small number of techbros and data centers.
Physical productive capacity, absent external investment, is a zero-sum game: there’s only so much stuff we can build. And if collectively we are building nothing but data centers, there are many societal needs that will go unmet. This is what worries me most.
Of the two countries that currently dominate AI research, accounting for the vast bulk of spending, China seems to be opting for this approach. The country is rapidly finding practical applications using ‘good-enough’ technology, and the effects are showing up in everything from AI turnstiles on public transit to factory robots.
But the U.S., which accounts for two-thirds of the world’s spending on research, is going in a different direction. The revolution there is being led by Silicon Valley visionaries who want to go well beyond “good enough.” Determined instead to reach their Holy Grail of artificial superintelligence, their dream is to in effect build a better human – an AI model that can do everything a human brain does in the way of reasoning and creation, but better and faster.
It’s a highly speculative bet but one investors are, so far, willing to bankroll. The Magnificent Seven companies in the vanguard of the revolution have announced plans to spend hundreds of billions of dollars developing new chips, building data centres and developing applications.
Investors are matching their enthusiasm, bidding the share values of the Mag7 so high that Nvidia alone is now worth almost a tenth of the total value of the US stock market. Retail investors are throwing caution to the wind to join the gold rush, with margin debt at record levels, up 25 per cent in the last year alone.
But as time passes, the risks attached to this bet grow ever more evident. It’s now nearly three years since the launch of ChatGPT fired the starting gun on the AI revolution, yet we are still waiting for any evidence that AI will significantly raise output.
On the contrary, the growth of labour productivity remains as sluggish as it’s been for years, with few signs yet that the widespread adoption of AI – Americans prompt ChatGPT more than 300 million times a day – is doing anything to make most workers more productive.
And because AI is sucking in a sea of capital, there’s little left for everyone else, the result being that investment in the rest of the economy is now declining and corporate earnings outside the Magnificent Seven are largely stagnating.
Meanwhile AI’s insatiable appetite for energy is driving up electricity prices, helping to keep inflation from falling. Not only does that put pressure on interest rates, but it forces spending on other activities to fall. Reflecting this slowdown on Main Street, the economy is decelerating – so much so that the Trump administration is now openly mulling suspending economic reports so that Americans can’t hear about it.
The AI revolution is real and is already proving transformative. But whether it will justify the current spending under way in the U.S. is another matter. Equally, whether the American or Chinese model wins the future is now an open question.
The Trump administration is going all in on the tech bro evangelism, lifting as many controls as possible to facilitate its vision.
If the bet pays off, Donald Trump will be remembered as the president who ushered in the next American revolution. But if it fails, millions of Americans will have lost a fortune and the economy will have been set back.
That’s quite the bet indeed.