Consider this. An AI chatbot uses 200 million times more energy than the human brain to perform the tasks it does. Once the task is done, the human owner of the brain can then, if he or she so chooses, cook dinner, pick up the kids, run a triathlon, play music, write a letter, unplug the drain, shop for clothes, paint the fence – you name it. But the AI chatbot can only run variants of the same task.
So why is so much money being sunk into a technology when we already have a cheaper, far more versatile one? The answer is that AI can do specific tasks quicker – sometimes much quicker – and more accurately than the human brain ever could. Even if it is good at only one thing, it can be very good at that thing.
We’re starting to see the benefits of this in, for instance, medicine and pharmaceutical research, where doctors can now produce faster and more accurate diagnoses and researchers can speed up the pace of experimental work. Similarly, accountants and lawyers can now hand off tasks like preparing briefs or spreadsheets, improving their productivity while reducing their need for labour.
The progress isn’t always linear, though. Research has found that doctors who use AI for diagnostics tend, on average, to lose diagnostic skill. Smarter machines can make for dumber people, so for now, at least, AI outputs need to be verified by humans. Nevertheless, AI could automate some tasks while enhancing human performance in others, so its potential seems considerable.
Yeah, I very much agree with this. LLMs are amazing at very specific things. There is actual use there in a way that something like crypto, which was a solution in search of a problem, never had. Nobody needs “blockchain” anything. LLMs are really good at boilerplate code, or at tagging things to identify patterns, and tasks of that nature. That said, I don’t think it’s possible for LLMs to become “Artificial General Intelligence” or whatever, and while they will put a lot of people out of work (and in many cases already are), they’re not wholesale replacements for workers. You can’t get an LLM to code up an application autonomously. It’s just not how the tech works.
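To make the “tagging things to identify patterns” point concrete, here’s a minimal sketch of LLM-assisted classification. It assumes the OpenAI Python client with an API key in the environment; the model name, category list, and the tag_ticket helper are all illustrative placeholders, not anything from the comment above.

```python
# A minimal sketch of the kind of tagging task an LLM handles well:
# labelling free-text items with one of a fixed set of categories.
# Assumes the OpenAI Python client (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable. Model name and categories
# are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["billing", "bug report", "feature request", "other"]

def tag_ticket(text: str) -> str:
    """Ask the model to label a support ticket with exactly one category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-completion model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the user's message into exactly one of these "
                    f"categories: {', '.join(CATEGORIES)}. "
                    "Reply with the category name only."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    # Fall back to "other" if the model replies with something off-list.
    return label if label in CATEGORIES else "other"

if __name__ == "__main__":
    print(tag_ticket("I was charged twice for my subscription this month."))
    # Expected: billing
```

Constraining the model to a fixed label set and normalizing its reply is what makes this narrow use reliable enough to verify by spot-checking rather than reading every item – which is exactly the “very good at one specific thing” pattern.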
I think this AI bubble is going to pop when we get a bigger “DeepSeek”-type moment coupled with LLMs hitting a plateau. DeepSeek made an amazing model that was supposedly vastly more efficient, so all those data centers? Nah, don’t need them. BUT it wasn’t as good as the frontier models like GPT-4, Claude 3 Sonnet, or Gemini 2. Once there’s a (probably Chinese) model that is just as good as a frontier model but runs on a fraction of the processing power – and that’ll happen once those frontier models stop improving so much and kind of taper off, which we’re already seeing with GPT-5 – the whole bubble bursts and all that data center buildout looks real stupid. Until that happens, the gravy train will keep on going.