Consider this. An AI chatbot uses 200 million times more energy than the human brain to perform the tasks it does. Once the task is done, the human owner of the brain can then, if he or she so chooses, cook dinner, pick up the kids, run a triathlon, play music, write a letter, unplug the drain, shop for clothes, paint the fence – you name it. The AI chatbot, by contrast, can only run variants of the same task.
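To get a feel for what that multiple means, here is a rough back-of-envelope check in Python. The ~20 W figure for the brain is a commonly cited estimate, the 200-million multiple is the claim above (read here as a like-for-like power ratio), and this is an illustration rather than a measurement:

```python
# Back-of-envelope scale check. Assumptions: the human brain runs on
# roughly 20 watts (a commonly cited estimate); the 200-million-times
# multiple is the claim made above, not an independently verified figure.
BRAIN_WATTS = 20
MULTIPLE = 200_000_000

ai_watts = BRAIN_WATTS * MULTIPLE
print(f"{ai_watts / 1e9:.0f} GW")  # prints "4 GW"
```

Four gigawatts is on the order of several large power stations, which is why the comparison stings.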
So why is so much money being sunk into a technology when we already have a cheaper, far more versatile one? The answer is that AI can do specific tasks quicker – sometimes much quicker – and more accurately than the human brain ever could. It may be good at only one thing, but at that one thing it can be very good.
We’re starting to see the benefits of this in, for instance, medicine and pharmaceutical research, where doctors can now produce faster and more accurate diagnoses and researchers can speed up experimental work. Similarly, accountants and lawyers can now hand off tasks like preparing briefs or spreadsheets, improving their productivity while reducing their need for labour.
The progress isn’t always linear, though. Research has found that doctors who use AI for diagnostics tend, on average, to lose diagnostic skill. Smarter machines can make for dumber people, so for now, at least, AI outputs need to be verified by humans. Nevertheless, AI could automate some tasks while enhancing human performance in others, so its potential seems considerable.
This echoes my main criticism of AI haters: there is a lot to hate about the industry, but there is also genuine utility there. This is not the same thing as the metaverse or crypto. That said, it also seems unlikely that the most “optimistic” scenarios the pitchmen are blathering about will come true.
I think at some point, though, questions will be raised about the total addressable market (TAM), and that number will be a lot smaller than anybody wants to admit (though probably still pretty big by normal human standards). And the question will not be whether the technology works for a given market but whether, with the exorbitant costs in mind, it makes sense to pursue that market at all.
The main criticism of models right now is that they produce garbage output, because people are attempting to use general models for bespoke purposes. Absent the ability to tune the models, the only alternative is to control more carefully what data is included in the prompt. That has to be a human decision, and nobody wants to do that work; they think they can just flip the switch on the text generator and hit “go”. A sketch of what that curation looks like follows.
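A minimal sketch of that human-in-the-loop curation, in Python. Both `build_prompt` and `call_model` are hypothetical names introduced for illustration; `call_model` stands in for whatever chat-completion API you actually use:

```python
# Minimal sketch: a human vets which documents go into the prompt,
# instead of pointing a general model at an unfiltered corpus.
from typing import List

def build_prompt(task: str, curated_docs: List[str]) -> str:
    """Assemble a prompt from context a human has vetted for this task."""
    context = "\n\n".join(curated_docs)  # the human decision: what goes in
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}"
    )

def call_model(prompt: str) -> str:
    """Placeholder for a real model API call; not an actual library function."""
    raise NotImplementedError

# Hypothetical usage:
# prompt = build_prompt("Summarise the indemnity clauses.",
#                       ["<vetted contract excerpt 1>",
#                        "<vetted contract excerpt 2>"])
# answer = call_model(prompt)
```

The point of the sketch is only that the selection step is manual: somebody has to decide which excerpts are relevant and trustworthy before the model ever sees them.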
That problem will be cracked, though, either by making metadata sourcing easier or, more likely, just by unlocking cheaper retraining; the opportunity is too obvious to anyone who works with this stuff. Any job that relies on producing or analyzing written documents will be disrupted. The market is huge. It’s not going to solve physics, but it sure as hell will generate and analyze contracts, process invoices, and handle all the other tasks white-collar workers do to feed their families.