Consider this. An AI chatbot uses 200 million times more energy than the human brain to perform the tasks it does. Once the task is done, the human owner of the brain can then, if he or she so chooses, cook dinner, pick up the kids, run a triathlon, play music, write a letter, unplug the drain, shop for clothes, paint the fence – you name it. But the AI chatbot can only run variants of the same task.
So why is so much money being sunk into a technology when we already have a cheaper, far more versatile one? The answer is that AI can do specific tasks quicker, sometimes much quicker, and more accurately than the human brain ever could. It may be good at only one thing, but at that one thing it can be very good.
We’re starting to see the benefits of this in, for instance, medicine and pharmaceutical research, where doctors can now produce faster and more accurate diagnoses and researchers can speed up the pace of experimental research. Similarly, accountants and lawyers are now able to hand off tasks like the preparation of briefs or spreadsheets, improving their productivity while reducing their need for labour.
The progress isn’t always linear, though. Research has found that doctors who use AI to perform diagnostics tend, on average, to lose diagnostic skill. Smarter machines can make for dumber people, so for now, at least, AI outputs need to be verified by humans. Nevertheless, AI could automate some tasks while enhancing human performance in others, so its potential seems considerable.
I keep seeing research thrown around as a use case, but I can’t fathom what that use case would be. Fields with large datasets use neural networks and machine learning for analysis, but those aren’t LLMs, and they’ve been doing this for a long time now.
I’m also wildly skeptical about them actually helping to improve medical diagnostics and outcomes. I can’t imagine that further distancing doctors from their patients will lead to improved treatment.
It’s like an intellectual PED (performance-enhancing drug): it completely lacks knowledge or insight, but it is a really good predictive engine. It would be useful for things like searching for citations, for example. Like, oh, I see you’re covering this topic; have you seen these papers from these other authors you’ve never heard of in a niche journal?
The risk, of course, is that, as with actual PEDs, people can become reliant on them for performance. The reason I don’t use LLMs for work is that I want to preserve my voice and my ability to formulate arguments.
An LLM can’t generate a logical thread, for example; it only parrots. But for something like drafting a first pass at a corporate contract, that’s perfect.
The article says that LLMs have already proven to help researchers “speed up the pace of experimental research.”
Recommending papers that you haven’t heard of wouldn’t do that. It has no idea how to evaluate research, and there are so many paper mills that will publish literally anything that, even without hallucination, you couldn’t trust the connections it makes. Also, any researcher doing work worth a damn has probably heard of just about every relevant piece of research. I was getting physical copies of papers from the ’70s faxed to me when I was in grad school; Grok isn’t doing that.
Pharmaceutical companies, for sure, are using AlphaFold combined with other physical-interaction deep learning models to develop leads on new small molecules, but, again, none of those are LLMs.
As for stuff like corporate contracts or legal briefs, you’re hiring those people for their knowledge, not the documents they can produce. If they’re outsourcing their practice to a machine, then they’re not worth hiring.