To be self-directed, it requires needs or wants of its own. Those needs can also be externalized, the way we do with LLMs today: the user prompt supplies a goal for the system, and it then works to accomplish it. That said, I entirely agree that self-directed systems are more interesting. If a system has needs that drive it to maintain homeostasis, such as keeping an optimal energy level, then it can act and learn autonomously.
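A minimal sketch of that homeostasis idea: treat deviation from an internal setpoint as a negative intrinsic reward, so the agent has a reason to act even with no external prompt. All names here (`HomeostaticAgent`, the `forage`/`explore` actions, the specific numbers) are illustrative assumptions, not any real system's API.

```python
# Hypothetical sketch: a homeostatic drive as an intrinsic reward signal.
# The class, actions, and constants are all illustrative, not from a real system.

class HomeostaticAgent:
    def __init__(self, setpoint=0.7):
        self.setpoint = setpoint
        self.energy = setpoint  # internal state the agent tries to keep stable

    def intrinsic_reward(self):
        # Reward is highest (zero) when energy sits exactly at the setpoint.
        return -abs(self.energy - self.setpoint)

    def step(self, action):
        # "forage" raises energy, "explore" spends it; both are toy actions.
        if action == "forage":
            self.energy = min(1.0, self.energy + 0.1)
        else:  # "explore"
            self.energy = max(0.0, self.energy - 0.05)
        return self.intrinsic_reward()

    def choose_action(self):
        # Greedy one-step policy: act to reduce deviation from the setpoint.
        return "forage" if self.energy < self.setpoint else "explore"

agent = HomeostaticAgent()
for _ in range(10):
    agent.step(agent.choose_action())
```

With this loop the agent oscillates near its setpoint with no external goal at all; in a learning setting, the intrinsic reward would train the policy instead of being hand-coded.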
Ok but how is it getting intelligent before the user prompt?
The AI isn’t useful until it has been grown and evolved. I’m talking about the earlier stages.
We can look at video-generating models as examples. I’d argue they must maintain a meaningful, persistent internal representation of the world. Consider something like Genie as an example: https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
It doesn’t have volition, but it does have intelligence in the domain of creating consistent simulations. So it does seem like you can get domain-specific intelligence through reinforcement training.