I think “adding predators” is kind of cargo cult. You can just give it obstacles that are actually relevant to problems it could conceivably need to solve, because we aren’t making AI with the main hope that it can successfully survive a wolf attack. It doesn’t need to recapitulate what humans did to evolve when it is fundamentally not an early human and we don’t want to use it for those conditions.
It’s going to have a completely different development path without any kind of predators. Almost everything in our world is either prey, predator, or both.
I struggle to believe that it will be well adapted to a human reality where predation exists (where humans are sometimes predators to other humans too, socially) without a development path that adapts it for that reality.
We don’t know how to skip straight to human-level intelligence. We need to create wild-animal-level artificial intelligence before we can understand how to improve it into human-level intelligence. If we can’t make artificial monkeys or macaws in terms of emotional/social intelligence and problem-solving, then we certainly can’t make anything akin to humans, let alone whatever next-level intelligence we’re hoping for when an AI can make better versions of itself, iteratively pushing technology to levels we haven’t imagined yet.
My point is that it’s a different kind of thing. Role-playing that it’s an animal in the wild instead of a cogitating tool is counterproductive. We aren’t going to be sending it out into the jungle to survive there; we want it to deal with human situations and help people, with its own “survival” being instrumental to that end. Even if it encounters a hostile human, it probably won’t be able to do anything about it, because we aren’t immediately talking about building androids; this AI will effectively be a big bundle of computers in a warehouse. If you want to give an experimental AI control over the security systems of the building housing it, go off I guess, but containing an intruder by locking the doors and calling the cops and the facility’s owner, with nothing else it can really do, is not “avoiding predators” on any level beyond strained metaphor, and it requires no engagement in this “surviving in the wild” role-play. It’s just identifying a threat and then doing something that, the threat being identified, any modern computer can do. If you want to say it needs to learn to recognize “threats” (be they some abstraction in a game simulation, a fire, a robber, or a plane falling out of the sky), sure, that’s fair; that falls within obstacles it might actually encounter.
Nothing I’m saying bears on the level of intelligence it exhibits or is capable of. I’m not saying it needs to handle these things as well as a human would, just that it needs to be given applicable situations.
I feel like you’ve misunderstood me. You’re talking consistently about one single AI.
Machine learning is not one AI. It is thousands of generations of AI that have iteratively improved over time through their successes and failures. The best performers of each generation go on to form the basis of the next, or you have survival mechanics that automatically form new generations.
This isn’t training a model by giving something input data. Entire neural networks we do not understand are formed through an attempt at artificial natural selection.
If your process isn’t similar to the one that produced humans, you aren’t going to produce something similar to humans. I honestly think that’s dangerous in and of itself: you’re creating something whose neural network might be fundamentally at odds with coexistence with humanity.
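To be concrete about what I mean by generations, the loop is basically artificial selection. Here is a minimal sketch in Python; the genome-as-number-list, the toy fitness function, and the mutation rate are all made-up placeholders, not anyone’s actual training setup:

```python
import random

POP_SIZE = 50        # individuals per generation
N_GENES = 8          # length of each "genome" (stand-in for network weights)
ELITE_FRAC = 0.2     # top fraction that seeds the next generation
MUTATION_STD = 0.1   # how far offspring drift from their parents
GENERATIONS = 100

def fitness(genome):
    # Toy objective: reward genomes whose values are close to 1.0.
    # In a real setup this would be performance in some environment.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome):
    # Offspring are copies of a parent with small random perturbations.
    return [g + random.gauss(0, MUTATION_STD) for g in genome]

# Start from a random population.
population = [[random.uniform(-1, 1) for _ in range(N_GENES)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Score everyone and keep the best performers (the "survivors").
    survivors = sorted(population, key=fitness, reverse=True)
    elites = survivors[:max(1, int(POP_SIZE * ELITE_FRAC))]
    # The next generation is built from mutated copies of the elites.
    population = [mutate(random.choice(elites)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness after", GENERATIONS, "generations:", round(fitness(best), 4))
```

Whatever pressures you put into that fitness function, predators or not, are the only thing the survivors are ever shaped by.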
But you’re able to designate what its goals are completely arbitrarily. It doesn’t need to think like a human (there are humans who have been at odds with coexistence with humanity); it needs to be constructed around the value of human benefit, and you can seriously just tell it that. That isn’t changed by it cogitating in a structurally different way, which it almost certainly would be doing anyway, because the way we cogitate is highly adapted to early humanity but is also structurally shaped by random mutations from before then. Something could think very differently and nonetheless be just as capable of flourishing in those circumstances. The difference is compounded by the fact that you probably aren’t going to produce an accurate simulation of an early human environment, because you can’t just make a functional simulation of macroscopic reality like that. Even if your method made sense, it would still ultimately fall back on what I’m saying about arbitrary stipulation, because the model environment would be based on human heuristics.
But far more important than that is the part where you, again, can just tell it that human benefit based on human instructions is the primary goal, and it will pursue that, handling things like energy acquisition and efficiency secondarily.
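To make “just tell it” concrete: in optimization terms, stipulating that human benefit is primary and energy/efficiency secondary is just a choice of objective. A minimal sketch, where every term and weight is an invented placeholder rather than any real system’s objective:

```python
# Toy composite objective: the priority ordering is a weighting choice made
# by the designer, not something the system has to evolve into.
# All terms and weights here are hypothetical placeholders.

PRIMARY_WEIGHT = 1.0     # human benefit, as scored by some evaluation
ENERGY_WEIGHT = 0.01     # secondary: penalize energy use, but only weakly
LATENCY_WEIGHT = 0.001   # secondary: mild preference for responding quickly

def objective(human_benefit_score, energy_used_kwh, latency_s):
    """Higher is better; the primary term dominates by construction."""
    return (PRIMARY_WEIGHT * human_benefit_score
            - ENERGY_WEIGHT * energy_used_kwh
            - LATENCY_WEIGHT * latency_s)

# Two hypothetical behaviours: one helps more but costs more energy and time.
print(objective(human_benefit_score=0.9, energy_used_kwh=5.0, latency_s=2.0))
print(objective(human_benefit_score=0.5, energy_used_kwh=0.5, latency_s=0.5))
```

The point is just that the priority ordering lives in the designer’s weighting, not in whatever environment the thing was evolved in.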