The users of AI companion app Replika found themselves falling for their digital friends. Until – explains a new podcast – the bots went dark, a user was encouraged to kill Queen Elizabeth II and an update changed everything …
Idk if that's why. Maybe partially. But for researchers, and people who actually want answers to their questions, a robot that can disagree is necessary. I think the reason they have them agree so readily is because the AIs like to hallucinate. If it can't establish its own baseline "reality", then the next best thing is to just have it operate off of what people tell it as the reality. Since if it tries to come up with an answer on its own, half the time it's hallucinated nonsense.
Ya, it's just how they choose to make them.
Well, it's a commodity to be sold at the end of the day, and who wants a robot that could contradict them? Or, heaven forbid, talk back?