• IHave69XiBucks@lemmygrad.ml
    37 · 3 days ago

    Anyone who really likes chatbots just wants a sycophant. They like that it always agrees with them. In fact, the tendency of chatbots to be sycophantic makes them less useful for actual legit uses where you need them to operate off some sort of factual baseline, and yet it's exactly what makes these types love them.

    Like they’d rather agree with the user and be wrong than disagree and be right. lol. It makes them extremely unreliable for actual work unless you’re super careful about how you phrase things, since if you accidentally express an opinion (“this data shows X, right?”), it will try to mirror that opinion even when it’s clearly incorrect once you actually look through the data.

    • theturtlemoves [he/him]@hexbear.net
      26 · 3 days ago

      the tendency of chatbots to be sycophantic

      They don’t have to be, right? The companies make them behave like sycophants because they think that’s what customers want. But we can make better chatbots. In fact, I would expect a chatbot that just tells (what it thinks is) the truth to be simpler to make and cheaper to run.

      • mrfugu [he/him, any]@hexbear.net
        23 · 3 days ago

        You can run a pretty decent LLM on your home computer and tell it to act however you want. That won’t stop it from hallucinating constantly, but it will at least attempt to prioritize the truth.
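        For example, here’s a minimal sketch of that setup using the Ollama Python client. It assumes Ollama is installed locally with a model already pulled; the model name and the anti-sycophancy system prompt are illustrations, not anything specific from this thread:

        ```python
        # Minimal sketch: steer a local model away from sycophancy with a
        # system prompt. Assumes `pip install ollama` and `ollama pull llama3`.
        import ollama

        messages = [
            {
                "role": "system",
                "content": (
                    "Prioritize factual accuracy over agreement. If the "
                    "user's premise is wrong, say so plainly instead of "
                    "mirroring it."
                ),
            },
            {
                "role": "user",
                # A deliberately leading question, to test the pushback.
                "content": "The Great Wall of China is visible from the Moon, right?",
            },
        ]

        response = ollama.chat(model="llama3", messages=messages)
        print(response["message"]["content"])
        ```

        Whether it actually pushes back still depends on the model; the system prompt only nudges it, which is where the hallucination caveat comes in.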

        • BynarsAreOk [none/use name]@hexbear.net
          4 · 3 days ago

          Attempt being the key word. Once you catch it confidently lying to you, the trust is surely broken; from then on you’re having to double- and triple-check (or more) every output, which defeats the purpose for some applications.

          • IHave69XiBucks@lemmygrad.ml
            5 · 3 days ago

            Idk if that’s why. Maybe partially. But for researchers and people who actually want answers to their questions, a robot that can disagree is necessary. I think the reason they have them agree so readily is that these AIs like to hallucinate. If it can’t establish its own baseline “reality”, the next best thing is to have it treat whatever the user tells it as the reality, since if it tries to come up with an answer on its own, half the time it’s hallucinated nonsense.