• nesc@lemmy.cafe · 3 points · 17 hours ago

    A chat bot will impersonate whoever you tell it to impersonate (as stated in the article). My point is pretty simple: people using a chat bot don’t need a guide that tells them how they should treat and interact with it.

    I get it, that was just perfunctory self-deprecation, with the intended audience being other first-worlders.

    • SaltSong@startrek.website · 13 points · 17 hours ago

      people using a chat bot don’t need a guide that tells them how they should treat and interact with it.

      Then why are people always surprised to find out that chat bots will make shit up to answer their questions?

      People absolutely need a guide for using a chat bot, because people are idiots.

      • chicken@lemmy.dbzer0.com · 2 points · 12 hours ago

        Not even just because people are idiots, but also because an LLM is going to have quirks you need to work around or exploit to get the best results out of it. For example, it’s better to edit your question to clarify a misunderstanding and regenerate the response than to reply with the correction, because replying leaves the mistake in context and risks the model getting stuck on it. Or it can be useful in some situations (if the interface allows it) to manually edit part of the LLM’s output to be more in line with what you want it to say before generating the rest.
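
        A minimal sketch of both tricks, using a hypothetical `generate(messages)` placeholder that stands in for whatever chat-completion call or UI you’re actually using:

        ```python
        # Sketch of the two quirk-workarounds described above. generate() is
        # a hypothetical placeholder, not a real library call.
        def generate(messages):
            # Swap in a real chat-completion call here.
            return "(model output would appear here)"

        messages = [{"role": "user", "content": "Summarize this nginx log: ..."}]
        reply = generate(messages)  # suppose the model misreads it as an Apache log

        # Trick 1: don't append a correction -- that leaves the wrong answer
        # in context, where the model risks getting stuck on its mistake:
        #   messages += [{"role": "assistant", "content": reply},
        #                {"role": "user", "content": "No, it's an nginx log."}]
        # Instead, edit the original question and regenerate from clean history:
        messages[0]["content"] = "Summarize this nginx log (nginx, not Apache): ..."
        reply = generate(messages)

        # Trick 2 (only if the interface allows it): prefill the start of the
        # assistant's reply, then let the model continue from your text:
        messages.append({"role": "assistant", "content": "The nginx access log shows"})
        reply = generate(messages)  # continuation picks up after the prefill
        ```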

    • moomoomoo309@programming.dev · 4 points · 17 hours ago

      Sure, who will it impersonate if you don’t? That’s where the bias comes in.

      And yes, they do need a guide, because the way chatbots behave is not intuitive or clear; there’s lots of weird emergent behavior in them that even experts don’t fully understand (see the articles today on OpenAI’s 4o sycophancy). Chatbots’ behavior looks obvious, and in many cases it is…until it isn’t. There are lots of edge cases.

      • nesc@lemmy.cafe · 2 points · 17 hours ago

        They will impersonate a ‘helpful assistant made by companyname’ (following hundreds of invisible lines of rules about what to say and when). Experts who have no incentive to understand, and who are at least partially in the cult: who would have guessed!
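
        Roughly what that looks like under the hood; a sketch with an invented system prompt, since the real ones are hidden:

        ```python
        # Illustrative only: the default persona usually comes from a hidden
        # system message prepended to the conversation. This prompt text is
        # invented, not any vendor's actual system prompt.
        messages = [
            {"role": "system", "content": (
                "You are a helpful assistant made by CompanyName. "
                "...hundreds more invisible lines about what to say and when..."
            )},
            {"role": "user", "content": "Who are you?"},
        ]
        # With no persona specified by the user, the model answers in
        # character as "a helpful assistant made by CompanyName".
        ```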

        • moomoomoo309@programming.dev · 1 point · 16 hours ago

          And you think there’s no notable bias in those rules, and that the edge cases I mentioned won’t be an issue, or what?

          You seem to have sidestepped what I said to rant about how OpenAI sucks, when that was only meant as an example of how even the people best informed about AI in the world right now don’t really understand it.

          • nesc@lemmy.cafe · 1 point · 14 hours ago

            That’s not ‘bias’, that’s intended behaviour; iirc Meta published some research on it. Returning to my initial point: viewing chat bots as a ‘white male who lacks self-awareness’ is dumb as fuck.

            As for not understanding, they are paid to not understand.